MLR (Memory, Learning and Recognition): A General Cognitive Model -- applied to Intelligent Robots and Systems Control

Aras Dargazany, Department of Electrical, Computer, and Biomedical Eng., Univ. Rhode Island, arasdar@uri.edu

Abstract: This paper introduces a new perspective on intelligent robot and systems control. The proposed cognitive model, Memory, Learning and Recognition (MLR), is an effort to bridge the gap between Robotics, AI, Cognitive Science, and Neuroscience. This gap currently prevents us from integrating the advancements and achievements of these four research fields, each of which is actively trying to define intelligence in either an application-based or a generic way. The MLR model defines intelligence more specifically, parametrically, and in more detail. It helps us create a general control model for robots and systems, independent of their application domains and platforms, since it is mainly based on the dataset provided for robot and system control. This paper proposes and introduces the concept and provides a first small-scale experimental proof of it. The proposed concept is also applicable to other platforms, in real time as well as in simulation.

I. INTRODUCTION

Intelligent control of robots and systems has been at the center of attention for decades, since Robotics is a very useful and applicable field of research in human life. After DARPA 2005 [1], [2], [3], there have been significant investments in Probabilistic Robotics [4] (i.e. the use of learning for robot control) and, more specifically, in Autonomous Navigation. Since then, we have also seen much industrial inclination toward applying this technology to products in order to create more intelligent machines, such as Google self-driving cars. The main drawback, however, has always been the lack of a common intelligent architecture, or the lack of a concrete and applicable definition of intelligence which would explain how it is possible to create an intelligent system.

Although there has been an enormous amount of financial investment, with many attempts to define intelligence from different fields of research, none of them can actually explain intelligence (human intelligence specifically) in a way that is applicable to other fields of research and thus connects them all; i.e. there is a lack of an operational definition. The current advancement in machine learning known as Deep Learning [5] has brought much hope and excitement about general AI applications, and perhaps this is the right way to approach intelligence and its definition [6]. An interesting review paper presents the complete historical trend of the deep learning approach in Neural Networks [7]. Another recent excitement in AI was brought by the Deep Reinforcement Learning method proposed by [8], which presents a deep learning approach to Atari games, a very simple and primitive simulated environment and robot (the so-called agent in the paper) for testing the control efficiency of deep learning. The latter approach claims to outperform human-level control of the simulated agent in the simulated environment of the Atari games; this is a very promising step toward understanding the true nature of intelligence and intelligent control of robots. This approach, DQN, has triggered some researchers to look into the new theory of Learn-See-Act instead of the previous Sense-Plan-Act [9], [10].

II. MLR (MEMORY, LEARNING AND RECOGNITION)

The proposed generic cognitive model suggests three main components for intelligent control: Memory, Learning and Recognition. These three components can also be categorized into Cognition (performed by recognition), Knowledge (stored in memory) and Intelligence (produced by learning); please be advised that this categorization is proposed by this paper.
Intelligence includes a combination of Memory and Learning. For a better understanding of the proposed model, the complete intelligent robot control architecture is illustrated in figure 1. The first two layers are shown in more detail on the right side of figure 1. As you can see, the first two layers prepare the robot to deliver the sensor data to the control module and to receive the controlling commands from the control module in the third layer. It is important to know that our proposed cognitive model (MLR) focuses only on the two highest layers, which define intelligent control. The first two layers are illustrated alongside them to give an overview of the complete robot control architecture and of how many layers are needed, in either a real or a simulated robot, to create a complete intelligent robot control. Figure 2 gives an overview of the cognitive model proposed in this paper. Based on this model, the control module corresponds to memory and recognition, whereas the intelligence module is composed of memory and learning. The interesting thing about this model is that these two processes are completely independent and can be performed on different computers and processors. In figures 3 and 4, you can see the model in more detail, as two completely independent processes. Having looked at the overall MLR architecture (the proposed cognitive model for intelligent control), we now go into each component in more detail and explain how they work together.

A. Memory

Memory creates the data space and storage needed for writing and reading data. Based on the memory limit, and therefore our database limit, we can conclude that the intelligence created by the learning module, by learning the data stored in memory, cannot go beyond the limits of the input and output data stored in our memory.
Basically, memory gives us the database, and based on the database we can begin to create our own data space using learning. It is also very important to know that memorizing and remembering things play an important role in human intelligence, so the same may apply to robots and intelligent systems as well. In the proposed concept, we start by writing input-output (IO) data to our memory (the hard disk of the computer). These IO data are sampled throughout time and consist of the sensor output I and the controller input O, as shown in (1):

IO = {IO_1, ..., IO_t} = {(I_1, O_1), ..., (I_t, O_t)}    (1)

The parameter t is the time index at which the data samples were acquired and recorded into memory. The recorded IO database in memory is entirely based on manual control of the robot. Therefore, it is highly suggested to create input-based output, i.e. sensor-based control data, which will later be used by the robot itself for intelligent control.

B. Learning

Having written and prepared the IO database, we use all the input data I for learning. Learning, in our work, means analyzing the input data space and decomposing it into eigenvalues to finally create the input data eigenspace. The main idea in learning is usually to create the data space in the first place; the data space is also known as the data feature space, or feature space. In order to create the feature space, we read all the input data from memory and build a matrix of all input data vectors at once, as in (2):

I = [I_1, ..., I_t]    (2)

Once we have the matrix I, we have mathematically created the input data feature space. This is a column-major matrix, meaning that the number of columns indicates the number of input data samples and the number of rows indicates the number of input data dimensions. Using PCA, we then create the data feature eigenspace, which is composed of the principal components. The principal components are the eigenvalues and eigenvectors of the input data matrix I.
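As a concrete illustration of the memory module, the following is a minimal sketch of time-indexed IO recording. The paper's actual implementation writes XML files via OpenCV's cv::FileStorage in C++; this NumPy version, including the function name record_io and the in-memory list, is purely illustrative.

```python
import numpy as np

def record_io(memory, t, sensor_I, control_O):
    """Append one time-indexed IO sample: sensor output I paired with the
    controller input O generated by the person manually driving the robot."""
    memory.append({"t": t,
                   "I": np.asarray(sensor_I, dtype=float),
                   "O": np.asarray(control_O, dtype=float)})

# toy run: three samples of a 2-D sensor reading and a (linear, angular) command
memory = []
for t in range(1, 4):
    record_io(memory, t, sensor_I=[0.1 * t, 0.2 * t], control_O=[1.0, 0.0])

print(len(memory), memory[0]["t"])   # 3 1
```

The key property mirrored here is that I and O share one time index t, so the later recognition step can look up the control command that belongs to any stored sensor sample.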
In order to do PCA, we first calculate µ, the mean of I, as in equation (3):

µ = (1/t) Σ_{i=1..t} I_i    (3)

Given µ, we translate all the input vectors to the origin, as in equation (4):

ϕ_i = I_i − µ    (4)

Once we have all ϕ, we can start the principal component analysis (PCA), or eigenvalue decomposition, as in equation (5), where Φ = [ϕ_1, ..., ϕ_t]:

(Φ Φᵀ) ν = λ ν    (5)

Using eigenvalue decomposition, we obtain the eigenvalues λ and eigenvectors ν. We should also not forget that, according to the Singular Value Decomposition (SVD), the singular values Λ are related to the eigenvalues λ as in equation (6):

Λ_i = √λ_i    (6)

Therefore, using equations (5) and (6), the singular values Λ can be measured from the eigenvalues λ, which helps us in the scaling used later for recognition (section II-C). Having calculated µ, Λ and ν, we write them back into memory. Basically, the learning module reads the recorded input data I from memory and generates the learned eigenspace information µ, Λ, ν, which can be used to reduce the data dimensionality considerably, down to a handful of principal components. In figure 5, the input and output of the learning module are shown specifically: reading I from memory and writing µ, Λ, ν back into memory.

C. Recognition

Having learned the input data as discussed in section II-B, this module reads the recorded IO data, from both sensors and controllers, along with the learned principal components µ, Λ, ν, as shown in figure 6. Once we read all the recorded data from memory with their time index IO_t, we can start comparing the new input data I_{t+1} with the learned input data in our database I. This comparison of I_{t+1} against I = {I_1, ..., I_t} should be done intelligently, i.e. using the principal components µ, Λ, ν (the intelligence parameters). Basically, in recognition we compare I_{t+1} with every single input datum in I, based on four different metrics, in order to find the most similar and least different recorded input in our learned database.
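The learning pipeline of section II-B, equations (3) through (6), can be sketched in a few lines of NumPy. The paper uses OpenCV's PCA engine in C++; the function and variable names below are illustrative only.

```python
import numpy as np

def learn_eigenspace(I, n_components):
    """Learning step on a column-major data matrix I (rows = dimensions,
    columns = samples): mean (3), centering (4), decomposition (5)-(6)."""
    mu = I.mean(axis=1, keepdims=True)   # equation (3): mean of all samples
    Phi = I - mu                         # equation (4): translate to the origin
    # SVD of Phi yields the eigenvectors of Phi @ Phi.T directly, avoiding the
    # huge (dims x dims) covariance matrix that image data would produce.
    U, S, _ = np.linalg.svd(Phi, full_matrices=False)
    nu = U[:, :n_components]             # kept principal directions
    Lam = S[:n_components]               # singular values Λ = sqrt(λ), eq. (6)
    return mu, Lam, nu

# toy stand-in for the image matrix: 6 dimensions, 5 samples
rng = np.random.default_rng(0)
I = rng.normal(size=(6, 5))
mu, Lam, nu = learn_eigenspace(I, n_components=2)
print(mu.shape, Lam.shape, nu.shape)   # (6, 1) (2,) (6, 2)
```

The triple (mu, Lam, nu) is exactly what the text describes writing back to memory for later use by the recognition module.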
In order to compare the new and recorded inputs, we first project them onto the learned eigenspace using µ, Λ, ν. The number of resulting principal components can at most equal the number of data samples t, but since we want to reduce the dimensionality of the data while still being able to reconstruct it accurately enough, we keep only n principal components, as in equation (7):

ν = {ν_1, ..., ν_n}    (7)

i.e. a fraction n/t of the eigenvectors is kept; n is also considerably smaller than the initial number of input data dimensions. In (8) and (9), the new input data I_{t+1} and the recorded input data I are both projected onto the learned eigenspace:

ω_{ϕ_{t+1}} = νᵀ (I_{t+1} − µ)    (8)

ω_ϕ = νᵀ Φ    (9)

and they can be reconstructed as shown in equation (10):

Î = ν ω_ϕ + µ    (10)

When we can completely reconstruct all the recorded input data I as well as the new input I_{t+1}, we can easily compare them through their projected values ω_ϕ vs ω_{ϕ_{t+1}} in the eigenspace, based on the principal components Λν = {Λ_1 ν_1, ..., Λ_n ν_n}, using four metrics as follows:

1) Minimum Squared Difference: This metric is based on the minimum distance between two vectors in the eigenspace, as shown in equation (11):

d(ω_1, ω_2) = Σ_{i=1..n} (ω_{1,i} − ω_{2,i})²    (11)
This metric is also known as the minimum square error; its minimum is 0, meaning the two vectors have no difference in this space, and its maximum can be any value.

2) Scaled Minimum Squared Difference: This metric is also based on the minimum distance, but it additionally applies the importance of every single principal component. Since Λ_i is basically the maximum of ω on each component ν_i, using Λ for scaling can make the comparison between two vectors fairer and more accurate (at least theoretically), as shown in equation (12):

d_Λ(ω_1, ω_2) = Σ_{i=1..n} ((ω_{1,i} − ω_{2,i}) / Λ_i)²    (12)

3) Maximum Normalized Cross Similarity: This metric measures the angle between two vectors in the space; the smaller the angle, the more similar the two vectors are. It is also known as normalized cross correlation, which is basically the dot (inner) product of the two vectors, as shown in equation (13):

cos θ = (ω_1 · ω_2) / (‖ω_1‖ ‖ω_2‖)    (13)

The result of this metric is always 0 ≤ cos θ ≤ 1, which makes it work more like a probability measurement.

4) Scaled Maximum Cross Similarity: This metric is based on the previous one, with two main differences: it is not normalized, and it is scaled using Λ in exactly the same way as the scaled minimum squared difference, as shown in equation (14):

s_Λ(ω_1, ω_2) = Σ_{i=1..n} (ω_{1,i} / Λ_i)(ω_{2,i} / Λ_i)    (14)

Using all four metrics, we can almost precisely find the most similar and least different input data among I = {I_1, ..., I_t}.

III. EXPERIMENTAL RESULTS

In order to speed up the implementation of the proposed MLR model, we decided to use simulation. The simulated environment and simulated robot are the default, well-maintained and well-documented project in Finroc [12]. For the experimental setup, we use Ubuntu 14.04 (64 bit) on the Linux kernel; as explained above, Finroc is our framework, and it uses its own simulation environment, known as SimVis3D.
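The recognition step of section II-C, projection followed by the four metrics, can be sketched in NumPy as follows. The exact per-component scaling in metrics 2 and 4 is my reading of the text, and all names are illustrative; the paper's implementation is C++/OpenCV.

```python
import numpy as np

def recognize(I_new, I_db, O_db, mu, Lam, nu):
    """Project the new input and the stored inputs onto the learned eigenspace
    (mu, Lam, nu), score them with the four metrics of section II-C, and return
    the control command of the best match (metric 1 decides in this sketch)."""
    W = nu.T @ (I_db - mu)                          # eq. (9): projected database
    w = nu.T @ (np.asarray(I_new).ravel() - mu.ravel())  # eq. (8): new input
    d = W - w[:, None]
    msd = (d ** 2).sum(axis=0)                      # 1) min squared difference
    smsd = ((d / Lam[:, None]) ** 2).sum(axis=0)    # 2) scaled variant
    ncs = (W * w[:, None]).sum(axis=0) / (
        np.linalg.norm(W, axis=0) * np.linalg.norm(w) + 1e-12)  # 3) cosine
    scs = (W * w[:, None] / Lam[:, None] ** 2).sum(axis=0)      # 4) scaled similarity
    best = int(np.argmin(msd))
    return O_db[:, best], best, (msd, smsd, ncs, scs)

# toy database: 5-D inputs, 4 samples, each with a (linear, angular) velocity
rng = np.random.default_rng(1)
I_db = rng.normal(size=(5, 4))
O_db = np.array([[0.5, 1.0, -0.2, 0.0],
                 [0.0, 0.3,  0.7, 0.1]])
mu = I_db.mean(axis=1, keepdims=True)
U, S, _ = np.linalg.svd(I_db - mu, full_matrices=False)
nu, Lam = U[:, :2], S[:2]

O, best, _ = recognize(I_db[:, 2], I_db, O_db, mu, Lam, nu)
print(best)   # 2 : the new input is a copy of sample 2, so its command is returned
```

Note that the comparison happens entirely in the low-dimensional projected space, which is what makes per-sample comparison against the whole database feasible.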
The currently available open-source project in Finroc is called Forklift, and it has been used as the simulated experimental platform for testing the performance of our MLR model. We chose our experimental setup so that it shows the exact role of each module and how the modules work together.

A. Recording the dataset into memory

Memory is a very important module in the proposed MLR model, since we first have to manage our memory in the experiments by writing sensor data into it (to be used later as I). These sensor data are camera images (at the highest possible resolution in the simulation, 900 × 700, in RGB), distance data and localization data. In this experiment, we decided to use only the camera images, to show the power of the MLR model in dealing with high-dimensional input data and to allow a clearer understanding of the model's performance in generating the controlling commands for the robot. We also write the controlling data O at the same time as the sensor data (the controller data are generated initially by the data recorder, i.e. the person who manually controls the robot to record the data). These controlling commands O are recorded along with I at the same time, giving us the IO dataset in memory. Writing and reading data from memory has been illustrated earlier, in the figures of sections II-A, II-B and II-C. Using the Finroc GUI (known as FinGUI), we can manually control the robot with a joystick to explore the default simulated environment, as shown in figure 7. At every specific date, and for a specific length (duration) of time on that date, we record exactly one dataset IO = {IO_1, ..., IO_t}. The date of recording is used as the directory into which all recorded data are written, and during recording an index is assigned to all of the data.
The date of recording and the index of the recorded data are both stored in an XML file, shown in figure 8. As you can see in this figure, the first row is the time stamp label string, which is the exact date of recording. The second row is the data index, which is assigned to the recorded data during recording to keep corresponding input and output data together; the data index also depends on the duration of the recording. In this experiment, we had two different recordings, which means two different time stamps, i.e. two completely different datasets. Each recording took about 10 minutes at a rate of 1 sample per second. This means we generated 1 × 10 × 60 = 600 indexed samples in each recording, i.e. IO = {IO_1, ..., IO_600}. Therefore, in one recording we have a total of 600 camera images, at a resolution of 900 × 700 in RGB, recorded in our memory. These camera images are used for learning and also for recognition. In the XML file of figure 8, the third row is the name of the camera image; the image is stored in the same folder as the XML file. The name of the (parent) folder is the same as the time stamp label, i.e. the date of recording. Also in the XML file, after the camera image there are the distance data, more specifically 8 infrared distance values, followed by the localization data, composed of the pose X, Y and Yaw. After the localization data come the controlling data values, generated at the same time as these sensor values. The controller values are the desired linear and angular velocity of the robot, and the position of the robot's fork. As you can see, the OpenCV [11] cv::FileStorage framework has been used for storing the data into memory.

Fig. 7. A snapshot taken from the GUI used to manually control the robot and record the IO dataset.
The Boost filesystem library [13] is also used to search for files and store their paths, so that the files can be read from the directory based on the XML file of figure 8. It is also important to notice that the sensor and controller data depend on the person who manually controls the robot and records the data. These recorded data ultimately create the robot's intelligence, which means that it is highly dependent on the intelligence of the data recorder. In figure 7, the GUI for manually controlling the simulated robot and recording the data is shown, with the controller values on top of the joystick, the distance values, localization values and camera image to the left of the joystick, and, to the right of the joystick, the fork position slider used to manipulate the objects and obstacles in the environment.

B. Learning the recorded dataset

Having recorded the data in memory with labels including the date of recording and the index of the recorded data, the learning process can begin. The tricky part here is choosing a threshold for the eigenvalues, i.e. the maximum number of eigenvalues to keep, without hurting the accuracy of the work; keeping too many also makes the recognition process very slow. In this experiment, we use only the camera images, at the highest possible resolution in the Finroc simulation framework SimVis3D, in order to prove the concept on very high-dimensional input data. Having read the file path of each camera image, we first load the images into OpenCV matrices and then push them one by one into a vector. Each matrix (Mat) is the image data structure used in OpenCV [11], and the vector is the standard library vector class, which is a very efficient data container. Once all the camera images are read and collected, we convert them from RGB to grayscale.
Therefore, we change them all from RGB (3-channel) to grayscale (1-channel) and also make sure they are normalized, meaning the pixel values are between 0 and 255. I call this process scaling the data; it is mainly composed of converting RGB to grayscale and normalizing the data to 0-255. Having scaled the data, we vectorize all the images, i.e. change each image into a single column. As a result, we change the vector into a single Mat in which each column holds one image; the images are stored as column data. This matrix (Mat) is called I = {I_1, ..., I_t}, with t = 600 in this case, as explained in section III-A. Having created I, the matrix of all input data (images, in this case), the data space can be created and learned using the PCA engine in OpenCV [11]. There are plenty of implementations of PCA, SVD and eigenvalue decomposition (EVD), which may be implemented in different ways. It would be an interesting research idea to compare their results and see whether they agree, since some, like the OpenCV PCA engine, support single (float) precision, while other implementations, such as those in Matlab and Python, generate results in double precision. Using the PCA engine, we calculate the mean µ, Λ and ν, all shown in figure 9 in order of the eigenvalues. The eigenvalues themselves are shown in figure 11, and the eigenvectors corresponding to the smaller eigenvalues are shown in figure 10, to give a better idea of how the eigenvectors change in order of their corresponding eigenvalues. In our experiment, we chose the first five principal components for the data projections. This means we reduce the dimensionality of the data from 630,000 (900 × 700) dimensions (the number of pixels) to only 5 without losing precision, which is an enormous amount of compression, processing power saving and memory saving.
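The preprocessing described above, RGB to grayscale, scaling, and stacking each image as one column, can be sketched as follows. This is a NumPy stand-in for the OpenCV/C++ pipeline in the paper; the function name and the luminance weights are illustrative assumptions.

```python
import numpy as np

def images_to_columns(images):
    """Convert a list of HxWx3 RGB images into a column-major data matrix I:
    grayscale each image, flatten it, and stack one image per column
    (rows = pixel dimensions, columns = samples)."""
    cols = []
    for img in images:
        # standard luminance weights; pixel values stay within 0-255
        gray = img.astype(float) @ np.array([0.299, 0.587, 0.114])
        cols.append(gray.reshape(-1))   # vectorize: one image -> one column
    return np.stack(cols, axis=1)

# toy stand-in for the 600 recorded 900x700 frames: five 4x3 RGB images
rng = np.random.default_rng(0)
imgs = [rng.integers(0, 256, size=(4, 3, 3)) for _ in range(5)]
I = images_to_columns(imgs)
print(I.shape)   # (12, 5): 4*3 = 12 pixel dimensions, 5 samples
```

With the real data this matrix would be 630,000 × 600, which is why the SVD-based formulation that avoids the full covariance matrix matters in practice.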
Having chosen the number of principal components ν to keep, based on the largest eigenvalues λ, we write the learned PCA back to memory; more specifically, we write µ, λ and ν. Writing µ, ν and λ = Λ² to memory is for use by the recognition module.

C. Recognition of new input data

Having learned the input camera images, as shown in figure 6, we first read the recorded data IO from memory, using the aforementioned XML files (figure 8) and the Boost library functionality [13], together with the learned model µ, Λ, ν, and then start comparing the new input data I_{t+1} with the input data I one by one. In order to start the intelligent robot control, we first start the simulation, then enable the camera image flow, and load the recognizer with the learned model and the file paths of all the recorded XML files (only the file paths). The robot then performs online recognition, as shown in figure 12, and finally the output for controlling the robot is generated, namely the linear and angular velocity corresponding to the most similar and least different input I in memory, compared to the new input I_{t+1}, as shown in figure 13.

Fig. 11. The eigenvalues, shown in order of their values from top to bottom; the descending trend is clearly visible.
Fig. 12. How the recognizer finds the best match for the input and then generates its corresponding output.
Fig. 13. Having found the best match, the recognizer sends the corresponding output of the best input in the database to the controller input, in order to control the robot's action and next move.

IV. CONCLUSIONS AND FUTURE WORK

The goal of this paper was to introduce a new conceptual cognitive model which may redefine intelligence, add parameters to intelligence, and redefine control as a cognition process.
The question addressed is how we can generate a specific, detailed intelligence that has a strong mathematical foundation and is also general enough to be applicable to any input data, for generating any kind of output data for control. This paper applies the cognitive model (MLR) in robotics for intelligent control, but later we may also be able to apply it to any other sensor data for any other application. The discussed MLR model is a cognitive process which should be applicable to any kind of robot platform, data structure or data type. For this reason, there are plenty of ideas for applying this model to different applications and different sensor data. More specifically, in the presented experimental results we still have the distance and localization input data shown in figure 8; it would be a very interesting quest to apply MLR to them as well, and also to their combination, i.e. to apply MLR to all of the sensor data fused together. There is plenty of room and flexibility in putting this concept into an experiment, but we should also note that while learning can be very time-consuming, recognition should be almost real-time; we need to keep that in mind as well. The main strength of the current model is its ability to handle high-dimensional data in a way that reduces the dimensionality without reducing the accuracy of the reconstruction, which is a very powerful tool for handling and managing large datasets of different kinds. That is why one important piece of future work will be to apply MLR to a huge database of sensor data and use it for intelligent control.
Feeling Younger on Active Summer Days? On the Interplay of Behavioral and Environmental Factors With Day-to-Day Variability in Subjective Age

Abstract

Background and Objectives: Subjective age, that is, how old people feel in relation to their chronological age, has mostly been investigated from a macro-longitudinal, lifespan point of view and in relation to major developmental outcomes. Recent evidence also shows considerable intraindividual variation in micro-longitudinal studies, as well as relations to everyday psychological correlates such as stress or affect, but findings on the interplay with physical activity or sleep as behavioral factors, and with environmental factors such as weather conditions, are scarce.

Research Design and Methods: We examined data from 80 recently retired individuals aged 59-76 years (M = 67.03 years, 59% women) observed across 21 days. Daily diary-based assessments of subjective age, stress, affect, and sleep quality, alongside physical activity measured via Fitbit (steps, moderate-to-vigorous physical activity) and daily hours of sunshine, were collected and analyzed using multilevel modeling.

Results: Forty-four percent of the overall variance in subjective age was due to intraindividual variation, demonstrating considerable fluctuation. Affect explained the largest share of day-to-day fluctuations in subjective age, followed by stress and steps, whereas sunshine duration explained the largest share of variance in interindividual differences.

Discussion and Implications: In our daily diary design, subjective age was most strongly related to self-reported affect as a psychological correlate. However, we also found clear associations with objective data on daily steps and weather. Hence, our study contributes to contextualizing and understanding variations in subjective age in everyday life.
Innovation in Aging, 2024, Vol. 8, No. 8

Translational Significance: Current research on the daily variability of subjective age primarily focuses on psychological predictors such as affect. In our daily diary study, we were able to show that behavioral factors such as daily steps also play a role in explaining variations in subjective age. If light physical activity and mobility can be established as robust predictors of subjective aging, starting points for interventions would exist.

Subjective age, also referred to as felt age, is an established construct in aging psychology that has been linked to central developmental outcomes like control beliefs (Infurna et al., 2010), cognitive performance (Stephan et al., 2016), and health, including objective indicators such as biomarker profiles or longevity (Kotter-Grühn et al., 2009; Thyagarajan et al., 2019; Westerhof et al., 2023). Results from these larger cross-sectional and macro-longitudinal studies focus on lifespan development and approach subjective age as a relatively stable, trait-like construct, which is assumed to be part of an aging individual's identity and self-concept (Dutt et al., 2018). However, a growing body of research has explored the state-like aspect of subjective age in experimental designs (e.g., Dutt & Wahl, 2017; for an overview, see Wahl & Kornadt, 2022) and micro-longitudinal studies (e.g., Kornadt et al., 2022). Overall, micro-longitudinal studies demonstrated that subjective age varies considerably on a day-to-day basis. In the works of Kotter-Grühn et al. (2015) and Bellingtier et al. (2017), which followed N = 43 older U.S. American adults (age range: 60-96 years) with an 8-day diary survey, 23% of the variance in subjective age was intraindividual. Comparable proportions (25% and 27%) could be attributed to within-person variation in the analyses of Kornadt et al. (2022, 2021), drawing on subsamples (N = 170 and N = 154, age range: 66-90 years) from the German EMIL study ("Emotional Reactivity and Regulation in Old Age") with a maximum of 32 assessments over 1 week. In a larger data set from Israel (Segel-Karpas et al., 2022; N = 334, age range: 30-90 years), 49% within-person variance in subjective age was reported over an observation period of 14 days.

Theoretical Considerations: Processes of Anchoring and Adjusting

Such everyday variation in subjective age is assumed to be rooted in diverse everyday experiences. Theoretical work in this realm (Hughes & Touron, 2021; Montepare, 2009) states that subjective age varies based on proximal reference points (e.g., physical or interpersonal age markers, but also historic or normative markers), which can be one-time events such as retirement, but in particular repeated aging experiences in everyday life (Miche et al., 2014). These proximal reference points are then evaluated and reflected upon against stable developmental models (e.g., "feeling tired and not being able to go for a walk are part of being old"). Hence, the age an individual feels is assumed to be subject to ongoing adjustment. Therefore, following the more contextual approach to subjective aging proposed by Hughes and Touron (2021), we aimed to address diverse predictors from daily life to better understand the complexity surrounding variations in subjective age.

Previous Research: Everyday Psychological Correlates of Subjective Age

Earlier research on everyday variation in subjective age has mostly focused on psychological correlates such as stress and affective mood. In the diary studies cited earlier, participants felt older on days with higher-than-average stress, negative affect, and depressive symptoms (Bellingtier et al., 2017; Kotter-Grühn et al., 2015; Segel-Karpas et al., 2022). In the experimental study conducted by Dutt et al.
(2017), the induction of sad mood via both music and text led to an older subjective age. Likewise, preceding stress, assessed by self-report and salivary cortisol levels, predicted an older subjective age in a study by Kornadt et al. (2022), but subjective age did not predict momentary variability in stress, and vice versa. However, aside from psychological factors, research on behavioral and environmental factors that might further explain everyday variation in subjective age is scarce. A simultaneous investigation of more diverse correlates should help in understanding what exactly drives everyday variation in subjective age and, as such, the more precise formation and adjustment processes behind subjective age. In the long run, such micro-longitudinal analyses may, in turn, help inform macro-longitudinal research and interventions targeting the crucial impact of subjective age on various indicators of health and survival (e.g., Westerhof et al., 2023).

Physical Activity and Sleep as an Everyday Behavioral Predictor of Subjective Age?
Apart from psychological correlates, everyday behaviors, particularly health behaviors, may inform an individual about capabilities and limitations and as such constitute aging experiences. Regarding health behaviors, sleep problems and physical inactivity are both highly prevalent among older adults (Hallal et al., 2012; Jaussent et al., 2011) and likely serve as recurrent age markers. For example, being physically active as an older adult should go along with feelings of being physically fit/strong and perceiving one's own body as competent; both have been linked to a younger subjective age (Caudroit et al., 2012; Montepare, 2006; Stephan et al., 2013, 2020), but this has not been investigated within an everyday setting. In an attempt to differentiate between levels of intensity, we applied daily step counts as a general indicator of mobility and mainly light physical activity, and minutes of moderate-to-vigorous physical activity as an indicator of more intense activities.

Sleep difficulties have been linked to various adverse mental and physical health outcomes as well as lower cognitive functioning among older adults (Cavuoto et al., 2016; Goldman et al., 2008; Hall et al., 2015; Magee et al., 2011). Poor sleep itself might be interpreted as a marker of old age, or it may elicit other negative experiences the following day (e.g., memory difficulties, fatigue) that might be interpreted as such and lead to an older subjective age. In the past, only four studies have linked self-reported sleep measures to subjective age, with mixed findings, and daily diary designs are missing to the best of our knowledge. Stephan et al. (2017) found that subjective age was a salient predictor of poor sleep quality beyond chronological age across three large U.S. surveys (Midlife in the United States Study, Health and Retirement Study, and National Health and Aging Trends Study). Data from a population-based Korean study (Yoon et al., 2023) indicated that a higher subjective age was associated with lower sleep quality, but only among older women. Sabatini et al.
(2022) reported that lower sleep quality and shorter subjective sleep duration were related to higher awareness of negative age-related change (see Diehl & Wahl, 2010), but relations with subjective age were negligible. Very recently, Balter and Axelsson (2024) found in a cross-sectional study (age range: 18-70 years) that both the number of days with insufficient sleep in the last month and the level of sleepiness were associated with a higher subjective age relative to calendar age. To better understand the interplay between everyday health behaviors and subjective age, we thus investigated physical activity and self-reported sleep quality as everyday behavioral correlates.

Weather Conditions as an Environmental Predictor of Subjective Age

Similar to behavioral correlates, everyday correlates of subjective age that lie outside of the individual have rarely been investigated in past research. One exception is the work by Goecke and Kunze (2020), which shows that negative work events account for everyday variation in subjective age. Following the call to contextualize psychological aging research (Hughes & Touron, 2021; Neupert & Bellingtier, 2022; Wahl & Gerstorf, 2018), we aimed to include weather conditions as a highly salient environmental aspect of older individuals' lives, also tied in with our proposed psychological and behavioral correlates. Weather conditions have been shown to profoundly shape the lives of older adults, for example, by affecting their physical activity levels, time out of home, participation in society, and affective state (Klimek et al., 2022; Kööts et al., 2011; Petersen et al., 2015; Wu et al., 2017). Sunshine duration in particular may lead to younger felt ages via all these pathways, but in particular via higher out-of-home activity and positive affect.
Research Aims and Hypotheses

Our overall research aim was to acquire a more profound understanding of subjective age's everyday variation and covariation with diverse, but potentially interwoven, psychological, behavioral, and environmental aspects of everyday life. We aimed to address the population of healthy older adults in the "third age" (i.e., from retirement up to an age of approximately 80 years; see Baltes & Smith, 2003) and to extend the observation period (i.e., number of days sampled) in comparison to previous studies. Specifically, we sampled data from up to 21 days spaced over 5-6 weeks. Compared to previous studies targeting everyday subjective age, this longer observation period allowed us to capture more heterogeneous subjective aging experiences (Kotter-Grühn et al., 2015) as well as sufficient intraindividual variability in psychological, behavioral, and environmental variables, which are likely to fluctuate with different intensity and on different time scales.

Using these more extensive data, we first aimed to replicate existing findings showing considerable intraindividual variability in subjective age and its meaningful relationships with stress and affective mood as psychological correlates. Second, we aimed to expand cross-sectional and long-term longitudinal relationships between physical activity, sleep, and subjective age by investigating them in short-term intervals. Specifically, we expected a higher number of daily steps, a higher amount of moderate-to-vigorous activity, and better self-reported sleep quality to be related to a younger daily subjective age. Third, we aimed to explore the impact of daily sunshine duration as a salient environmental factor and expected individuals to feel younger on sunnier days.
Recruitment and Sample

The present analyses are based on data from the ActiveAge project, a physical activity intervention study for retired adults aged 60+ who exhibit low physical activity levels and intend to increase their physical activity (Schmidt et al., 2022). Ethical approval was obtained from the ethics commission of the Faculty of Behavioural and Cultural Studies at Heidelberg University.

Participants were recruited via flyers and newspaper articles in the Rhine-Neckar Metropolitan Region in Germany throughout the year 2017. They did not receive monetary compensation but were offered feedback on their physical activity and the chance of winning an activity tracker. A total of 135 older adults expressed interest in participation and were screened via telephone by trained scientific staff based on the following inclusion criteria: (1) retired or working less than 10 hr/week (including voluntary work), and (2) no or only very low levels of physical activity. Moreover, the following exclusion criteria were applied: (1) severe functional limitations, acute pain, or chronic conditions preventing physical activity; (2) severe visual impairments; (3) acute depressive episode; (4) severe cognitive impairment; and (5) prior experience with activity trackers. Fifty individuals did not meet the inclusion criteria or fulfilled one of the exclusion criteria, and five dropped out during the first week due to the death or illness of a close other.
The final sample consisted of N = 80 retired individuals aged 59-76 years (M = 67.03, standard deviation [SD] = 3.97). Fifty-nine percent of the sample were women; 63% were married, 12% widowed, and 25% divorced, separated, or single. The sample's education was above the population average (M = 12.05 years of schooling, SD = 2.15). Participants rated their own health mainly as good or very good (M = 2.92, SD = 0.65) on a scale ranging from 1 (excellent) to 5 (bad) (Morfeld et al., 2011). Informed consent was obtained from all participants. More information on background characteristics, study aims, and findings not in the focus of the present analyses can be found in Schmidt et al. (2022).

Procedure

The ActiveAge study was designed as a pre-post physical activity intervention without a control group and was mainly based on monitoring, feedback, and goal setting as behavior change techniques (Marques et al., 2023). The study included a baseline questionnaire (T0; online or paper-pencil) and three personal standardized interviews (T1-T3) that were each followed by 7-day diary periods (paper-pencil) alongside physical activity measurement. T1 and the first week of measurement were immediately followed by T2 and the second measurement week, whereas a break of 2 weeks (in some cases 3 weeks due to scheduling problems for the participants) separated the second measurement period and T3. Hence, diaries were to be kept for 21 days within an average scope of 5-6 weeks as the study duration. Participants kept diaries for an average of M = 20.13 days (SD = 3.73). Out of initially 1,610 daily assessments, 122 had to be excluded due to missing subjective age. A further 205 assessments were excluded due to invalid data on physical activity. Altogether, 1,283 assessments were available from 80 individuals (M = 15.35, SD = 3.84 per person).

Daily diary variables

Participants were asked to complete their diary every evening and to answer questions on subjective age, stress, affect, and sleep.
For subjective age, a proportional discrepancy score was calculated following Rubin and Berntsen (2006): We subtracted chronological age from the answer to the question "All in all, how old did you feel today?" (subjective age) and divided the result by chronological age. Five values were identified as extreme values (>M + 3 SD) and excluded from analyses. After multiplying the respective proportional discrepancy by 100 for reasons of clarity, a score of −8.0 would indicate feeling 8% younger than one actually is.

Perceived stress was assessed with one item asking participants to rate their subjective stress level on the respective day on a visual analogue scale between 0 (not stressed at all) and 100 (totally stressed) (Lesage & Berjot, 2011).

Affective mood was measured with the two-item valence subscale of a short mood scale that has proven to be reliable but also sensitive to change in measurement burst designs (Wilhelm & Schoebi, 2007). The two mood items were answered on bipolar scales ranging from negative to positive poles, that is, 0 (discontent) to 6 (content) and 0 (unwell) to 6 (well). The two items were averaged.

Sleep quality was assessed with a single item ("How well did you sleep last night?") based on the Pittsburgh Sleep Quality Index (Buysse et al., 1989) on a scale from 1 (very poorly) to 7 (very well).
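The proportional discrepancy score described above can be sketched as a small function; the function name and the example values are illustrative, not from the study:

```python
def subjective_age_score(felt_age: float, chronological_age: float) -> float:
    """Proportional discrepancy score (Rubin & Berntsen, 2006):
    (felt age - chronological age) / chronological age, multiplied by 100.
    Negative values indicate feeling younger than one's chronological age."""
    return (felt_age - chronological_age) / chronological_age * 100

# A hypothetical 70-year-old who reports feeling 64.4 today scores -8.0,
# i.e., feeling 8% younger than they actually are.
print(round(subjective_age_score(64.4, 70), 1))  # -> -8.0
```

Because the score is proportional, the same absolute discrepancy in felt age counts for less at higher chronological ages.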
Physical activity

For measuring physical activity, the wrist-worn, commercially available activity tracker Fitbit Charge HR (Fitbit, Inc., San Francisco, CA) was used. For privacy reasons, and in order not to exclude older adults without a smartphone, we created e-mail aliases and pseudonymous accounts that were not connected to the mobile Fitbit app. As a first indicator of (mainly) light physical activity and mobility, we used step counts, and as a second indicator, we used minutes spent in moderate-to-vigorous physical activity (MVPA). The Fitbit has performed well in measuring step counts and MVPA in earlier research (e.g., Paul et al., 2015) and in our own pilot study, where participant compliance and device performance were very satisfactory (Schmidt, Gabrian et al., 2018). We collected information on compliance via self-reports on nonwearing periods in the diaries. Two hundred and five invalid days occurred, especially due to insufficient wearing times (e.g., forgetting to recharge), difficulties of the device in specific circumstances (e.g., motorcycling), and technical errors. To ease interpretation, the number of steps was divided by 1,000 for the multilevel regression analyses.

Weather data

The weather data stemmed from the local weather station in Mannheim, Germany. Sunshine was measured in hours per day. Average sunshine duration over the assessment period ranged from 0.56 to 9.99 hr/day, clearly reflecting the year-round assessment.
Covariates

As our main interest lies in intraindividual associations, we applied interindividual covariates sparsely and only focused on chronological age and sex (0 = women, 1 = men). As an intraindividual covariate, we accounted for the passing of time by entering the respective diary day, starting with Day 0 and amounting to Day 20. The passing of time hereby serves as a proxy for potential intervention effects associated with the pre-post physical activity intervention design. Other control variables derived from the original intervention study, such as measurement week or study group, were also tested in earlier analyses but did not reveal additional effects.

Data Analysis

As the single measurement points (days) were nested within individuals, we used multilevel modeling to account for effects on the level of the individual (Level 2) and effects on the level of intraindividual and day-to-day dynamics (Level 1). We entered age and sex as covariates on Level 2 and modeled all other variables on Level 1. All predictor variables and covariates except time and sex were centered around the grand mean. We started with Model 1, where we built a random-intercept model (allowing for an individual-specific mean in subjective age over the assessment period) and entered all covariates. For Models 2-7, each everyday predictor variable (stress, affect, MVPA, steps, sleep quality, and sunshine duration) was added as a single predictor. By doing so, we determined the strength of each of the six correlates and could determine whether they explained variance on Level 2 (more stable tendencies in subjective age) or Level 1 (intraindividual, everyday dynamics in subjective age). Model 2 focuses on stress, Model 3 on affect, Model 4 on MVPA, Model 5 on steps, Model 6 on sleep quality, and Model 7 on sunshine. Model 8 finally includes all covariates and predictors on Level 1 and Level 2.
The models were thus based on the equation

SA_ti = β_0 + β_j C_(j)i + β_k P_(k)ti + u_0i,

where SA_ti denotes the subjective age of individual i at time t, β_0 denotes the sample mean, and u_0i denotes the individual-specific deviation from that mean (random intercept). The term β_j C_(j)i applies to Level-2 predictors j, and the term β_k P_(k)ti applies to all intraindividual predictor variables k. Due to the relatively high homogeneity of the sample (i.e., less Level-2 variance than Level-1 variance; see Table 1), all daily variables were entered on Level 1, combining Level-2 and Level-1 variance for the main analyses. In Supplementary Table 1, analyses are presented with separated Level-2 and Level-1 variance.

Descriptives

Table 1 depicts means, inter- and intraindividual SDs, intraclass correlations (ICCs), as well as inter- and intraindividual correlations. Across the 21 days, participants felt on average 8.6% younger than their chronological age. They showed high levels of physical activity, with an average of 10,954 steps and 40 min of daily MVPA. On a bivariate level, the only significant interindividual correlate of subjective age was hours of sunshine. Thus, individuals who participated during sunnier periods felt younger on average. Intraindividually, an older subjective age occurred on days on which individuals reported more stress, worse mood, and worse sleep quality, as well as on days on which less MVPA, fewer steps, and fewer hours of sunshine were measured. All everyday study variables displayed considerable amounts of intraindividual variance, with shares between 44% (subjective age) and 78% (hours of sunshine) of the whole variance.

Does Subjective Age Fluctuate Within Short Time Frames?
Subjective age fluctuated considerably within the 21 diary days and 5 weeks of investigation. With an ICC of 0.56, 44% of the overall variance in subjective age was intraindividual (Table 1). The intraindividual variability of subjective age over the diary period is depicted in Figure 1. The intraindividual SD (M = 5.79) ranged from 0 to 13.99. In some of the models (see Table 2), there was a positive time trend in subjective age, indicating that individuals felt slightly older the longer they participated in the study.

Stress and Affective Mood as Psychological Correlates of Subjective Age

In Table 2, Models 2 and 3 indicate that both stress and affective mood were significantly related to subjective age. In comparison to a model controlled for day, age, and sex, stress explained 2% of variance on the intraindividual level and 5% of variance on the interindividual level. This indicates that individuals who felt more stressed on average also felt older on average. Additionally, individuals felt younger on days on which they felt less stressed than usual. Daily mood explained 21% of variance on the intraindividual level and 4% of variance on the interindividual level, indicating a strong relationship between daily affect and daily subjective age; individuals felt younger in particular on days with a better mood. The associations held even when all other predictors were entered into Model 8. Stress and mood could thus be replicated as everyday correlates of subjective age.
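The variance decomposition underlying the reported ICC can be illustrated with simulated diary data under a random-intercept model. The variance components below (SDs of 6.5 between persons and 5.8 within persons) are hypothetical values chosen to roughly reproduce the reported ICC of about 0.56, not the study's actual estimates:

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulate diary data under the random-intercept model
# SA_ti = beta0 + u_0i + e_ti  (hypothetical variance components)
n_persons, n_days = 80, 21
beta0 = -8.6                                   # grand mean (feeling ~8.6% younger)
u = rng.normal(0, 6.5, n_persons)              # Level-2: person-specific intercepts
e = rng.normal(0, 5.8, (n_persons, n_days))    # Level-1: day-to-day fluctuations
sa = beta0 + u[:, None] + e

# One-way ANOVA-style estimates of the variance components and the ICC
person_means = sa.mean(axis=1)
within_var = sa.var(axis=1, ddof=1).mean()                     # Level-1 variance
between_var = person_means.var(ddof=1) - within_var / n_days   # Level-2 variance
icc = between_var / (between_var + within_var)
print(f"ICC ~ {icc:.2f}; intraindividual share ~ {1 - icc:.0%}")
```

An ICC near 0.56 means that roughly 44% of the total variance lies within persons, which is the share the day-level predictors in Models 2-8 can, at most, explain.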
Physical Activity and Sleep as Behavioral Correlates of Subjective Age

In Table 2, Models 4-6 indicate that MVPA, steps, and sleep quality were significantly related to subjective age. MVPA and steps explained 1% and 4% of variance on the intraindividual level, respectively, indicating that individuals felt younger on days on which they were relatively more physically active or mobile. Specifically, in comparison to days with absolutely no physical activity and mobility, individuals would feel around 2.1% younger when they were active for 2 hr and around 3.0% younger after taking 10,000 steps. Sleep quality explained 1% of variance on the interindividual level, indicating that individuals who reported better sleep on average would also report a younger subjective age on average. The difference between sleeping very poorly versus very well would hereby amount to feeling around 2.6% younger. Despite significant effects, the variance explanation for MVPA and sleep quality was low, and some variance components even increased in size. This likely indicates the presence of random slopes, meaning that the relationships of MVPA and of sleep quality with subjective age differed from individual to individual. As soon as the other predictors were entered into Model 8, only steps remained a significant predictor. All unstandardized effects (for MVPA, steps, and sleep quality) clearly decreased in size, which suggests that the other predictors, in particular stress and affective mood, might work as mediators: On less active days and after nights with bad sleep, individuals might have felt more stressed and in a worse mood, which would then again explain older subjective ages.
Sunshine Duration as an Environmental Correlate of Subjective Age

In the single-predictor model (Table 2, Model 7), longer sunshine duration was related to a lower subjective age (b = −0.11), however explaining only 1% of variance on the interindividual level. This contrasts with the interindividual correlation of r = 0.25 found in Table 1, which would indicate a variance explanation of 6% on the interindividual level. One possible explanation is that inter- and intraindividual effects of sunshine duration overlapped in Table 2. In Supplementary Table 1, we therefore disentangled the effects of average sunshine duration across the assessment period (Level-2 predictor) and intraindividual deviations from this average (Level-1 predictor). When doing so, effects were indeed larger, with b = −0.78, standard error (SE) = 0.35, p = .030, and 5% of variance explained on the interindividual level, and b = −0.13, SE = 0.05, p = .030, and 0% of variance explained on the intraindividual level (overall variance explanation was 3%). Following these results, individuals felt approximately 7.4% younger when the sun was shining for the maximum average of 10 hr/day compared with the minimum average of 0.5 hr. When the other predictors were entered into the model (Table 2, Model 8), the overall effect of sunshine duration vanished. However, the purely interindividual effect from the supplementary analysis remained significant. Taken together, daily fluctuations in sunshine duration did not seem particularly consequential for subjective age. However, individuals who participated during sunnier periods (Western European summer) felt much younger on average than individuals participating during cloudy periods (Western European winter).
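The disentangling of Level-2 and Level-1 effects described above amounts to person-mean centering: each daily value is split into a between-person component (the individual's average) and a within-person component (the daily deviation from that average). A minimal sketch with made-up diary values (the numbers below are illustrative, not study data):

```python
import pandas as pd

# Hypothetical diary data: person id, daily sunshine hours, subjective age score
df = pd.DataFrame({
    "pid": [1, 1, 1, 2, 2, 2],
    "sun": [2.0, 4.0, 6.0, 7.0, 9.0, 11.0],
    "sa":  [-5.0, -6.0, -7.0, -10.0, -11.0, -12.0],
})

# Between-person component: each person's average sunshine exposure (Level 2)
df["sun_between"] = df.groupby("pid")["sun"].transform("mean")
# Within-person component: daily deviation from one's own average (Level 1)
df["sun_within"] = df["sun"] - df["sun_between"]

print(df[["pid", "sun_between", "sun_within"]])
```

Entering `sun_between` and `sun_within` as separate predictors yields one coefficient for participating during sunnier periods and one for sunnier-than-usual days, instead of a single coefficient that conflates the two.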
Discussion

Using a daily diary design, we aimed to investigate the potential explanatory power of physical activity and sleep as behavioral factors, and sunshine duration as an environmental factor, for daily variations in subjective age beyond the more established psychological correlates in this research area. In the first step, we were able to confirm considerable variability in subjective age and to replicate findings supporting a significant relation with everyday stress and affective mood (Kotter-Grühn et al., 2015) across a longer observational period of 3 weeks. Going beyond this, physical activity, sleep quality, and weather conditions also explained portions of subjective age variance. Of note, affect still explained most of the variance in day-to-day fluctuations of subjective age. The fluctuations of subjective age in this study clearly substantiate its proposed state component (see also Dutt & Wahl, 2017). The share of intraindividual variance in subjective age was larger than in Kotter-Grühn et al. (2015) and Kornadt et al. (2021), but comparable to Segel-Karpas et al. (2022), who, as in the present study, sampled subjective age over a longer period of time. Potentially, our results also point to early retirement as a period of pronounced fluctuations in subjective age.
Intraindividual Variability in Subjective Age and Its Relation to Affect

The everyday variation in subjective age is non-arbitrary, as it covaries with other variables, most strongly with affect. Whereas earlier work was able to show short-term relations between subjective age and negative but not positive affect (Bellingtier et al., 2017), we were able to establish the relation between subjective age and the valence of affect by using a bipolar scale. Participants felt younger on days on which they were well rather than unwell, and content rather than discontent. Affect might reflect the count of negative (aging) experiences during the day (such as social situations and health problems) and be a mediating source of fluctuations in subjective age. Also, individuals in a negative mood might concentrate more strongly on negative (aging) experiences and therefore feel older (Dutt & Wahl, 2017; Miche et al., 2014). Similarly, affect might have an impact on how individuals deal and cope with everyday (aging) experiences.

Additional Everyday Correlates of Subjective Age

In addition to affect, stress and steps accounted for small shares of everyday variance in subjective age. Relations were as hypothesized. Individuals felt older on stressful days and during stressful periods, although the effect of stress was considerably smaller than the one found by Kotter-Grühn et al.
(2015). Participants also felt younger on days on which they took more steps, whereas the association between subjective age and more strenuous physical activity (MVPA) disappeared when other predictors (specifically, steps) were entered into the model. In contrast to MVPA, steps per day might capture time out of home, mobility, and general participation in societal life, which may crucially relate to subjective age. However, the fact that we included only participants with previously (very) low MVPA levels, who then exhibited higher-than-usual MVPA throughout the entire measurement period, should be kept in mind. In the original intervention study (Schmidt et al., 2022), there was already a significant increase in physical activity between the first cross-sectional survey and the measurement period used here; the mean change in physical activity within the measurement period of the present study was correspondingly very small. This selection might have restricted variance in MVPA, leading to biased associations with subjective age. Future research therefore needs to clarify whether physical activity or related factors such as mobility are actually relevant for subjective age. Self-reported sleep quality was a significant correlate of subjective age only in the single-predictor model. Individuals felt younger after a night of good sleep. This association did, however, not persist when other predictors were entered into the model; the effect of sleep quality was most likely explained by stress and affect, with which sleep quality showed significant relations on the bivariate level. Our findings strongly relate to the study of Sabatini et al.
(2022), whose linear regression models indicated that poorer sleep quality was significantly associated with a higher subjective age (R² = 1.0%), but after adjusting for covariates, the associations became weaker (R² = 0.02%). Behavioral correlates such as sleep, physical activity, and steps are potentially modifiable predictors of subjective age. Meta-analyses point to at least small positive effects of physical activity and sleep interventions in community-dwelling older adults (Chase, 2015). Nonpharmacological intervention programs incorporating physical activity to improve sleep quality in older adults were particularly promising (Sella et al., 2023; Vanderlinden et al., 2020). If future research can establish physical activity, steps, and sleep as robust correlates of subjective aging, even if mediated by psychological variables such as affective mood, starting points for interventions would hence exist.

Finally, sunshine duration had a negligible intraindividual effect, which vanished as soon as other variables (e.g., affect, steps) were entered into the model. Sunshine duration was, however, consistently related to subjective age on the interindividual level, suggesting that individuals participating during sunnier periods or seasons felt several years younger than those participating during cloudy or darker seasons. This does not come as a surprise, because weather conditions shape the everyday life and aging experiences of older adults to a considerable degree (Hoppmann et al., 2017; Kööts et al., 2011; Petersen et al., 2015). However, weather has so far not been investigated in the context of subjective age. Naturally, weather may have a different impact on other population groups. For example, individuals of various ages and health states might be affected very differently by their environment (Wahl & Gerstorf, 2018). As weather may constitute an influential environmental predictor with clear causality (i.e., individuals' subjective ages cannot affect the weather),
further research could contribute largely to the understanding of (explanatory) mechanisms behind subjective age. Our findings may stimulate new research with weather conditions as predictor variables as well as control variables. For example, in longitudinal study designs covering summer and winter periods, but also in cross-sectional designs that take place during a time of weather change, weather might be a source of otherwise unexplainable interindividual variation. Furthermore, future research needs to disentangle the direct effects of weather on subjective age from mediating effects, for example, via social and physical activities or mood that change with weather conditions.

The theoretical underpinning of our study (Hughes & Touron, 2021; Montepare, 2009) states that proximal aging experiences are continuously evaluated against more stable conceptions of development (i.e., what it means to be old) and lead to fluctuations in subjective age. Our findings support this theory, as subjective age fluctuated and was meaningfully related to variables that directly or indirectly reflect aging experiences. However, apart from weather (and sleep quality, which was explicitly sampled with regard to the previous night), our statistical tools did not allow us to test for causal or time-ordered mechanisms. Hence, the relations between subjective age and stress, affect, steps, and physical activity might be reciprocal or run in the opposite direction. For example, individuals might be more physically active as a consequence of feeling younger.
Strengths, Limitations, and Future Research

First, the statistical analysis of the current study focused on cross-sectional associations and does not allow for claims of causality. Time-ordered analyses and experimental designs (like Dutt & Wahl, 2017; Stephan et al., 2013) should investigate the directionality of the relation between everyday experiences and subjective age. Second, due to the specific target group of the original ActiveAge intervention (newly retired older adults without chronic conditions preventing physical activity), our sample was rather homogeneous. This may explain why less variance was explained on the interindividual level and may limit our findings' generalizability. More diverse samples would likely be needed to study moderators and, particularly, whether certain developmental ideas (e.g., future expectations, views on aging) may affect the role of everyday predictors of subjective age (see Montepare, 2009). In this regard, it would also be interesting to focus more strongly on random slopes (i.e., individual-specific effects), for which we found statistical indications in our analyses. Third, there were shortcomings and strengths in the study design and assessments. Sleep quality was captured via self-report at the end of the day, which might be more prone to recall bias than immediate morning diaries. Future research may use additional objective sleep assessments, for example, wearables able to measure a broader range of sleep variables such as sleep duration, times awake, time spent in each of the sleep stages, or sleep efficiency. The objective assessment of physical activity, in turn, comes with many advantages: It is more valid, accurate, and reliable than self-reports of physical activity (Prince et al., 2020) and less influenced by social desirability and demand effects. However, the objective assessment also resulted in missing values and the exclusion of certain data points. Fourth, we chose sunshine duration as a conceptually meaningful indicator of weather
and environment in the specific setting of our study. Depending on geographical and historical contexts, other environmental indicators (e.g., maximum or minimum temperatures, climate events) may be able to better capture the variations and extremes individuals are facing in their objective and subjective aging process.

Conclusion

The diary study covered 21 days and allowed for an in-depth replication of prior findings that subjective age varies considerably in everyday life. We were able to identify a number of everyday correlates of subjective age. The psychological correlates, stress and affective mood, were hereby the strongest. The number of steps served as a robust behavioral correlate of subjective age. In contrast, the effects of more strenuous physical activity and sleep quality were small and seemed to be explained by the other predictors. Lastly, our year-round assessment and the spacing of the diary periods over 5-6 weeks allowed for the first investigation of the effect of weather conditions on everyday subjective age.

Figure 1. Boxplots with intraindividual variation in subjective age for each of the 80 participants.

Table 1. Descriptives and Bivariate Intercorrelations of the Study Variables. Notes: ICC = intraclass correlation; MVPA = moderate-to-vigorous physical activity; SD = standard deviation. Subjective age is operationalized as a proportional difference score. Means, interindividual and intraindividual standard deviations, as well as ICCs are reported on the left side of the table. Bivariate intraindividual correlations (Level 1) are reported below the diagonal; interindividual correlations (Level 2) are reported above the diagonal. Significant correlations (p < .05) are printed in bold.
Subjective Age in Relation to Stress, Affect, MVPA, Steps, Sleep Quality, and Hours of Sunshine. Notes: MVPA = moderate-to-vigorous physical activity. Subjective age is operationalized as a proportional difference score and is predicted by multilevel models. Model 1 includes the Level-1 covariate day and the Level-2 covariates age and sex. Models 2-8 include Level-1 predictors, which combine Level-2 and Level-1 variance. Unstandardized coefficients are given together with standard errors in parentheses. Significant parameters (p < .05) are printed bold.
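The distinction the table notes draw between intraindividual (Level-1) and interindividual (Level-2) correlations can be illustrated with a small simulation. Everything below is hypothetical — the coefficients, sample sizes, and variable names are illustrative, not taken from the study; the sketch only shows the person-mean-centering logic by which the two levels of association are separated:

```python
import numpy as np

rng = np.random.default_rng(42)
n_persons, n_days = 80, 21
person = np.repeat(np.arange(n_persons), n_days)

# Hypothetical data: person-level means are negatively related (Level 2),
# while daily fluctuations around those means are positively related (Level 1).
stress_mean = rng.normal(0.0, 1.0, n_persons)
subjage_mean = -0.5 * stress_mean + rng.normal(0.0, 0.5, n_persons)
daily_dev = rng.normal(0.0, 1.0, n_persons * n_days)
stress = stress_mean[person] + daily_dev
subj_age = subjage_mean[person] + 0.4 * daily_dev + rng.normal(0.0, 1.0, n_persons * n_days)

def person_means(x):
    """Mean of x for each person (Level-2 component)."""
    return np.bincount(person, weights=x) / np.bincount(person)

# Level-1 (intraindividual) correlation: person-mean-centered daily values
r_within = np.corrcoef(stress - person_means(stress)[person],
                       subj_age - person_means(subj_age)[person])[0, 1]
# Level-2 (interindividual) correlation: person means
r_between = np.corrcoef(person_means(stress), person_means(subj_age))[0, 1]
print(f"within r = {r_within:.2f}, between r = {r_between:.2f}")
```

In this toy setup the two levels even have opposite signs, which is why reporting them below and above the diagonal of a single correlation table, as in Table 1, is informative.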
Prevalence and influence of cys407* Grm2 mutation in Hannover-derived Wistar rats: mGlu2 receptor loss links to alcohol intake, risk taking and emotional behaviour

Modulation of metabotropic glutamate 2 (mGlu2) receptor function has huge potential for treating psychiatric and neurological diseases. Development of drugs acting on mGlu2 receptors depends on the development and use of translatable animal models of disease. We report here a stop codon mutation at cysteine 407 in Grm2 (cys407*) that is common in some Wistar rats. Therefore, researchers in this field need to be aware of strains with this mutation. Our genotypic survey found widespread prevalence of the mutation in commercial Wistar strains, particularly those known as Han Wistar. Such Han Wistar rats are ideal for research into the separate roles of mGlu2 and mGlu3 receptors in CNS function. Previous investigations, unknowingly using such mGlu2 receptor-lacking rats, provide insights into the role of mGlu2 receptors in behaviour. The Grm2 mutant rats, which dominate some selectively bred lines, display characteristics of altered emotionality, impulsivity and risk-related behaviours and increased voluntary alcohol intake compared with their mGlu2 receptor-competent counterparts. In addition, the data further emphasize the potential therapeutic role of mGlu2 receptors in psychiatric and neurological disease, and indicate novel methods of studying the roles of mGlu2 and mGlu3 receptors. This article is part of the Special Issue entitled 'Metabotropic Glutamate Receptors, 5 years on'.

Introduction

The metabotropic glutamate 2 (mGlu2) receptor belongs to the family of G-protein coupled glutamate receptors that modulate transmission at synapses throughout the mammalian central nervous system, and that have been proposed as major targets for the development of drugs for human psychiatric and neurological diseases (Niswender and Conn, 2010; Nicoletti et al., 2011; Chaki et al., 2013; Li et al., 2015).
The mGlu2 receptors signal through Gαi/o proteins, inhibiting adenylyl cyclase and reducing cAMP (Tanabe et al., 1992), cascading into effects on multiple systems including PKA/MAPK, GSK-3β, Src kinase, and AMPA and NMDA receptors (Pin and Duvoisin, 1995; Harris et al., 2004; Trepanier et al., 2013; Wang et al., 2013). They also signal through Gβ/γ, inhibiting calcium channels (Chavis et al., 1994; Scanziani et al., 1995) and activating potassium channels (Knoflach and Kemp, 1998; Chavez-Noriega et al., 2002). The major established physiological function of mGlu2 receptors is to modulate synaptic transmission as presynaptic auto- and hetero-receptors at glutamatergic and GABA-ergic terminals (Battaglia et al., 1997; Smolders et al., 2004). The perisynaptic location of mGlu2 receptors ideally positions them for sensing glutamate overflow (Petralia et al., 1996; Shigemoto et al., 1997) and release from astrocytes (Moran et al., 2005; Kalivas, 2009). Such complexity of the actions of a single receptor subtype confounds attempts at predicting the effects of exogenous agonists or antagonists of the mGlu2 receptor on whole-animal behaviours and hence their therapeutic potential. Nevertheless, the predicted potential for mGlu2/3 receptor agonists based on limiting glutamate release has been borne out in animal models of schizophrenia (Schoepp and Marek, 2002), anxiety (Helton et al., 1998; Swanson et al., 2005), cerebral ischaemia (Bruno et al., 2001), epilepsy (Smolders et al., 2004), drug addiction (Kalivas, 2009) and chronic pain (Chiechio et al., 2010). There has also been some limited success with clinical studies (Grillon et al., 2003; Patil et al., 2007; Dunayevich et al., 2008) but this has not yet led to an approved drug. Clearly, the importance of understanding the role of mGlu2 receptors in physiology and pathology cannot be overstated.
One of the issues has been that orthosteric agonists and antagonists do not separate between mGlu2 and mGlu3 receptors (Nicoletti et al., 2011), which have different, and possibly opposing, effects (Corti et al., 2007). To overcome this problem we recently used a new selective mGlu2 receptor agonist, LY395756, and its active enantiomer, LY541850 (Dominguez et al., 2005), to separate between the roles of mGlu2 and mGlu3 receptors in synaptic events (Ceolin et al., 2011; Hanna et al., 2013). However, we found that many of the outbred Wistar rats studied were unresponsive to the selective mGlu2 agonist; this apparent anomaly was traced using Western blotting to the lack of mGlu2 receptor expression in some Wistar rats (Ceolin et al., 2011). Such animals, when used for animal modeling of human diseases, clearly produce misleading results in studies of the roles of mGlu2 receptors. For example, mGlu2/3 agonists, known to reduce phencyclidine-induced hyperlocomotion in other rat strains (Moghaddam and Adams, 1998; Monn et al., 2007), do not show this effect in Wistar rats lacking mGlu2 receptors (Wood et al., 2014). Because of the demonstrated potential of the mGlu2 receptor as a therapeutic target, this finding is of critical importance to the research community. Two questions immediately arise: i) why is the mGlu2 receptor missing from some rats, and ii) how frequently does this occur in populations of rats used in laboratory studies? We report here the occurrence of a single point mutation in exon 3 of the Grm2 gene, which results in a premature stop codon at cysteine 407 of the mGlu2 receptor, and resultant loss of functional protein expression. We also report the high frequency of this mutant genotype in certain outbred and inbred rat lines that are commercially available or selectively bred, and we discuss its influence on behavioural characteristics.

Animals

For the initial studies, Wistar rats from Banting & Kingman Ltd.
(UK), Harlan Laboratories (UK) and Charles River Laboratories (UK) were housed in pairs under temperature-controlled conditions in standard laboratory housing. Experiments were conducted in accordance with the Animals (Scientific Procedures) Act 1986 and approved by local ethical review (University of Bristol). Sources, and derivation details where appropriate, of animals used in the genotyping survey are given in the Results section and in Tables 1-3.

Initial investigation

Briefly, spare hippocampal slices from rats determined electrophysiologically to be sensitive or insensitive to a selective mGlu2 receptor agonist (see Mercier et al. (2013) for methods) were frozen at −80 °C and RNA subsequently extracted using the Qiagen RNeasy kit according to the manufacturer's protocol. First-strand synthesis was performed using SuperScript III First Strand Synthesis SuperMix (ThermoFisher Scientific) with Oligo(dT) according to the manufacturer's protocol. cDNA was then kept at −20 °C before PCR amplification. Four pairs (A-D) of custom DNA oligonucleotides were initially designed to cover most of the Grm2 mRNA (NM_001105711.1), namely nucleotides 131-147 and 854-834 (A), 819-838 and 1430-1411 (B), 1419-1439 and 2166-2147 (C) and 2003-2023 and 2824-2805 (D). A fifth pair of primers, 1142-1161 and 1733-1714 (E), was subsequently ordered. All PCR primers were purchased from Sigma Aldrich (UK) and used to amplify the appropriate stretches of hippocampal cDNA. Following 35 PCR cycles and confirmation of correct PCR amplification by gel electrophoresis, the amplified cDNA was purified, prepared and sent to Source Bioscience (UK) for Sanger sequencing with the same primers. The sequencing data were analysed using CodonCode Aligner (v5.1.5) and expressed as chromatograms for illustration (Fig. 1B).
Subsequent genotyping

Following the initial discovery of the cys407* mutation, the protocol for genotyping was refined and focused on the stretch of gDNA containing the mutation, using the following primers: nucleotides 1301-1320 and 1488-1469 (NM_001105711.1), included in exon 3. With this PCR primer pair (F), the method described above was used for genotyping tissue in a survey of other outbred and inbred rat strains as indicated in Tables 1 and 2. To collect tissue samples, rats were euthanized by the equivalent of UK Schedule 1 methods or, where appropriate, gently restrained to collect ear or tail tissue. Tissue was frozen at −80 °C, packaged in dry ice and, as necessary, shipped to the University of Bristol for gDNA preparation and assaying as above, with gDNA extracted using the Qiagen DNeasy kit according to the manufacturer's protocol.

Allelic discrimination

In parallel with the above genotyping, allelic discrimination was used to detect the presence or absence of the same mutation in several strains from Harlan (now Envigo) Laboratories, Indianapolis (see Table 1 for details of rat lines examined). All animal tissue collection protocols used in this study were approved by the Envigo IACUC. Ear pinnae of approximately 2 mm were collected from each animal tested and shipped frozen overnight to the Envigo genetic testing services laboratory, located at the Bionomics and Research Technology Center (BRTC) in Piscataway, New Jersey. SNP genotypes were determined using TaqMan chemistry with probes and primers designed using Primer Express v3.0. Primer sequences include: Forward Primer: TGCCCTCTGTCCCAACAC; Reverse Primer: GCGGCGCCCATTGAC; Reporter 1: TAGCATCGCAGAGGTG; Reporter 2: CATAGCATCTCAGAGGTG. Specific PCR cycling conditions were as follows: 95 °C, 10 min; (95 °C, 30 s; 60 °C, 1 min) × 40. Data were collected upon completion of PCR amplification in an end-plate read protocol and were analysed using the ABI Fluidigm ViiA7 Real-Time PCR System.
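A primer-walking strategy like the one above yields overlapping Sanger reads, but primer-proximal bases of each read go unread, which can leave short uncovered stretches. The following is a minimal illustrative sketch of our own (not part of the published protocol) for locating such gaps by merging read intervals; the coordinates are the read spans reported in the Results for primer pairs A-D:

```python
def coverage_gaps(intervals, start, end):
    """Return sub-ranges of [start, end] not covered by any read interval."""
    gaps, pos = [], start
    for lo, hi in sorted(intervals):
        if lo > pos:
            gaps.append((pos, lo))
        pos = max(pos, hi)
    if pos < end:
        gaps.append((pos, end))
    return gaps

# Sanger read spans obtained with primer pairs A-D (nt coordinates in the Grm2 mRNA)
reads = [(141, 843), (829, 1417), (1432, 2153), (2013, 2814)]
print(coverage_gaps(reads, 141, 2814))  # -> [(1417, 1432)]
```

The single gap this reports, nt 1417-1432, is exactly the stretch that motivated ordering primer pair E.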
Discovery of the cys407* mutation in Grm2

Hippocampal cDNA from 3 B&K Wistars (Bkl:WI), 'insensitive' to the selective mGlu2 receptor agonist LY541850 as defined electrophysiologically (Mercier et al., 2013), and 1 'sensitive' control Charles River Wistar (Crl:WI) was sequenced using the 4 pairs of initial primers (A-D). These provided nucleotide sequencing from nt 141-843 (A), 829-1417 (B), 1432-2153 (C) and 2013-2814 (D) and indicated no mutations within the Grm2 cDNA. The small uncovered stretch of DNA from 1417 to 1432 was subsequently sequenced using primer pair E, and the resultant Sanger chromatograms for all 3 'insensitive' Wistar rats showed a single point mutation at nt1419 in exon 3; a cytosine was replaced by an adenine, resulting in a stop codon (TGA) rather than the codon (TGC) for cysteine at amino acid 407 of the mGlu2 receptor protein (Fig. 1). No other mutations within the Grm2 gene were observed in any of the 4 rats. The presence of the cys407* mutation was subsequently confirmed with primer pair F using other tissue samples, defined electrophysiologically with LY541850 (Mercier et al., 2013). Such a premature stop codon explains the absence of protein, as confirmed by Western blotting (Fig. 1C), and hence the lack of effect of a selective mGlu2 receptor agonist in some Wistar rats (Ceolin et al., 2011). This mutation is the same as that reported independently in the Wistar-derived alcohol-preferring P rats (Zhou et al., 2013).

Prevalence of the mutation in commercially available Wistar rats

To assist in the assessment of the frequency of the Grm2 cys407* mutation in commercially available stocks of Wistar rats, tissue samples from laboratory animal suppliers were genotyped. We initially found that all B&K Wistars (Bkl:WI) and Harlan HSD Han Wistars (HsdHan:WIST) tested were homozygous mutants whereas all Charles River Wistars (Crl:WI) were homozygous wild type (Table 1).
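The consequence of a single C→A change at the third base of a cysteine codon can be shown with a toy translation. The sequence below is illustrative only, not the real Grm2 cDNA; the point is that TGC (Cys) becomes TGA (stop), truncating the protein at that position exactly as cys407* does at residue 407:

```python
# Minimal codon table for this toy example only (a real one has all 64 codons)
CODON_TABLE = {"ATG": "M", "GAA": "E", "AAA": "K", "TGC": "C", "TGT": "C", "TGA": "*"}

def translate(cds):
    """Translate a coding sequence, stopping at the first stop codon."""
    protein = []
    for i in range(0, len(cds) - 2, 3):
        aa = CODON_TABLE.get(cds[i:i + 3], "X")
        if aa == "*":          # premature stop truncates the protein here
            break
        protein.append(aa)
    return "".join(protein)

wt = "ATGGAAAAATGCGAAAAA"      # Met-Glu-Lys-Cys-Glu-Lys
mut = wt[:11] + "A" + wt[12:]  # C -> A at the cysteine codon's third base: TGC -> TGA

print(translate(wt), translate(mut))  # MEKCEK MEK
```

A full-length transcript carrying such a nonsense codon typically yields no functional receptor, consistent with the absent mGlu2 band on the Western blots.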
The survey was extended to other Wistar strains in Europe, the United States and Israel. A simple scan of the data in Table 1 shows that Han Wistars, including RCC Han Wistars (HsdRcc:WIST), are largely homozygous or heterozygous mutants whereas Wistars without the 'Han' designation are largely homozygous wild type.

Fig. 1C. Western blots indicating the loss of mGlu2 receptor protein in homozygous cys407* mutant rats (Mut); the blots from mGlu2 −/− mice and wild-type mice demonstrate the specificity of the antibody. α-tubulin was used as a loading control.

Table 1. In-house genotypic analysis for the Grm2 cys407* allele in rat lines from commercial suppliers. Genotyping was conducted using Sanger sequencing or allelic discrimination, with the latter indicated by 1. Allelic frequency denotes the frequency of the mutant cys407* allele within each sample. Han Wistar sources used in the Palm et al. (2011a,b) studies are indicated by 2.

Furthermore, the following 14 inbred strains from Harlan Labs (USA) were analysed by allelic discrimination and all were homozygous wild type: ACI/Seg, LEW/SsNHsd, F344/NHsd, WKY/NHsd, BN/SsNHsd, LE/CpbHsd, SHR/NHsd, SS/JrHsd, SR/JrHsd, LEW/HanHsd, BN/RijHsd, F344/NHsd, WF/NHsd (n = 5 per strain) and HsdCpb:WU (n = 80). Two non-Wistar lines in our survey of commercial sub-strains showed the cys407* mutation, namely the Dark Agouti lines DA/OlaHsd and DA/HanRj (Table 1). A list of other inbred commercial lines that did not carry the mutation is presented in the legend to Table 1.
Prevalence of the mutation in rat lines selectively bred for particular behavioural characteristics

Because some commercial outbred Wistars lacking expression of mGlu2 receptors (Table 1) have been reported to have an anxiety-like phenotype (Ceolin et al., 2011) and increased voluntary alcohol intake and risk-related behaviours (Palm et al., 2011a, 2011b; Momeni et al., 2015), we extended the investigation of cys407* mutation prevalence to small cohorts from lines selectively bred for particular behavioural phenotypes (Table 2).

Table 2. Prevalence analysis for the Grm2 cys407* allele in rat lines selectively bred for particular behavioural characteristics. Genotypic outcome and cys407* mutant allele frequency have been calculated for each rat line. Sources of the tissue for in-house genotyping are shown, with previously published genotyping data indicated by #.

The Wistar-derived Roman High- (RHA) and Low-Avoidance (RLA) rat lines were initially selected and outbred in Rome on the basis, respectively, of their good or poor acquisition of the two-way active avoidance response (Bignami, 1965) and transferred to Zürich in 1972 (Driscoll and Bättig, 1982; Driscoll et al., 1998; Steimer and Driscoll, 2003). From these RHA/Verh and RLA/Verh lines, two inbred lines (RHA-I and RLA-I) were derived and have been maintained at the Autonomous University of Barcelona since 1997 (Escorihuela et al., 1999; Driscoll et al., 2009). Analysis of samples from 6 RHA-I and 6 RLA-I male rats showed that all RHA-I rats and no RLA-I rats from Barcelona expressed the mutation (Table 2). The Alko Alcohol-preferring (AA) and Alko Non-Alcohol-preferring (ANA) rats were derived from a Wistar-Sprague Dawley cross in the 1960s (Eriksson, 1968). Other non-Wistar strains were bred into them for further selective breeding for alcohol preference (Hyytia et al., 1987; Sommer et al., 2006).
On genotyping the AA and ANA rats (n = 12/strain), all were wild type, except for one AA rat, which was heterozygous for the cys407* allele (Table 2). In the early 1990s in Munich, Wistar rats from Charles River (Germany) were selectively bred, based on their behaviour in the elevated plus-maze, into two lines, one with High Anxiety-related Behaviours (HAB) and the other with Low Anxiety-related Behaviours (LAB; Liebsch et al., 1998). The initial HAB and LAB lines were crossed with other Wistar (Wis/Prob) rats selectively bred in Leipzig for low and high performance, respectively, in a shock-motivated brightness discrimination task (Hess et al., 1992). The resulting HAB and LAB breeding lines, currently maintained at the University of Regensburg, show many signs of clinical anxiety as well as abnormal aggressive behaviour (Landgraf and Wigger, 2002; Neumann et al., 2010). When genotyped for the Grm2 mutation, all HAB (n = 8) and LAB (n = 7) rats, verified individually for their respective anxiety-like characteristics using the elevated plus maze, were homozygous for the cys407* mutation (Table 2). The Sardinian alcohol-Preferring (sP) and alcohol Non-Preferring (sNP) lines were developed through a selective outbreeding program starting from a stock of Wistar rats bred at Morini, San Polo d'Enza, Italy (see Colombo et al., 2006). The sP rats display increased anxiety-related behaviours relative to sNP rats (Colombo et al., 1995; Roman and Colombo, 2009). When genotyped, the sP (n = 10) and sNP (n = 10) rats were clearly distinguished between lines, with Grm2 cys407* alleles found only in the alcohol-preferring line and no mutants in the sNP line (Table 2). Similarly, the selective breeding of Warsaw High alcohol-Preferring (WHP) and Low alcohol-Preferring (WLP) rats was commenced in the early 1990s from Wistar stock (Bisaga and Kostowski, 1993; Dyr and Kostowski, 2004).
The WHP rats display lower anxiety-related and depressive-like behaviours than the WLP rats (Acewicz et al., 2014). Unlike many of the above lines selectively bred for alcohol intake, the WHP rats (n = 5) and WLP rats (n = 5) were similar in Grm2 genotype, with both lines showing a mixture of Grm2 mutants and wild types (Table 2). Examination of the Rat Genome Database (RGD) for the cys407* mutation revealed this mutation in a number of inbred lines (Table 3), including another Dark Agouti line (DA/BklArbNsi) and the Maudsley Reactive line (MR/N; Broadhurst, 1975; Blizard et al., 2015). These MR/N rats were bred from Wistars on the basis of rates of defaecation in an open field setting, the inbred strain in the RGD being homozygous for the Grm2 mutation. Of the 43 inbred strains on this database, 9 rat lines expressed the mutation. Four of these mutant lines, BUF/N, M520, WN/N and MR/N itself, were used to derive the N/NIH heterogeneous stock rats from a total of 8 lines (Hansen and Spuhler, 1984).

Discussion

Our independent finding of the cys407* mutation in Grm2 (reported here), the description of the cys407* mutation in alcohol-preferring P rats (Zhou et al., 2013), and the low mGlu2 receptor expression in inbred Roman High Avoidance (RHA-I) Wistar-derived rats (Klein et al., 2014) demanded a widespread survey for this Grm2 mutation among Wistar rats. This has led us to discover a preponderance of the cys407* mutation in some commercial Wistar rats and in some selectively bred lines of Wistar origin (Tables 1-3). The discussion will focus on the discovery, prevalence and origin of the mutant genotype and the implications for use in neuroscience research. We also consider how this mutation may relate to a specific behavioural phenotype.
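The allelic frequencies reported in Tables 1 and 2 follow directly from genotype counts, since each diploid rat carries two Grm2 alleles. A minimal sketch (the helper function is ours; the example counts are the AA-line genotypes reported above, 11 wild type and 1 heterozygote out of 12):

```python
def mutant_allele_frequency(n_wildtype, n_heterozygous, n_mutant):
    """Frequency of the cys407* allele in a sample of diploid animals:
    heterozygotes contribute one mutant copy, homozygous mutants two."""
    n_rats = n_wildtype + n_heterozygous + n_mutant
    return (n_heterozygous + 2 * n_mutant) / (2 * n_rats)

# AA line: 11 wild type, 1 heterozygote, 0 homozygous mutants (n = 12)
print(round(mutant_allele_frequency(11, 1, 0), 3))  # -> 0.042
# A fully mutant sample, e.g. the HAB/LAB lines, gives frequency 1.0
print(mutant_allele_frequency(0, 0, 8))  # -> 1.0
```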
Discovery, prevalence and origin of the cys407* mutation

In addition to our own observation that some strains of Wistar rats have reduced expression of the mGlu2 receptor (Ceolin et al., 2011), others have made similar observations, noting only silent mutations, reductions in transcript level and potential epigenetic changes (Lindemann et al., 2006; Klein et al., 2014). The molecular basis for the reduced mGlu2 receptor expression was not determined until our discovery (April 2013) of the cys407* mutation, reported independently by Zhou et al. (2013) in the Wistar-derived P rats, a line selectively bred for alcohol consumption and preference (Lumeng et al., 1977; Li et al., 1979). Building on this work, we have now surveyed a number of commercially available Wistar stocks, including those used for behavioural profiling and assessment of voluntary alcohol intake (Palm et al., 2011a, 2011b; Momeni et al., 2015), and a few selectively bred lines showing phenotypes that might tentatively be linked to loss of mGlu2 receptors. Lastly, we have interrogated the Rat Genome Database to further explore the prevalence of the Grm2 cys407* mutation.

Table 3. Rat lines that contain the Grm2 cys407* allele, from the Rat Genome Database (RGD) using the Variant Visualizer tool (http://rgd.mcw.edu/rgdweb/front/config.html). All lines listed are homozygous for the mutant Grm2 allele. The code and RGD ID for each rat line are indicated. The variant data resources include The Royal Netherlands Academy of Arts and Sciences (KNAW), Medical College of Wisconsin (MCW), National Institutes of Health (NIH) and Atanur et al. (2013). * Indicates the 4 rat lines of the 8 used to breed the N/NIH heterogeneous stock rats.
Our data show widespread distribution of the cys407* mutation in commercially available Wistar rats of different origin and from different suppliers (Table 1), particularly those with known historical derivation from the Hannover Institute (Zentralinstitut für Versuchstierzucht; Fig. 2). This may provide a clue to the origin of the mutation. Founders from the Hannover Institute were distributed to, inter alia, IFFA-Credo (later Charles River, France), Biomedical Research Laboratories (BRL; later Research Consulting Company, RCC; Switzerland) and Bury Green Farm (Glaxo, UK). These were the sources that gave rise to the commercial Han Wistar colonies expressing the cys407* mutation (Fig. 2). It can therefore be assumed that the mutation was common within the Wistar stock at the Hannover Institute. At first sight, it seems likely that the initial spontaneous mutation occurred at the Hannover Institute, presumably in a single animal. An earlier event, e.g. at an establishment in the UK or even at the Wistar Institute itself (see Fig. 2), is, however, perhaps more likely, with a small number of resultant mutants then being chosen by chance to act as founders at the Hannover Institute. Similarly, although Wistar rats from commercial stocks not originating via the Hannover Institute generally do not appear to contain the mutation (see Fig. 2; Tables 1 and 2), the presence of the mutation in the Harlan Wistar (HsdOla:WI) and in some older inbred lines in the Rat Genome Database, including the Maudsley Reactive line (MR/N; Table 3), also suggests a pre-Hannover event. The non-Wistar Dark Agouti rats (DA/OlaHsd, DA/HanRj and DA/BklArbNsi) were, surprisingly, all homozygous cys407* mutants. Unfortunately, tracing the precise origins of many of these older lines is virtually impossible and so a Han Wistar lineage could not be determined (e.g. http://www.informatics.jax.org/external/festing/rat/docs/DA.shtml).
Selective breeding for a particular behavioural trait from stock containing some mutants may lead to a change in frequency of the cys407* mutation if this allele influences the sought-for phenotype (see below). Why, however, did loss of the mGlu2 receptor result in such rats becoming the most numerous genotype in some commercial colonies? Was it just by chance that the rats used as founders or for revitalizing some colonies were mostly Grm2 mutant rats, or does the mutation provide some phenotypic advantage in commercial breeding laboratories? Obvious explanations might be that cys407* mutant rats are more efficient in terms of growth or fecundity. Something as simple as handling characteristics or interaction with humans may also affect the choice of individual rats for breeding in establishments where no phenotype is actively sought. Further behavioural studies comparing 'Han' and 'non-Han' lines of Wistar rats (e.g. Palm et al., 2011a) may eventually disclose some as yet unknown characteristic that leads to Grm2 mutant individuals being chosen for breeding. In addition to evidence of a prevalence of the mutation in outbred stocks of Wistar rats of Han origin, we have also shown Grm2 genotypic heterogeneity between certain selectively bred lines. This raises the possibility that the presence of a specific behavioural characteristic, e.g. alcohol preference, may be linked to the expression of mGlu2 receptors, and thus that selection of animals for such behaviours will result in lines with different prevalences of the mutation. In particular, the links between the Grm2 mutation and alcohol intake, anxiety-related and risk-related behaviours are of special interest and are discussed in more detail below.
Behavioural characteristics of sub-strains lacking mGlu2 receptors

In addition to the electrophysiological differences noted with selective mGlu2 receptor agonists (Ceolin et al., 2011; Hanna et al., 2013; Lucas et al., 2013; Mercier et al., 2013; Sanger et al., 2013), differences in behaviours related to emotionality were also observed (Bert et al., 2001; Ceolin et al., 2011; Palm et al., 2011a; Honndorf et al., 2011). When tested in the multivariate concentric square field™ (MCSF) test, the main segregating factors for the Han Wistar rats were lower general activity but increased activity in areas associated with risk, i.e. higher risk-taking behaviour (Palm et al., 2011a). With the knowledge of the distribution of the cys407* mutation in the rat strains from Table 1, this behavioural correlation with genotype can be illustrated (Fig. 3). Whilst this provides an interesting insight into the potential link between the mutation and behavioural characteristics, it should be noted that most studies have only examined a small number of strains and there is considerable behavioural variation depending on the task used and the comparison groups (Palm et al., 2011a; Goepfrich et al., 2013; Momeni et al., 2015). Among lines selected for behavioural phenotype, the RHA-I rats (homozygous for the cys407* mutation) were originally selected for breeding based on rapid acquisition of avoidance responses in shuttle boxes (Bignami, 1965; Driscoll and Bättig, 1982; Driscoll et al., 1998; Escorihuela et al., 1999). The fact that only 5 generations were needed to establish this phenotype (Bignami, 1965) suggests relatively high penetrance of the genotype. The RHA-I rats show aspects of impulsivity, risk-taking and sensation seeking (Escorihuela et al., 1999; Lopez-Aumatell et al., 2009a; Moreno et al., 2010; Klein et al., 2014), characteristics which have some parallels with the behavioural phenotype reported in commercially available Han Wistars (Palm et al., 2011a; Momeni et al., 2015).
Whilst the distribution of the Grm2 mutation within different outbred Wistar populations cannot be specifically linked to emotional behaviour, there is some evidence to suggest anxiety-related behaviour in mutant versus non-mutant animals (Ceolin et al., 2011) or risk/impulsivity-related behaviours (Palm et al., 2011a; Klein et al., 2014). However, whilst the Grm2 mutation will have particular influences on behaviour, differences in the methods used to study these behaviours (Steimer and Driscoll, 2003; Diaz-Moran et al., 2012; Klein et al., 2014) and other genetic and environmental/experimental features will overlay and complicate interpretation.

Alcohol intake in sub-strains lacking mGlu2 receptors

Wistar rats from commercial sub-strains lacking the mGlu2 receptor tend to consume more alcohol than non-Han Wistar rats (Palm et al., 2011b). However, there are inconsistencies in the data relative to our analysis of the prevalence of the mutation. Low voluntary alcohol intake was observed in the B&K Wistar rats (Momeni et al., 2015), with RccHan:WI consuming more alcohol than other Wistar rats of Han origin (Goepfrich et al., 2013). This anomaly of low alcohol intake in B&K Wistar rats, which showed heterogeneity in frequency of the Grm2 mutation between suppliers (Table 1), unfortunately cannot be confirmed because this source of rats is no longer available. Adding to the case for a link between alcohol intake and the mutation, alcohol-preferring P but not non-preferring NP rats, selectively bred from a Wistar colony held at the Walter Reed Army Hospital (Lumeng et al., 1977), were homozygous for the cys407* mutation (Zhou et al., 2013). Likewise, the RHA-I but not RLA-I rats are homozygous for the cys407* mutation, and RHA-I rats have a higher alcohol intake than the RLA-I rats (Manzo et al., 2012; Corda et al., 2014). However, unlike the alcohol-preferring P and RHA-I rats, the Helsinki alcohol-preferring AA rats (Sommer et al., 2006) have few mutant alleles (Table 2).
Therefore, segregation between the two lines, and any influence of the Grm2 mutation on the alcohol-drinking characteristic, cannot be determined from the AA/ANA lines. In contrast to the AA/ANA lines, the Warsaw alcohol High-Preferring (WHP) and Low-Preferring (WLP) rats, which were also selectively bred from Wistar stock (see Dyr and Kostowski, 2004), did have a high proportion of cys407* mutant alleles. However, they could not be differentiated with respect to the Grm2 mutation, the distribution of the cys407* allele being similar in both WHP and WLP lines (Table 2). Supporting the link between alcohol intake and lack of the mGlu2 receptor, however, the Sardinian alcohol-Preferring sP and alcohol Non-Preferring sNP rat lines, initiated from Wistar stock in 1981 (see Colombo et al., 2006), were clearly distinguished by genotype, the mutant allele being found only in the alcohol-preferring sP line (Table 2). Thus, despite the anomaly with the B&K and Warsaw rats (see above), these findings strongly support the hypothesis that lack of the mGlu2 receptor contributes to alcohol intake, but is not a requirement (Zhou et al., 2013). Another important pair of selectively bred lines, the high and low alcohol-drinking rats (HAD and LAD, respectively; Li et al., 1993), have not yet been genotyped. These rats were selectively bred from the N/NIH heterogeneous stock, which in turn was produced by crossing 8 inbred sub-strains, many with some Wistar lineage (Hansen and Spuhler, 1984; Bell et al., 2012), 4 of which were Grm2 mutants (see Table 3). Interestingly, the 4 Grm2 mutant lines (BUF/N, M520, WN/N and MR/N) were in the top 5 of the 8 N/NIH founders for alcohol preference and consumption (Spuhler and Dietrich, 1984). Our hypothesis above suggests that the mutant cys407* alleles from 4 of the founders will, on selection for alcohol consumption, segregate into the HAD rather than the LAD line.
Note added in revision: indeed, HADs have a cys407* mutant allelic frequency of 0.87 versus the 0.5 presumed for the original N/NIH stock (Professor Bill Muir, personal communication). This hypothesis is also supported by pharmacological data in which mGlu2/3 receptor agonists reduce self-administration in mGlu2-competent rats (Backstrom and Hyytia, 2005) but not in mGlu2-deficient P rats (Rodd et al., 2006). Surprisingly, activation of these receptors did reduce alcohol-seeking behaviour of P rats in the latter study (Rodd et al., 2006). In addition, agonists of mGlu2/3 receptors block the discriminative stimulus effects of alcohol in non-Wistar rats (Cannady et al., 2011) and alcohol-seeking behaviour (Besheer et al., 2010). Such data are in agreement with a large body of literature suggesting that mGlu2/3 agonists reduce both the rewarding value of drugs of abuse and the reinstatement of drug-seeking behaviour (Moussawi and Kalivas, 2010), most likely because activation of mGlu2/3 receptors reduces dopamine release in the nucleus accumbens shell (Greenslade and Mitchell, 2004; Karasawa et al., 2006). Interestingly, P rats, which carry the mutation, show a greater alcohol-induced accumbens dopamine response than NP rats, whereas there is little difference between AA and ANA rats (Kiianmaa et al., 1995; Bell et al., 2012). Similarly, the RHA rats (Manzo et al., 2012) have a larger dopamine response to alcohol than the RLA rats (Corda et al., 2014). These data support the concept that activation of mGlu2 receptors limits the dopamine response to alcohol and other drugs of abuse (see review Moussawi and Kalivas, 2010; Kim et al., 2005; Liechti et al., 2007). Our present data and, inter alia, the increased cocaine responsiveness in mGlu2 −/− mice (Morishima et al., 2005) and in RHA rats (Giorgi et al., 2007) indicate mGlu2 receptors as a therapeutic target in alcohol and other drug use disorders.
Because anxiety trait and alcohol use disorders have been linked in humans (Morris et al., 2005; Helton and Lohoff, 2015; but see Fein, 2015) and laboratory animals (Stewart et al., 1993; Spanagel et al., 1995; Colombo et al., 1995), it is tempting to ascribe cause and effect to these two aspects of animal behaviour. However, individual rat scores on these two phenotypes within a group of 60 RCC Han Wistar rats were not associated. Instead, rats with high risk-assessment behaviour displayed higher voluntary alcohol intake (Momeni et al., 2014). Also arguing against the link between mGlu2 receptor deficit, alcohol preference and anxiety trait is the evidence from the RHA-I and RLA-I rats; the RHA-I rats have the Grm2 mutation and consume more alcohol, but show more aspects of impulsivity, risk-taking and sensation-seeking than the RLA-I rats (Escorihuela et al., 1999; Steimer and Driscoll, 2005; Lopez-Aumatell et al., 2009b). Also, recently, the alcohol-preferring P rats (see below), which also have the cys407* mutation (Zhou et al., 2013, Table 2), were found to be more risk-taking than their non-preferring NP and mGlu2 receptor-competent counterparts (Roman et al., 2012), in contrast to a previous report suggesting an anxious phenotype for P rats (Stewart et al., 1993). In an attempt to pursue the potential link between the Grm2 mutation and anxiety (Ceolin et al., 2011), samples from Wistar rats selectively bred for high and low anxiety-related behaviours (HAB and LAB; see Landgraf and Wigger, 2002) were genotyped. Unfortunately for advancing this discussion, all the samples from both HAB and LAB lines were homozygous for the mutation, suggesting that the founders came from Wistar stocks with a high percentage of Grm2 mutants. Similarly, the epileptic GAERS line shows more anxiety-related behaviours than the non-epileptic NEC line (Jones et al., 2008), but does not contain the cys407* mutation.
Implications for use of cys407* mutant rats in neuroscience research
Although clearly there are large genetic differences between rat strains, the finding that the cys407* mutation of the Grm2 gene is expressed as the more frequent allele in several commercially available Wistar rat sub-strains is of major concern. The mGlu2 receptor is a key player in many aspects of synaptic transmission and plasticity throughout the CNS (Yokoi et al., 1996; Mukherjee and Manahan-Vaughan, 2013) and is widely regarded as an important target for drug development for neurological and psychiatric diseases (Niswender and Conn, 2010; Nicoletti et al., 2011; Li et al., 2015). Clearly these mutant rats will give atypical results in studies using mGlu2 receptor ligands. Because many studies have been performed with agonists and antagonists that do not discriminate between mGlu2 and mGlu3 receptors, both type 1 errors (an action via mGlu3 receptors may have been ascribed to mGlu2 receptors) and type 2 errors (lack of effect due to absence of the mGlu2 receptor) may have occurred. Published examples are difficult to find because the strains of rats are often not given in sufficient detail and negative results are not always published. It is nevertheless very important that the prevalence of this mutation is recognized by scientists investigating the role of mGlu2 and mGlu3 receptors both physiologically and therapeutically. By the same token, the cys407* mutation provides researchers with a valuable tool to distinguish between functions of mGlu2 and mGlu3 receptors, an issue that has slowed drug development in this field (Niswender and Conn, 2010; Nicoletti et al., 2011). As mentioned above, some mGlu2/3 agonists have been withdrawn from further development on the grounds of rat toxicity (Dunayevich et al., 2008); the Han Wistar rats will allow this off-target toxicity to be investigated in isolation. Knock-out mice are available for investigating the influence of individual genes.
However, the majority of behavioural tests have been developed in rats, which provide the bulk of the rodent behavioural literature (Hanell and Marklund, 2014). Hence, the cys407* mutant rats provide a valuable resource for distinguishing between mGlu2 and mGlu3 receptor functions: the mutant Han Wistar sub-strains offer the opportunity to study the role and therapeutic potential of mGlu3 receptors in the absence of complicating data from interaction of ligands with mGlu2 receptors. For example, the potential for mGlu3 agonism in neuroprotection, although widely appreciated (Bruno et al., 2001; Corti et al., 2007; Caraci et al., 2012; Motolese et al., 2015), has yet to be fully developed and could be more widely explored in these mutant rats.
General conclusion
Discovery of the cys407* mutation in Grm2 and its prevalence in Wistar rat sub-strains originating from the Hannover Institute has indicated the likely restraining influence of mGlu2 receptors in animal models of alcohol use disorders in particular, and substance use disorders in general. The data support the therapeutic opportunity for mGlu2 receptor agonists in research on drug addiction mechanisms and, more generally, for emotional, risk-related and impulsive behaviours. Rat strains with this cys407* Grm2 mutation provide a useful model for understanding the separate roles of mGlu2 and mGlu3 receptors in physiology and pathology.
MODIFICATION OF BORON DOPED DIAMOND ELECTRODES WITH GLUCOSE OXIDASE, CHARACTERIZATION BY ELECTROCHEMICAL TECHNIQUES
In this work, we report the effect of direct successive modifications with glucose oxidase of a boron doped diamond (BDD) electrode. The modification due to enzyme adsorption, on the potentiodynamic response of the electrode, was evaluated using the Fe(CN)6 4-/3- redox couple in the electrolyte, and the ΔEp variations were related to the number of modifications. Contact angle measurements and electrochemical impedance spectra were also used to characterize the modifications, and they showed variations in the same way as the potentiodynamic data.
INTRODUCTION
The electrochemistry of doped diamond electrodes was first studied by Pleskov 1, and since then boron doped diamond (BDD) electrodes have been extensively studied, from a fundamental point of view [2][3][4][5] as well as for applications [6][7][8]. Their most interesting feature is the high overpotential for both hydrogen and oxygen evolution [9][10]. The physical, chemical and electronic properties affect the electrochemical behavior of BDD electrodes, and these properties depend on the quantity and kind of dopant, on impurities and on the surface termination. The surface termination is usually generated by electrochemical methods, using water reduction or oxidation to produce H-termination or O-termination, respectively [11][12]. Granger and Swain showed that the reversibility of certain redox couples depends on the surface termination of BDD 13. Suffredini et al.
reported that Fe(CN)6 4-/3- shows a reversible response on a cathodically pre-treated BDD electrode and a quasi-reversible response if the electrode was anodically treated 14, and that the cathodically pre-treated BDD electrode loses the Fe(CN)6 4-/3- reversibility with time 5. BDD has been especially well studied for electroanalytical applications due to the special properties of this material. Determinations of chlorophenols 15, cysteine 16, caffeine 17, and others have been reported. In most cases unmodified electrodes have been used, but nanoparticle-modified electrodes have also been studied 18. A Ru[bpy]3 3+ modified oxidized BDD electrode was prepared by Wu et al. 19 in order to detect a catechin autoxidation intermediate. Notsu et al. 20 reported a tyrosinase-modified oxidized BDD electrode for determining phenols. Fortin et al. 21 discussed the oxidation reactions of the nucleosides 2′-deoxyguanosine and 2′-deoxyadenosine on an oxygenated boron-doped diamond electrode. However, to date there have been few reports concerning third-generation enzyme biosensors based on the oxidized BDD electrode. Wu et al. reported a third-generation glucose biosensor with glucose oxidase (GOD) immobilized on the surface of an oxidized BDD electrode by modification with a mixture of bovine serum albumin (BSA) and glutaraldehyde, which works without a mediator 22. In this work, we report the direct modification of a BDD electrode and its electrochemical characterization, in order to design a third-generation biosensor.
EXPERIMENTAL
A BDD electrode from Adamant Technologies was used as working electrode (p-doped, polycrystalline, 500 ppm boron doped). Prior to use, the electrode was washed with acetone, ethanol and bi-distilled water. A one-compartment electrochemical cell was used, with a saturated calomel electrode as reference electrode and a platinum coil as counter-electrode. The electrolyte used was 10 mM K4Fe(CN)6 and 10 mM K3Fe(CN)6. The BDD electrode was pretreated by applying a (±) 0.16 mA constant current in 0.5 M H2SO4 until an electrical charge of 80 C was reached, to obtain the anodically or cathodically pretreated working electrode.
The enzyme adsorption on the electrode was performed from a 4.0 U/mL aqueous solution of GOD type II from Aspergillus niger (Sigma). 20 µL of this solution was dropped onto the electrode surface for 20 min and then washed away with distilled water. Successive modifications were performed by repeating the procedure after the corresponding electrochemical characterization.
The calibration curve was obtained by fixing the electrode potential at 1.2 V and registering the current after additions of 100 µL of 1 M glucose to 30 mL of a mechanically stirred phosphate buffer solution (PBS), pH 7.4.
The BDD characterization was done by cyclic voltammetry at different scan rates. In addition, electrochemical impedance spectroscopy (EIS) was performed at the open circuit potential between 10 mHz and 10 kHz with a 10 mV ac perturbation, by means of a CH 604c potentiostat (CH Instruments) and an Ecochemie PGSTAT30 system. All solutions were kept at room temperature (approximately 23 ºC) and purged with argon.
A Dataphysics OCA 20 device with a conventional goniometer and a high-performance video camera, controlled by SCA20 software, was used to measure the optical contact angle by exposing a clean pretreated diamond surface to a 10 µL water drop and measuring the angle between the drop and the surface; the same procedure was performed on the modified electrodes.
RESULTS AND DISCUSSION
Effect of the pre-treatment on the BDD electrode response
Figure 1 A shows the potentiodynamic response of anodically (solid line) and cathodically (dashed line) pretreated BDD electrodes in the presence of the redox couple Fe(CN)6 3-/4-. For the anodically pretreated electrode, two current peaks are observed, the anodic one at 0.53 V and the cathodic one at -0.18 V. The ΔEp = 0.71 V in the potentiodynamic response of this redox couple indicates that the couple is highly irreversible on this substrate. This behavior has been well studied 5 and is due to the presence of oxygen-terminated active sites. These sites could be suitable for the adsorption of different species, such as the glucose oxidase used in this study. On the other hand, the BDD electrode used has a low boron doping level, which is another reason for the high ΔEp value 5. The Ipa depends on the square root of the scan rate (not shown); this linear relationship is typical of a diffusion-controlled reaction, such as that shown by the Fe(CN)6 3-/4- system. Figure 1 B shows the Nyquist diagrams obtained at the open circuit potential for both electrodes, with a semicircle followed by a 45º line due to a Warburg element. From the semicircle, an Rct of 210 Ohm can be obtained for the anodically pretreated electrode.
Figure 1 A also shows the potentiodynamic response of the cathodically pretreated BDD electrode in the presence of the same redox couple. In this case two current peaks are observed, the anodic one at 0.53 V and the cathodic one at -0.03 V. The ΔEp = 0.56 V indicates a lower irreversibility on this substrate compared with the anodic pretreatment. As has been reported, the redox couple Fe(CN)6 3-/4- is very reversible on cathodically pretreated BDD electrodes, but the reversibility is a function of the boron content, and in this case the low doping is responsible for the irreversibility observed 5. Again, the Ipa values depend on the square root of the scan rate. From the Nyquist diagram in Figure 1 B, an Rct of 110 Ohm can be obtained for the cathodically pretreated electrode. From the ΔEp and Rct values it is possible to confirm that the reactions studied are more reversible on the cathodically pretreated BDD electrode than on the anodically pretreated one.
Enzyme Adsorption
The enzyme adsorption was performed directly on the electrode surface, without any reagents to assist the adsorption. Figure 2 A shows the variation in the potentiodynamic profile of the anodically pretreated electrode before and after modification. The figure shows a slight change in the peak potentials, increasing ΔEp from 0.71 to 0.79 V; this effect is due to the lowering of the number of active sites by the modification. In the same way, the Nyquist diagram shown in Figure 2 B shows a variation in Rct from 210 to 250 Ohm. These results indicate that the modification process can be observed by means of electrochemical measurements, but the slight variations observed suggest evaluating the cathodic pretreatment. The same procedure was used to study a cathodically pretreated electrode and the results are shown in Figure 3.
First, the potentiodynamic response in Figure 3 A shows that ΔEp changes from 0.57 V to 0.71 V as a result of the enzyme adsorption. This larger change motivated the study of successive adsorptions on the same electrode; ΔEp continues to increase, indicating that the enzyme adsorption process continues. From the EIS spectra shown in Figure 3 B, the Rct values were obtained, and they increase in the same way as the potentiodynamic data. The response of the redox couple on this electrode (cathodically pretreated) is more reversible, which could be the reason for the larger variations observed with the number of modification processes (repeating the same modification procedure). Moreover, the EIS data show a variation in the Rct value from 126 Ohm to 200 Ohm with the first modifications, and this value continues to increase with the number of modifications. Finally, from this figure it is possible to conclude that the cathodically pretreated electrode is more suitable for studying the modification of the electrode by adsorption of this enzyme; the reversibility of the redox couple on this electrode allows a better characterization of the modification process, and the modifications are stable because, between modification processes, the electrode was electrochemically studied and thoroughly washed with water. The EIS data were simulated using the typical equivalent circuit used for the Fe(CN)6 3-/4- couple. The double layer capacitance was simulated using a constant phase element, and the α values change from 0.9 for the unmodified electrode to 0.85 for the modified ones. This parameter is very sensitive to the surface characteristics, and this change is due to the presence of the enzyme on the electrode surface.
The differences in the ΔEp and Rct values were related to the number of modifications. These relationships are shown in Figure 4 A.
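The equivalent-circuit simulation described above can be sketched numerically. The following is a minimal illustration of a Randles-type circuit with a constant phase element and a Warburg element; the Rct values (126 and 200 Ohm) and α exponents (0.9 and 0.85) come from the text, while the solution resistance Rs, the CPE magnitude Q and the Warburg coefficient sigma are illustrative assumptions, not fitted values from the paper:

```python
import numpy as np

def randles_cpe(freq_hz, Rs=50.0, Rct=126.0, Q=1e-6, alpha=0.90, sigma=100.0):
    """Complex impedance of Rs in series with [CPE || (Rct + Warburg)]."""
    w = 2 * np.pi * np.asarray(freq_hz, dtype=float)
    z_cpe = 1.0 / (Q * (1j * w) ** alpha)   # constant phase element
    z_w = sigma * (1 - 1j) / np.sqrt(w)     # semi-infinite Warburg element
    z_branch = Rct + z_w                    # faradaic branch
    return Rs + (z_cpe * z_branch) / (z_cpe + z_branch)

# Frequency window used in the experiment: 10 mHz to 10 kHz
f = np.logspace(-2, 4, 61)
z_unmod = randles_cpe(f, Rct=126.0, alpha=0.90)  # before modification
z_mod = randles_cpe(f, Rct=200.0, alpha=0.85)    # after GOD adsorption

# The semicircle in the Nyquist plot widens as Rct grows with modification
print(z_mod.real.max() > z_unmod.real.max())
```

Plotting `-z.imag` against `z.real` reproduces the semicircle-plus-45º-line shape of the Nyquist diagrams, with the semicircle diameter tracking Rct.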
In this figure it is possible to observe that both values follow the same trend, like an adsorption isotherm. Thus, the electrochemical measurements allow us to evaluate the modification of the BDD electrode.
To confirm that the electrochemical data represent the modification of the electrode, contact angle measurements were performed on the unmodified electrode and after the respective successive modifications. The unmodified electrode has a contact angle with the water drop of 27.9º; the pretreatment applied to the electrode produces an H-terminated surface, which is responsible for the hydrophobic character of the surface. After the modification the contact angle increases to 57.5º, and successive modifications increase the contact angle up to 73.1º. This variation reflects the modification of the electrode surface, and when plotted together with the ΔEp values in Figure 4 B it shows the same variation as the electrochemical data. Thus, physical and electrochemical measurements both represent the modification of the electrode surface by the enzyme, like an isotherm. Since the physical and electrochemical data show the modification of the electrode, a cyclic voltammetry experiment was performed in a glucose-containing solution to confirm that the enzyme activity remains. Figure 5 A shows the calibration curve obtained by measuring the potentiostatic response of the modified (4 modifications) BDD electrode in PBS buffer solution after additions of 100 µL of 1 M glucose solution; the inset shows the response of the electrode after each addition. From the calibration plot a detection limit of 14 µM was obtained (3SD/m, SD = standard deviation, m = slope), a value close to the detection limit found with aniline copolymers 23. Figure 5 B shows the potentiodynamic profiles of the unmodified and modified BDD electrodes in a phosphate saline solution, pH 7.4, containing 10 mM glucose. An anodic wave starting at 0.6 V is observed compared to the unmodified electrode; this current wave is due to glucose oxidation and indicates that at potentials more positive than 0.6 V the glucose oxidation takes place, which allows the design of a glucose biosensor by direct modification of a cathodically pretreated BDD electrode. Studies on the application of this electrode to glucose oxidation are in progress.
CONCLUSIONS
From this work it is possible to conclude that a cathodically pretreated BDD electrode was directly modified with GOD. Contact angle measurements, ΔEp and Rct are adequate parameters to study the adsorption phenomena on a BDD electrode, and the results presented indicate that this type of methodology could be applied to design a biosensor based on a BDD electrode.
Figure 1. A) Potentiodynamic profile of anodically (solid line) and cathodically (dashed line) pretreated BDD electrodes in 10 mM K4Fe(CN)6, 10 mM K3Fe(CN)6 at 0.1 V s-1. B) Nyquist diagram for anodically (solid circle) and cathodically (open triangle) pretreated BDD electrodes at the open circuit potential in the same electrolyte, showing the typical response of the Fe(CN)6 3-/4- system 5.
Figure 2. A) Potentiodynamic profile of an anodically pretreated BDD electrode in 10 mM K4Fe(CN)6, 10 mM K3Fe(CN)6 at 0.1 V s-1, before (dashed line) and after (solid line) modification with GOD. B) Nyquist diagram for the same electrode before (solid circle) and after (open triangle) modification with GOD; EIS spectra registered at the open circuit potential.
Figure 3. A) Potentiodynamic profile of a cathodically pretreated BDD electrode in 10 mM K4Fe(CN)6, 10 mM K3Fe(CN)6 at 0.1 V s-1, before (solid line) and after (dashed line) successive modifications with GOD. B) Nyquist diagram for the same electrode before (solid circle) and after (open circle) successive modifications with GOD; EIS spectra registered at the open circuit potential.
Figure 4. A) Rct (closed circle) and ΔEp (open triangle) vs modification number for the cathodically pretreated BDD electrode. B) Contact angle (closed diamond) and ΔEp (open triangle) vs modification number for the cathodically pretreated BDD electrode.
Figure 5. A) Calibration plot of glucose using the modified BDD electrode; the inset shows the potentiostatic response of the modified BDD electrode in PBS solution at 1.2 V after addition of different quantities of 1 M glucose solution. B) Potentiodynamic profile of the cathodically pretreated BDD electrode in PBS buffer solution, pH 7.4, at 0.1 V s-1, in the presence (solid line) and absence (dashed line) of 10 mM glucose.
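The detection-limit criterion quoted for the calibration plot (3SD/m) is a one-line computation. The helper below is a sketch; the blank noise and slope values are arbitrary placeholders (not the paper's raw data), included only to show how the criterion is applied:

```python
# Detection limit from the 3*SD/m criterion used in the text:
# SD = standard deviation of the blank signal, m = calibration slope.
def detection_limit(sd_blank, slope):
    """Return the limit of detection; units follow sd_blank / slope."""
    return 3.0 * sd_blank / slope

# Arbitrary illustrative numbers: blank noise in A, slope in A per mol/L
lod = detection_limit(sd_blank=5e-8, slope=2.0e-4)
print(f"LOD = {lod:.2e} mol/L")
```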
Time from quantum entanglement: an experimental illustration
In the last years several theoretical papers have discussed whether time can be an emergent property deriving from quantum correlations. Here, to provide an insight into how this phenomenon can occur, we present an experiment that illustrates Page and Wootters' mechanism of "static" time, and Gambini et al.'s subsequent refinements. A static, entangled state between a clock system and the rest of the universe is perceived as evolving by internal observers that test the correlations between the two subsystems. We implement this mechanism using an entangled state of the polarization of two photons, one of which is used as a clock to gauge the evolution of the second: an "internal" observer that becomes correlated with the clock photon sees the other system evolve, while an "external" observer that only observes global properties of the two photons can prove it is static.
"Quid est ergo tempus? si nemo ex me quaerat, scio; si quaerenti explicare velim, nescio." ("What then is time? If no one asks me, I know; if I wish to explain it to one who asks, I do not know.")
[1] The "problem of time" [2][3][4][5][6] in essence stems from the fact that a canonical quantization of general relativity yields the Wheeler-De Witt equation [7,8], predicting a static state of the universe, contrary to obvious everyday evidence. A solution was proposed by Page and Wootters [9,10]: thanks to quantum entanglement, a static system may describe an evolving "universe" from the point of view of the internal observers. Energy entanglement between a "clock" system and the rest of the universe can yield a stationary state for a (hypothetical) external observer that is able to test the state against abstract coordinate time. The same state will instead be evolving for internal observers that test the correlations between the clock and the rest [9][10][11][12][13][14]. Thus, time would be an emergent property of subsystems of the universe deriving from their entangled nature: an extremely elegant but controversial idea [2,15]. Here we want to demystify it by showing experimentally that it can be naturally embedded into (small) subsystems of the universe, where Page and Wootters' mechanism (and Gambini et al.'s subsequent refinements [12,16]) can be easily studied. We show how a static, entangled state of two photons can be seen as evolving by an observer that uses one of the two photons as a clock to gauge the time evolution of the other photon. However, an external observer can show that the global entangled state does not evolve. Even though it revolutionizes our ideas on time, Page and Wootters' (PaW) mechanism is quite simple [9][10][11]: they provide a static entangled state |Ψ⟩ whose subsystems evolve according to the Schrödinger equation for an observer that uses one of the subsystems as a clock system C to gauge the time evolution of the rest R.
While the division into subsystems is largely arbitrary, the PaW model assumes the possibility of neglecting interactions among them, writing the Hamiltonian of the global system as H = Hc ⊗ 1r + 1c ⊗ Hr, where Hc, Hr are the local terms associated with C and R, respectively [10].
FIG. 1: Gate array representation of the PaW mechanism [9][10][11] for a CR non-interacting model. Here Ur(t) = e^(-iHr t/ℏ) and Uc(t) = e^(-iHc t/ℏ) are the unitary time evolution operators of the clock C and of the rest of the universe R, respectively. |Ψ⟩ is the global state of the system, which is assumed to be an eigenstate with null eigenvalue of the global Hamiltonian H = Hc + Hr (see text).
In this framework the state of the "universe" |Ψ⟩ is then identified by enforcing the Wheeler-De Witt equation H|Ψ⟩ = 0, i.e. by requiring |Ψ⟩ to be an eigenstate of H for the zero eigenvalue. The rationale of this choice follows from the observation that by projecting |Ψ⟩ on the states |φ(t)⟩c = e^(-iHc t/ℏ)|φ(0)⟩c of the clock, one gets the vectors that describe a proper evolution of the subsystem R under the action of its local Hamiltonian Hr, the initial state being |ψ(0)⟩r = c⟨φ(0)|Ψ⟩ (see Fig. 1). Therefore, despite the fact that globally the system appears to be static, its components exhibit correlations that mimic the presence of a dynamical evolution [9][10][11]. Two main flaws of the PaW mechanism have been pointed out [2,15]. The first is based on the (reasonable) skepticism about accepting that quantum mechanics may describe a system as large as the universe, together with its internal observers [11,12]. The second has a more practical character and is based on the observation that in the PaW model the calculation of transition probabilities and of propagators appears to be problematic [2,11]. An attempt to fix the latter issue has been discussed by Gambini et al.
(GPPT) [12,16], extending a proposal by Page [11] and invoking Rovelli's notion of 'evolving constants' [17] (a brief overview of this approach is given in the appendix). In this work we present an experiment which allows reproducing the basic features of the PaW and GPPT models. In particular, the PaW model is realized by identifying |Ψ⟩ with an entangled state of the vertical V and horizontal H polarization degrees of freedom of two photons in two spatial modes c, r (see following section), and enforcing the Wheeler-De Witt equation by taking Hc = Hr = iℏω(|H⟩⟨V| − |V⟩⟨H|) as the local Hamiltonians of the system (ω being a parameter which defines the time scale of the model). For this purpose, rotations of the polarization of the two photons are induced by forcing them to travel through identical birefringent plates, as shown in Fig. 2. This allows us to consider a setting where everything can be decoupled from the "flow of time", i.e. when the photons are traveling outside the plates. Nonetheless, the clock photon is a true (albeit extremely simple) clock: its polarization rotation is proportional to the time it spends crossing the plates. Although extremely simple, our model captures the two, seemingly contradictory, properties of the PaW mechanism: the evolution of the subsystems relative to each other, and the staticity of the global system. This is achieved by running the experiment in two different modes (see Fig. 2a): (1) an "observer" mode, where the experimenter uses the readings of the clock photon to gauge the evolution of the other: by measuring the clock photon polarization he becomes correlated with the subsystems and can determine their evolution.
This mode describes the conventional observers in the PaW mechanism: they are, themselves, subsystems of the universe and become entangled with the clock systems, so that they see an evolving universe; (2) a "super-observer" mode, where he carefully avoids measuring the properties of the subsystems of the entangled state and measures only global properties: he can then determine that the global system is static. This mode describes what a (hypothetical) observer external to the universe would see by measuring global properties of the state |Ψ⟩: such an observer has access to abstract coordinate time (namely, in our experimental implementation he can measure the thickness of the plates) and he can prove that the global state is static, as it will not evolve even when the thickness of the plates is varied. In observer mode (Fig. 2a, pink box) the clock is the polarization of a photon. It is an extremely simple clock: it has a dial with only two values, either |H⟩ (detector 1 clicked), corresponding to time t = t1, or |V⟩ (detector 2 clicked), corresponding to time t = t2. [Here t2 − t1 = π/2ω, where ω is the polarization rotation rate of the quartz plate, since the polarization is flipped in this time interval.] The experimenter also measures the polarization of the first photon with detectors 3 and 4. This last measurement can be expressed as a function of time (he has access to time only through the clock photon) by considering the correlations between the results from the two photons: the time-dependent probability that the first photon is vertically polarized (i.e. that detector 3 fires) is p(t1) = P3|1 and p(t2) = P3|2, where P3|x is the conditional probability that detector 3 fired, conditioned on detector x firing (experimental results are presented in Fig. 3a). This type of conditioning is typical of every time-dependent measurement: experimenters always condition their results on the value they read on the lab's clock (the second photon in this case).
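The two modes just described can be illustrated numerically. The following is a minimal sketch (with ℏ = 1): the local Hamiltonians are those given in the text, while for the global state we take the polarization singlet, one choice consistent with the Wheeler-De Witt constraint (the paper's explicit state is not reproduced in this excerpt, so the singlet here is an assumption). The super-observer check is H|Ψ⟩ = 0; the observer check is the conditional probability of finding the system photon vertically polarized, given the clock reading:

```python
import numpy as np

omega = 1.0                            # clock rotation rate (hbar = 1)
H_ket = np.array([1.0, 0.0])           # |H>
V_ket = np.array([0.0, 1.0])           # |V>

# Local Hamiltonian Hc = Hr = i*omega*(|H><V| - |V><H|), as in the text
h_local = 1j * omega * (np.outer(H_ket, V_ket) - np.outer(V_ket, H_ket))
I2 = np.eye(2)
H_global = np.kron(h_local, I2) + np.kron(I2, h_local)

# Assumed global state: polarization singlet (|H>|V> - |V>|H>)/sqrt(2)
psi = (np.kron(H_ket, V_ket) - np.kron(V_ket, H_ket)) / np.sqrt(2)

# Super-observer mode: the global state is static, H|Psi> = 0
print(np.allclose(H_global @ psi, 0))          # True

def clock_evolution(t):
    """exp(-i*Hc*t) in closed form: a rotation of the polarization."""
    c, s = np.cos(omega * t), np.sin(omega * t)
    return np.array([[c, s], [-s, c]])

# Observer mode: project |Psi> onto the evolved clock state
# |phi(t)>_c = exp(-i*Hc*t)|H>_c and normalize the conditional state.
def p_vertical(t):
    phi_t = clock_evolution(t) @ H_ket
    cond = np.kron(phi_t.conj(), I2) @ psi     # <phi(t)|_c applied to |Psi>
    cond = cond / np.linalg.norm(cond)
    return abs(cond[1]) ** 2                   # probability of |V> on system

print(round(p_vertical(0.0), 6))               # 1.0 at clock time t1
print(round(p_vertical(np.pi / (2 * omega)), 6))   # 0.0 at t2 = t1 + pi/2w
```

The global state never changes, yet the conditional polarization of the second photon rotates with the clock reading, which is exactly the pair of seemingly contradictory properties the experiment demonstrates.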
The experimenter has access only to physical clocks, not to abstract coordinate time [10,17,18]. In our experiment this restriction is implemented by employing a different phase plate A (of random thickness, unknown to the experimenter) in every experimental run. In super-observer mode (Fig. 2a, yellow box) the experimenter takes the place of a hypothetical observer external to the universe that has access to the abstract coordinate time and tests whether the global state of the universe has any dependence on it. Hence, he must perform a quantum interference experiment that tests the coherence between the different histories (wavefunction branches) corresponding to the different measurement outcomes of the internal observers, represented by the which-way information after the polarizing beam splitter PBS1. In our setup, this interference is implemented by the beam splitter BS of Fig. 2b. It is basically a quantum erasure experiment [19,20] that coherently "erases" the results of the time measurements of the internal observer: conditioned on the photon exiting from the right port of the beam splitter, the information on its input port (i.e. the outcome of the time measurement) is coherently erased [21]. The erasure of the time measurement by the internal observers is necessary to prevent the external observer (super-observer) himself from becoming correlated with the clock. However, the super-observer has access to abstract coordinate time: he knows the thickness of the blue plates, which is precluded to the internal observers, and he can test whether the global state evolves (experimental results are presented in Fig. 3b). In addition, we also test the GPPT mechanism, showing that our experiment can account for two-time measurements (see Fig. 2b). These are implemented by the two polarizing beam splitters PBS1 and PBS2.
PBS_1 represents the initial time measurement that determines when the experiment starts: it is a non-demolition measurement obtained by coupling the photon polarization to its propagation direction, while the initialization of the system state is here implemented through the entanglement. PBS_2 together with detectors 1 and 2 represents the final time measurement by determining the final polarization of the photon. Between these two time measurements both the system and the clock evolve freely (the evolution is implemented by the birefringent plates A). In the GPPT mechanism, the abstract coordinate time (the thickness of the quartz plates A) is inaccessible and must be averaged over [11,12,16]. This restriction is implemented in the experiment by not taking into account the thickness of the blue quartz plates A when extracting the conditional probabilities from the coincidence rates: the rates obtained with different plate thicknesses are all averaged together. The formal mapping of the GPPT mechanism to our experiment is detailed in the appendix.

[Caption of Fig. 3 — (a) Observer mode: circles and squares represent p(t_1) = P_{3|1} and p(t_2) = P_{3|2} respectively, namely the probabilities of measuring V on subsystem 1 as a function of the clock time t_1, t_2; circles and triangles represent P_{4|1} and P_{4|2}, the probabilities of measuring H on subsystem 1 as a function of the clock time. As expected from the PaW mechanism, these probabilities are independent of the abstract coordinate time T, represented by different phase plate A thicknesses (here we used a 957 µm thick quartz plate rotated by 15 different equiseparated angles). The inset shows the graph that the observer himself would plot as a function of clock time: circles representing the probabilities of finding the system photon V at the two times t_1, t_2, the triangles of finding it H. (b) Super-observer mode: plot of the conditional fidelity between the tomographically reconstructed state and the theoretical initial state |Ψ⟩ of Eq. (2) as a function of the abstract coordinate time T. The fidelity F = ⟨Ψ|ρ_out|Ψ⟩ (which measures the overlap between the theoretical initial state |Ψ⟩ and the final state ρ_out after its evolution through the plates) is conditioned on the clock photon exiting the right port of the beam splitter BS. The fact that the fidelity is constant and close to one (up to experimental imperfections) proves that the global entangled state is static.]

As before, the time-dependent probability of finding the system photon vertically polarized is p(t_1) = P_{3|1} and p(t_2) = P_{3|2}. However, a clock that returns only two possible values (t_1 and t_2) is not very useful. To obtain a more interesting clock, the experimenter performs the same conditional probability measurement introducing varying time delays to the clock photon, implemented through quartz plates of variable thickness (dashed box B in Fig. 2b). [Even though he has no access to abstract coordinate time, he can have access to systems that implement known time delays, which he can calibrate separately.] Now he obtains a sequence of time-dependent values for the conditional probability: p(t_1 + τ_i) = P^{τ_i}_{3|1} and p(t_2 + τ_i) = P^{τ_i}_{3|2}, where τ_i = δ_i/ω is the time delay of the clock photon obtained by inserting the quartz plate B with thickness δ_i in the clock photon path. The experimental results are presented in Fig. 4, where each colour represents a different delay: the yellow points refer to τ_0, the red points to τ_1, etc. They are in good agreement with the theory (dashed line) derived in the appendix. The reduction in visibility of the sinusoidal time dependence of the probability is caused by the decoherence effect due to the use of a low-resolution clock (our clock outputs only two possible values), a well-known effect [10,16,22,23].
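This visibility reduction can be illustrated numerically. A minimal sketch using the appendix result p(t_1 + τ) = (1 + 2 cos²(ωτ))/4: the probability oscillates between 1/4 and 3/4 rather than 0 and 1, so the fringe visibility is limited to 1/2 by the two-valued clock.

```python
# Sketch: fringe visibility of the time-dependent probability for a
# two-valued (low-resolution) clock, using the appendix result
# p(t1 + tau) = (1 + 2*cos(w*tau)**2)/4.
import math

def p_t1(w_tau):
    return (1 + 2 * math.cos(w_tau) ** 2) / 4

values = [p_t1(2 * math.pi * i / 200) for i in range(200)]
visibility = (max(values) - min(values)) / (max(values) + min(values))
print(round(min(values), 3), round(max(values), 3), round(visibility, 3))
# -> 0.25 0.75 0.5
```

A perfect (continuous) clock would give full visibility; the value 1/2 is the decoherence penalty of a clock with only two dial positions.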
In summary, by running our experiment in two different modes ("observer" and "super-observer" mode) we have experimentally shown how the same energy-entangled Hamiltonian eigenstate can be perceived as evolving by the internal observers that test the correlations between a clock subsystem and the rest (also when considering two-time measurements), whereas it is static for the super-observer that tests its global properties. Our experiment is a practical implementation of the PaW and GPPT mechanisms but, obviously, it cannot discriminate between these and other proposed solutions for the problem of time [2-6]. In closing, we note that the time-dependent graphs of Fig. 4 have been obtained without any reference to an external time (or phase) reference, but only from measurements of correlations between the clock photon and the rest: they are an implementation of a 'relational' measurement of a physical quantity (time) relative to an internal quantum reference frame [24,25].

Experimental setup

The experimental setup (Fig. 2) consists of two blocks: "preparation" and "measurement". The preparation block produces a family of biphoton polarization-entangled states of the form given in Eq. (2) by exploiting the standard method of coherently superimposing the emission of two type-I crystals whose optical axes are rotated by 90° [26]. The measurement block can be mounted in different configurations corresponding to the "observer" and "super-observer" ones of the PaW and GPPT schemes (Fig. 1). In general, each arm of the measurement block contains interference filters (IF) with central wavelength 702 nm (FWHM 1 nm) and a polarizing beam splitter (PBS). Before the PBS the polarization of both photons evolves in the birefringent quartz plates A (blue boxes in Fig. 2) as |V⟩ → |V⟩ cos δ + i |H⟩ sin δ, where δ is the material's optical thickness. "Observer" mode in PaW scheme (Fig.
2, block a): In this mode, the polarization of the photon in the lower arm is used as a clock: the first polarizing beam splitter PBS_1 acts as a non-demolition measurement in the H/V basis of the polarization of the second photon, finally detected by single-photon avalanche diodes (SPAD) 1, 2. In this mode, the experimenter has no access to an external clock; he can only use the correlations (coincidences) between detectors: the time-dependent probability of finding the first photon in |V⟩ is obtained from the coincidence rate between detectors 1-3 (corresponding to a measurement at time t_1) or 2-3 (corresponding to a measurement at time t_2): appropriately normalized, these coincidence rates yield the conditional probabilities P_{3|x}. The impossibility of directly accessing abstract coordinate time (the thickness of the plates) is implemented by averaging the coincidence rates obtained for all possible thicknesses of the birefringent plates A: the plate thickness does not enter into the data processing in any way. "Super-observer" mode in PaW scheme (Fig. 1b): This mode is employed to prove that the global state is static with respect to abstract coordinate time, represented by the thickness of the quartz plates A. The 50/50 beam splitter (BS) in block b performs a quantum erasure of the polarization measurement (performed by the polarizing beam splitter PBS_1) conditioned on the photon exiting its right port. For temporal stability, the interferometer is placed into a closed box. The output state is reconstructed using ququart state tomography [27-29] (the two-photon polarization state lives in a four-dimensional Hilbert space), where the projective measurements are realized with polarization filters consisting of a sequence of quarter- and half-wave plates and a polarization prism which transmits vertical polarization (Fig. 4). The fidelity between the tomographically reconstructed state and the theoretical state |Ψ⟩ is reported in Fig. 3b.
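The static character of the global state can also be checked numerically. The sketch below makes two assumptions not spelled out in the text: that the entangled state has the singlet form |Ψ⟩ = (|H⟩|V⟩ − |V⟩|H⟩)/√2 (the state of Eq. (2)), and that the plate evolution given above acts symmetrically (|H⟩ → cos δ |H⟩ + i sin δ |V⟩) with the same thickness δ on both arms. Under these assumptions the overlap with the initial state stays at unity for any δ:

```python
# Sketch: verify that the entangled global state is unchanged (up to a global
# phase) under the plate evolution applied to both photons.
# Assumptions (labeled, not from the text): singlet form of Eq. (2),
# symmetric plate action on H and V, equal thickness delta in both arms.
import numpy as np

def plate(delta):
    # Single-photon plate evolution in the (H, V) basis:
    # |V> -> cos d |V> + i sin d |H>, and symmetrically for |H>.
    return np.array([[np.cos(delta), 1j * np.sin(delta)],
                     [1j * np.sin(delta), np.cos(delta)]])

H = np.array([1.0, 0.0])
V = np.array([0.0, 1.0])
psi = (np.kron(H, V) - np.kron(V, H)) / np.sqrt(2)  # assumed singlet, Eq. (2)

for delta in np.linspace(0.0, np.pi, 7):
    U = np.kron(plate(delta), plate(delta))
    overlap = abs(np.vdot(psi, U @ psi))  # |<Psi| U x U |Psi>|
    assert abs(overlap - 1.0) < 1e-12    # static up to a global phase
print("state is static for all tested thicknesses")
```

The invariance follows because the singlet picks up only the determinant of the single-photon evolution, which is one here: this is the numerical counterpart of the constant fidelity of Fig. 3b.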
GPPT two-time scheme

Here a second PBS preceding the detectors allows a two-time measurement. To obtain a more interesting time dependence than the probability at only two times, we delay the clock photon with an additional birefringent plate B (dashed box in Fig. 2), a 1752 µm-thick quartz plate rotated at nine different angles, placed in the lower arm, and we repeat the same procedure described above for different thicknesses of the plate B. This represents an internal observer that introduces a (known) time delay to his clock measurements. The results are shown in Fig. 3. [Figure caption (experimental setup): We implement PaW or GPPT as in Fig. 1. In PaW super-observer mode the final state is checked by quantum state tomography [27-29], realised by registering the coincidence rate for 16 different projections achieved through half- and quarter-wave plates and a fixed analyzer (V).]

Appendix

In this appendix we detail how our experiment implements the Gambini et al. (GPPT) proposal [12,16] for extending the PaW mechanism [9-11] to describe multiple time measurements. We also derive the theoretical curve of Fig. 4. Time-dependent measurements performed in the lab typically require two time measurements: they establish the times at which the experiment starts and ends, respectively. The PaW mechanism can accommodate the description of these situations by supposing that the state of the universe contains records of the previous time measurements [11]. However, this observation in itself seems insufficient to derive the two-time correlation functions (transition probabilities and time propagators) with their required properties, a strong criticism directed at the PaW mechanism [2,11]. The GPPT proposal manages to overcome this criticism. It is composed of two main ingredients: the recourse to Rovelli's 'evolving constants' to describe observables that commute with global constraints, and the averaging over the abstract coordinate time to eliminate any dependence on it in the observables.
Our experiment tests the latter aspect of the GPPT theory. Measurements of a physical quantity at a given clock time, say t, are described by the conditional probability of obtaining an outcome on the system, say d, given that the clock time-measurement produces the outcome t. This conditional probability is given by [12,16]

p(d|t) = ∫ dT Tr[P_{d,t}(T) ρ] / ∫ dT Tr[P_t(T) ρ] ,   (4)

where ρ is the global state, P_t(T) is the projector relative to a result t for a clock measurement at coordinate time T, and P_{d,t}(T) is the projector relative to a result d for a system measurement and t for a clock measurement at coordinate time T (working in the Heisenberg picture with respect to coordinate time T). Clearly, such an expression can be readily generalized to arbitrary POVM measurements. (A similar expression, but in the Schrödinger picture, already appears in [11].) The integral that averages over the abstract coordinate time T in (4) embodies the inaccessibility of the time T by the experimenter: he can access only the clock time t, an outcome of measurements on the clock system. A generalization of this expression to multiple time measurements is expressed by [12]

p(d, t_f | d_i, t_i) = ∫ dT ∫ dT' Tr[P_{d,t_f}(T) P_{d_i,t_i}(T') ρ P_{d_i,t_i}(T')] / ∫ dT ∫ dT' Tr[P_{t_f}(T) P_{d_i,t_i}(T') ρ P_{d_i,t_i}(T')] ,   (5)

which gives the conditional probability of obtaining d on the system given that the final clock measurement returns t_f and given that a "previous" joint measurement of the system and clock returns d_i, t_i. (This expression can also be formulated as a conventional state reduction driven by the first measurement [16].) In our experiment, to implement the GPPT mechanism (Fig. 2b), we must calculate the conditional probability that the system photon is V (namely, detector 3 clicks) given that the clock photon is H after the first polarizing beam splitter PBS_1 (initial time measurement) and is H or V after the second polarizing beam splitter (final time measurement). The initial time measurement succeeds whenever one of photodetectors 1 or 2 clicks: this means that the clock photon chose the H path at PBS_1.
(Our experiment discards the events where the first time measurement at PBS_1 finds V, although in principle one could easily take these cases into account by adding a polarizing beam splitter and two photodetectors also in the V output mode of PBS_1.) The final time measurement is given by a click at either photodetector 1 or 2: the clock dial shows t_f = t_1 and t_f = t_2 = t_1 + π/2ω, respectively. Using the GPPT mechanism of Eq. (5), this means that the time-dependent probability that the system photon is vertical (detector 3 clicks) is given by

p(3 | t_f = t_k) = ∫ dT ∫ dT' Tr[P_{d=3,t_f=t_k}(T) P_{d_i,t_i}(T') ρ P_{d_i,t_i}(T')] / ∫ dT ∫ dT' Tr[P_{t_f=t_k}(T) P_{d_i,t_i}(T') ρ P_{d_i,t_i}(T')] ,   (6)

where P_{d=3,t_f=t_k} is the joint projector connected to detector 3 and detector k = 1 or k = 2, and P_{d_i,t_i} is the projector connected to the first time measurement. The latter projector is implemented in our experiment by considering only those events where either detector 1 or detector 2 clicks; this ensures that the clock photon chose the H path at PBS_1 (namely, the initial time is t_i) and that the system photon was initialized as |V⟩ at time t_i. (In principle, we could also consider a different initial time by employing the events where the clock photon chooses the path V at PBS_1.) Introducing the unitary abstract-time evolution operators U_T, the numerator of Eq. (6) becomes

∫ dT Tr[P_{d=3,t_f=t_k} U_T P_{d_i,t_i} ρ P_{d_i,t_i} U_T†] ,   (7)

where we use the property U_T U_{T'}† = U_{T−T'} and we drop one of the two time integrals by taking advantage of the time invariance of the global state ρ (which has also been tested experimentally in the super-observer mode). Gambini et al. typically suppose that the clock and the rest are in a factorized state [16], but this hypothesis is not strictly necessary for their theory [12]: we drop it so that we can use the same initial global state that we used for testing the PaW mechanism. Using the same procedure also to calculate the denominator of Eq.
(6), we can rewrite this equation as

p(3 | t_f = t_k) = Tr[P_{d=3,t_f=t_k} ρ̄] / Tr[P_{t_f=t_k} ρ̄] ,

where ρ̄ is the time-average of the global state after the first projection, namely

ρ̄ = ∫ dT U_T P_{d_i,t_i} ρ P_{d_i,t_i} U_T† ,   (8)

where the averaging over the abstract coordinate time T is used to remove its dependence from the state. In our experiment such an average is implemented by introducing random values of the phase plates A (unknown to the experimenter) in different experimental runs. In our GPPT experiment there are two possible values for the initial projector P_{d_i,t_i}: either the clock photon is projected onto the H path after PBS_1 (corresponding to an initial time t_i) or it is projected onto the V path (corresponding to an initial time t_i + π/2ω). We will consider only the first case, which corresponds to a click of either detector 1 or 2: we are post-selecting only on the experiments where the initial time is t_i. In this case, the global initial state will be |H⟩_c |V⟩_r, which is evolved into the vector

|Ψ(T)⟩ = [cos(ω(T + τ)) |H⟩_c − sin(ω(T + τ)) |V⟩_c] [cos(ωT) |V⟩_r + sin(ωT) |H⟩_r] ,

where H is the global Hamiltonian defined in the main text and τ is the time delay introduced by the plate B of Fig. 2b. Moreover, the projectors in Eq. (7) are

P_{d=3,t_f=t_k} ≡ |k⟩_c⟨k| ⊗ |V⟩_r⟨V| , and P_{t_f=t_k} ≡ |k⟩_c⟨k| ⊗ 1_r ,   (9)

where |k = 0⟩_c ≡ |H⟩_c and |k = 1⟩_c ≡ |V⟩_c. The projector P_{d=3,t_f=t_k} corresponds to the joint click of detectors k and 3, while P_{t_f=t_k} corresponds to the click of detector k and either one of detectors 3 or 4. In other words, Eq. (7) can be written as

p(3 | t_f = t_k) = P_{3k} / (P_{3k} + P_{4k}) ,   (10)

where P_{jk} is the joint probability of detectors j and k clicking. For example, P_32 is the joint probability that detectors 3 and 2 click, namely that both the clock and the system photon were V. Considering only the component |V⟩_c |V⟩_r of the state |Ψ(T)⟩, this is given by

P_32 = (1/2π) ∫_0^{2π} dϕ sin²(ϕ + ωτ) cos²(ϕ) = (1 + 2 sin²(ωτ))/8 ,   (11)

where we have calculated the integral over T of Eq. (8) using the change of variables ωT = ϕ.
Proceeding analogously for all the other joint probabilities, namely replacing the projectors (9) into (7), we find the probability of detector 3 clicking (namely, the system photon being V) conditioned on the time t_f read on the clock photon:

p(3 | t_f = t_1) = (1 + 2 cos²(ωτ))/4 ,   (12)
p(3 | t_f = t_2) = (1 + 2 sin²(ωτ))/4 ,   (13)

which are plotted as a function of τ in Fig. 3b (dashed line). Since t_2 = t_1 + π/2ω, we have plotted the points relative to p(3|t_2) as displaced by π/2 with respect to the points relative to p(3|t_1), so that the two curves (12) and (13) are superimposed in Fig. 3.
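These closed forms can be checked by averaging the joint detection probabilities of |Ψ(T)⟩ numerically over the coordinate time (with φ = ωT), under the same post-selection on the H path at PBS_1 used above:

```python
# Sketch: numerically average the joint detection probabilities of |Psi(T)>
# over phi = w*T and compare with the closed forms of Eqs. (12)-(13).
# Detector 1/2 = clock H/V after PBS_2; detector 3/4 = system V/H.
import math

def conditionals(w_tau, n=20000):
    P = {(j, k): 0.0 for j in (3, 4) for k in (1, 2)}
    for i in range(n):
        phi = 2 * math.pi * i / n
        amp_clock = {1: math.cos(phi + w_tau), 2: -math.sin(phi + w_tau)}
        amp_sys = {3: math.cos(phi), 4: math.sin(phi)}
        for j in (3, 4):
            for k in (1, 2):
                P[(j, k)] += (amp_clock[k] * amp_sys[j]) ** 2 / n
    p1 = P[(3, 1)] / (P[(3, 1)] + P[(4, 1)])  # p(3 | t_f = t1)
    p2 = P[(3, 2)] / (P[(3, 2)] + P[(4, 2)])  # p(3 | t_f = t2)
    return p1, p2

for w_tau in (0.0, 0.4, 1.1):
    p1, p2 = conditionals(w_tau)
    assert abs(p1 - (1 + 2 * math.cos(w_tau) ** 2) / 4) < 1e-3  # Eq. (12)
    assert abs(p2 - (1 + 2 * math.sin(w_tau) ** 2) / 4) < 1e-3  # Eq. (13)
print("numeric averages match Eqs. (12)-(13)")
```

The numeric average reproduces the half-visibility sinusoids of the two-valued clock discussed in the main text.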
Development of a High-Linearity Voltage and Current Probe with a Floating Toroidal Coil: Principle, Demonstration, Design Optimization, and Evaluation

As the conventional voltage and current (VI) probes widely used in plasma diagnostics have separate voltage and current sensors, crosstalk between the sensors leads to degradation of measurement linearity, which is related to practical accuracy. Here, we propose a VI probe with a floating toroidal coil that plays the roles of both a voltage and a current sensor and is thus free from crosstalk. The operation principle and optimization conditions of the VI probe are demonstrated and established via three-dimensional electromagnetic wave simulation. Based on the optimization results, the proposed VI probe is fabricated and calibrated for the root-mean-square (RMS) voltage and current with a high-voltage probe and a vector network analyzer. It is then evaluated through a comparison with a commercial VI probe, with the results demonstrating that the fabricated VI probe achieved a slightly higher linearity than the commercial probe: R² of 0.9967 and 0.9938 for RMS voltage and current, respectively. The proposed VI probe is believed to be applicable to plasma diagnostics as well as process monitoring with higher accuracy.

Introduction

Plasma, called the fourth state of matter, consists of physically energetic charged particles (electrons, positive ions, negative ions) and chemically reactive neutral particles (radicals) [1]. Due to their high physical energy and chemical reactivity, plasmas have been widely used in various fields such as semiconductor fabrication, the medical and environmental industries, aerospace, bio, and nuclear fusion science [2,3]. In particular, in semiconductor fabrication, plasmas significantly influence the plasma etching [4-7], ashing [8,9], and deposition [10,11] processes used to realize feature sizes on the nanoscale.
As feature sizes continue to shrink towards a few nanometers with improved levels of integration, process abnormalities such as arcing and leakage that reduce productivity have been regarded as serious problems [12-14]. These diagnostic techniques have been well studied and are commonly used in various research fields. Some of them, however, especially the Langmuir probe and microwave probes, are not suitable for plasma process monitoring, since they are invasive and as a result would distort, and be perturbed by, the process. Recently, low-frequency modulation technology and non-invasive types have been proposed and are still under development [27,31,32,38,39]. The most commonly implemented plasma process monitoring tools are the OES and the VI probe; they are non-invasive and easy to install in process equipment [40-44]. In general, an OES measures the optical emission from the plasma via an optical window and is used for gas composition analysis and anomalous behavior detection. Despite their convenience, however, OESs have limitations in the following three respects: optical window contamination, the narrow spaces of process facilities, and complicated analysis. Process gases such as CF4, C4F8, CHF3, and SiH4 cause optical window contamination that either degrades the emission intensity or cuts off some spectral bands [45], issues for which several techniques have been developed [45-47]. Moreover, some process chambers have no optical window, since one would perturb process uniformity. Finally, the optical spectra of process gases are highly complicated and pose challenges to analysis, since the atomic and molecular spectra overlap, and in certain cases there are no fundamental spectral data for some gases and their compounds [48,49].
The VI probe, in general, measures the voltage and current of the electrode (or antenna) used to generate the plasma [35,43,44] and is employed for plasma parameter analysis with circuit modeling and for sensitive detection of anomalous behaviors, especially arcing. As VI probes can be conveniently installed between the electrode (or antenna) and an impedance matcher, they are free from contamination. Nevertheless, since traditional VI probes have separate voltage and current sensors, crosstalk, which is defined as capacitive coupling between the sensors, leads to a degradation of measurement linearity, or in other words, accuracy. To minimize crosstalk, one commonly employed technique is to separate the voltage and current sensors by inserting a metal shield (called a Faraday shield) between them. Lafleur et al. [50] invented a coaxial-type VI probe named the Vigilant probe, where the voltage sensor (called the D-dot antenna) has a conical shape and the current sensor has an axisymmetric groove. Since the current sensor is embedded into external grounded metal and is separated from the voltage sensor, crosstalk can be minimized. In another example, Plasmart Inc. (Daejeon, Korea) [51] developed a printed circuit board (PCB)-type VI probe with a Faraday shield located between the voltage and current sensors to block crosstalk through the inside of the PCB. Despite the Faraday shield, however, crosstalk passing over the PCB still exists. To remove crosstalk completely, Kim et al. [52] developed a VI probe with double walls designed to prohibit the crosstalk passing over as well as through the inside of the PCB. However, in a high-power environment, crosstalk can penetrate the Faraday shield, and conventional blocking methods are not effective. Here, we propose a VI probe with a floating toroidal coil (FTC). Since the FTC plays the role of both a voltage and a current sensor, the VI probe is free from crosstalk.
Through three-dimensional (3D) electromagnetic wave simulation, we first demonstrate the operation principle and establish optimization conditions. Then, based on the optimization results, we fabricate the VI probe and evaluate it through a comparison with a commercial VI probe. The results demonstrate that the fabricated VI sensor has a higher linearity than the commercial probe. The rest of this paper is organized as follows. Section 2 provides an explanation and demonstration of the operation principle of the FTC with 3D electromagnetic wave simulation; the design optimization procedure through simulation and the resulting optimum conditions are also presented. In Section 3, calibration and evaluation of the fabricated VI probe are investigated. Then, in Section 4, we summarize the significant results of this paper.

Principle of a Floating Toroidal Coil as a Voltage and Current Sensor

In this section, the operation principle of the FTC is qualitatively explored. Figure 1a presents a schematic diagram of the FTC with a cross-sectional view of the signal rod connected to a radio frequency (RF) generator. When RF power is applied to the signal rod, an RF voltage is created and an RF current flows through the signal rod. For ease of understanding, we initially assume two ideal cases: (i) only RF voltage (V_RF), and (ii) only RF current (I_RF). For the former case, a voltage on the FTC is induced by capacitive coupling between the FTC and ground through a time-varying electric field, depicted with green arrows in Figure 1a. Here, capacitive coupling means that the FTC plays the role of a counter-electrode with respect to the rod, like a capacitor. Since the RF wavelength is much longer than the dimensions of the FTC, the FTC voltage (V_coil) is uniformly distributed between points a and b (Figure 1a) at any RF phase, as shown in Figure 1b; the uniform V_coil therefore oscillates sinusoidally with time.
For the latter, RF-current-only case, a voltage difference between the FTC ends (a and b, Figure 1a) is induced by inductive coupling between the FTC and the rod through a time-varying magnetic field. Inductive coupling here follows Faraday's law of induction: an electromotive force is induced to oppose the time-varying magnetic field created by I_RF. As shown in Figure 1c, V_coil is non-uniformly distributed. Note that the center of the FTC acts as a ground and the ends show push/pull characteristics during RF oscillation. In a realistic situation, V_RF and I_RF exist simultaneously. This means that V_coil is induced by a combination of both capacitive and inductive coupling effects. Provided that these effects can be linearly combined (as proved in the next section), the spatiotemporal behavior of V_coil becomes the sum of Figure 1b,c. Therefore, the center V_coil and the difference of V_coil between the ends represent V_capacitive and V_inductive, respectively. Here, V_capacitive and V_inductive denote the magnitudes of the respective couplings, as shown in Figure 1b,c. Practical use of the FTC to estimate V_RF and I_RF is as follows. We assume that from point a to point b the FTC is symmetric about its center, as shown in Figure 1a. Then, the center V_coil is the same as V_capacitive, since V_inductive is zero during RF oscillation at that position (see Figure 1c). Provided that V_coil is symmetrically distributed throughout the FTC, the average value can be taken as the arithmetic mean of the voltages at the ends; hence, V_capacitive is defined as

V_capacitive = (V_coil^a + V_coil^b)/2 ,   (1)

where V_coil^a and V_coil^b are the voltages of the FTC at each end (a and b, shown in Figure 1a). As V_capacitive results from capacitive coupling, it is noted that the summation of V_coil^a and V_coil^b can be proportional to V_RF and is thus a good indicator with which to measure V_RF with a coefficient α, as

V_RF = α (V_coil^a + V_coil^b) .   (2)

From a similar perspective, measuring I_RF can be explained as follows.
Given that the voltage difference of V_coil at the ends originates from inductive coupling, V_inductive is defined as

V_inductive = V_coil^a − V_coil^b .   (3)

Similarly to the above, it is worth noting that here the subtraction of V_coil^b from V_coil^a can be proportional to I_RF and is thus a good indicator with which to measure I_RF with a coefficient β, as

I_RF = β (V_coil^a − V_coil^b) .   (4)

Equations (2) and (4) imply that by measuring V_coil^a and V_coil^b, V_RF and I_RF can be assessed, provided that the calibration factors α and β are known.

Simulation Demonstration

In this section, we demonstrate the principle introduced in the previous section via 3D electromagnetic wave simulation with CST Microwave Suite [53]. Figure 2a-c show schematic diagrams of three simulation cases: (i) capacitive and inductive coupling (with no shields), (ii) capacitive coupling only (with an inductive coupling shield), and (iii) inductive coupling only (with a capacitive coupling shield). For these three cases, the common components are the FTC, the coaxial cables, and the rod, as shown in Figure 2d,g. This apparatus is covered by a rectangular case that is electrically grounded (not depicted in the figure for clarity). The dimensions are listed in Table 1. The coaxial cables play the role of input and output ports for voltage and current waves. Incident waves from the input port are carried via the rod and induce V_coil on the FTC. In this simulation, a voltage monitor function, which integrates the electric field along a given line, is used to calculate the voltage difference. Here, the voltage monitors V_1 and V_2 shown in Figure 2g mean, respectively, the voltage difference between the ends of the FTC, that is V_inductive, and between the center of the FTC and the rectangular case, that is V_capacitive. A brief explanation of the roles of the inductive coupling shield (ICS) and the capacitive coupling shield (CCS) is as follows.
As shown in Figure 2e,h, since the ICS is connected to the coaxial cable shields, which are electrically grounded, a closed current loop forms from the rod to the ICS. Based on Ampere's law, no net current source exists outside the ICS, since the currents in the rod and the shield have the same magnitude but opposite directions. As a result, no magnetic field can exist outside the ICS, meaning that inductive coupling is blocked. Capacitive coupling in this case exists between the rod and the FTC through the holes in the ICS, as shown in Figure 2e,h. As for the CCS shown in Figure 2f,i, this shield is connected to only one of the coaxial cable shields. In this configuration, no closed current loop can form, meaning that capacitive coupling is blocked while inductive coupling is not. The simulation results are summarized as follows. Figure 3a-f show the magnetic field vectors and the magnitude of the electric field on the cross-sectional plane, respectively, at the phases where their values are maximum. Since the magnetic and electric fields form with rotational and divergent directions, respectively, different plot types (vector and contour) are used for clarity. As for simulation case (i), involving both capacitive and inductive coupling, a rotating magnetic field produced by the RF current in the rod forms inside the FTC, as shown in Figure 3a, demonstrating that the inductive coupling is effective. Furthermore, an electric field strongly forms between the rod and the inner side of the FTC, as shown in Figure 3d, demonstrating that the capacitive coupling is also effective. Since both couplings are effective, the voltage monitors V_1 (= V_inductive) and V_2 (= V_capacitive) show a sinusoidal waveform (Figure 3g). For case (ii), with only capacitive coupling, no magnetic fields are created inside the FTC, since the currents in the rod and in the ICS are opposite (Figure 3b), as explained in the previous paragraph.
As shown in Figure 3e, small electric fields escape through the holes (see the green area), which renders capacitive coupling effective despite its small magnitude. Furthermore, it is noted that V_1 is extremely small but V_2 shows a sinusoidal waveform (Figure 3h), meaning that only capacitive coupling is present. Combining these results, we note that V_2 can be an indicator of capacitive coupling, that is, V_capacitive. As for case (iii), with only inductive coupling, Figure 3c shows that a magnetic field is well produced inside the FTC, similar to Figure 3a, while Figure 3f shows that no electric field forms between the rod and the inner side of the FTC (as electric fields are blocked inside the CCS). This implies that inductive coupling is effective but capacitive coupling is blocked by the CCS. Notably, V_1 shows a sinusoidal waveform and is much larger than V_2, as shown in Figure 3i. Hence, V_1 can be an indicator of V_inductive. [Caption of Fig. 3: magnetic field vectors (top row), electric field magnitude |E| = sqrt(E_x² + E_y² + E_z²) (middle row), and voltage waveforms of V_1 and V_2 (bottom row) for (a,d,g) capacitive and inductive coupling, (b,e,h) capacitive coupling only, and (c,f,i) inductive coupling only. In the figure, V_1 means the voltage difference between the ends of the floating toroidal coil, and V_2 is the voltage difference between the center of the coil and the grounded case.]

Design Optimization through Simulation

We demonstrated the workings of the FTC in the previous section via simulation. Before fabrication of the proposed sensor for a practical demonstration, it is highly useful to find the optimum conditions to achieve the highest sensitivity, also through computer simulation rather than practical trials, to minimize development costs. For this, the best method might be to examine all simulation cases for optimization, but this is not recommended due to the simulation cost. Instead, the following procedure is believed to be reasonable [52].
Assuming there are three parameters a, b, and c for optimization, the first step is to sweep parameter a while fixing the other parameters at arbitrary values, to find the optimum condition of a. The second step sweeps parameter b with the optimized a and finds the optimum condition of b. The next step sweeps parameter c with the optimized a and b and finds the optimum condition of c. This process represents one sweeping cycle. By performing several cycles, provided that the optimized conditions of a, b, and c are the same as those of the prior sweeping cycle, the final values are the optimum ones. Figure 4a shows the simulation configuration of the proposed VI probe and each component: the FTC, U-cut printed circuit board (PCB), signal output lines, rod, dielectric holder, case, and coaxial cables, as well as the parameters for optimization: the number of turns, coil distance, and coil length. The dimensions are listed in Table 2. Here, the signal output lines are connected to the two ends and the center of the FTC. The three lines terminate at the end of the U-cut PCB. Three voltage monitors calculate the voltage difference between the case (grounded) and each end of the signal output lines. Based on Equation (1), the center voltage monitor (V_CTR) represents V_capacitive, and based on Equation (3), the difference between the end voltage monitors (V_end) is V_inductive. We introduce the center signal line for an exact measurement of V_capacitive. Hence, in this optimization procedure, the optimum condition is defined in terms of the highest signal amplitudes of V_CTR and V_end for the fabrication of a sensitive VI probe. If their maximum conditions differ, the optimum condition is selected in an alternative way: first, by analyzing the tendency of V_CTR and V_end with the optimization parameters, and then by finding the condition where either V_CTR or V_end is highest.
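The sweeping-cycle procedure described above is essentially a coordinate-descent search over discrete candidate values. A minimal sketch, with a hypothetical score() function standing in for the simulated signal amplitude (the real objective comes from the 3D solver):

```python
# Sketch: the parameter-sweeping cycle as coordinate descent over discrete
# candidate values. score() is a hypothetical stand-in for the simulated
# signal amplitude, chosen only so the example has a single clear maximum.

def score(turns, distance, length):
    # Hypothetical smooth objective peaking at (70, 1.0, 5.0), for illustration.
    return -((turns - 70) ** 2 + 100 * (distance - 1.0) ** 2
             + 4 * (length - 5.0) ** 2)

candidates = {
    "turns": [40, 50, 60, 70],
    "distance": [0.8, 0.9, 1.0, 1.1],   # mm
    "length": [4.0, 5.0, 6.0, 7.0],     # mm
}
current = {"turns": 40, "distance": 0.8, "length": 4.0}  # arbitrary start

while True:
    previous = dict(current)
    for name, values in candidates.items():  # one sweeping cycle
        current[name] = max(values, key=lambda v: score(**{**current, name: v}))
    if current == previous:  # unchanged over a full cycle -> optimum reached
        break

print(current)  # -> {'turns': 70, 'distance': 1.0, 'length': 5.0}
```

The termination test mirrors the paper's criterion: the search stops once a full sweeping cycle leaves every parameter unchanged.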
Figure 4b shows the amplitudes of the V_CTR and V_end waveforms from 40 to 70 turns of the FTC with a coil distance of 1.0 mm and a coil length of 5.0 mm, which are arbitrarily selected. As their maxima occur under different conditions, the optimum condition is selected in the alternative way described above. As the number of turns increases, V_CTR monotonically increases since the capacitive coupling area enlarges. On the other hand, V_end saturates at 60 turns because the effective inductive coupling area inside the FTC becomes saturated. At 70 turns, the signal lines connected to the FTC ends are close to each other, as shown in Figure 4a, while above 70 turns they would overlap. Accordingly, the effective number of turns is saturated, and as a result, the optimum condition is 70 turns. Figure 4c shows the optimization result for the coil distance at the optimized number of turns (70) and a coil length of 5.0 mm. Again, as their maxima occur under different conditions, the optimum condition is selected in the alternative way. As the coil distance increases, V_CTR gradually increases because the capacitive coupling area is slightly enlarged. Conversely, V_end decreases, except at a coil distance of 1.0 mm, which results from the decrease in the number of turns per unit length. The opposite trends of V_CTR and V_end imply that the optimum condition lies between 0.9 and 1.0 mm. Hence, we choose 1.0 mm as the optimum coil distance since the associated V_end is higher, although the spike of V_end at the 1.0 mm coil distance is not yet well understood. In the final procedure of one cycle, with the two optimum conditions fixed (70 turns and 1.0 mm coil distance), Figure 4d shows that as the coil length increases, V_CTR decreases while V_end abruptly decreases and then gradually rises. In this case, their maxima occur under the same condition, so the optimum condition is simply the one with the highest values.
Since the outer edges of the FTC move farther away from the rod with increasing coil length, the effective capacitive coupling area decreases, which results in the decrease in V_CTR. The abrupt drop of V_end can be explained by the decrease in the number of turns per unit length as the outer arc length increases. The increase in coil length also results in an enlarged area inside the FTC, leading to an enhancement of inductive coupling, which causes the increase in V_end. Based on this analysis, while reducing the coil length may seem beneficial, doing so would lead to an overlap of the signal lines at the FTC ends. Hence, the optimum coil length is 5.0 mm. It is noted that the initial conditions of 1.0 mm coil distance and 5.0 mm coil length in the initial optimization procedure (sweeping the number of turns) are the same as the results of the final optimization procedure. Accordingly, the optimization process is terminated after a single cycle, and the final conditions are 70 turns, 1.0 mm coil distance, and 5.0 mm coil length. More detailed specifications are listed in Table 3.

Fabrication

The fabricated PCB, including the FTC, signal lines, and large ground pads, is shown in Figure 5. In the device, we removed the center signal line to minimize the number of signal ports; in fact, V_capacitive can be estimated by measuring the voltages at the FTC ends based on Equation (2). It is important for the VI probe to have high sensitivity, so to minimize RF noise effects, a large grounded pad is attached near the FTC and signal lines. Furthermore, parallel capacitors are installed as a high-frequency pass filter, and the signal lines are fabricated as microstrip lines with a characteristic impedance of 50 Ω. Each end of the signal lines is connected to an SMA connector that acts as a signal port. Figure 6 shows the components of the fabricated VI probe: N-type connectors, mounts, cases (top and bottom), rod, dielectric holder, and printed circuit board.
The N-type connectors coupled with the rod serve as the input and output ports of the fabricated VI probe. The assembly procedure is described in Figure 7. As shown in Figures 6 and 7, the fabricated VI probe is both easy to assemble and robust.

Calibration

The experimental setup used to identify the coefficients α and β from Equations (2) and (4) is shown in Figure 8. Details of this setup are also described in [26]. For high-power calibration, a cylindrical vacuum chamber with a turbomolecular pump (D-35614 Asslar, Pfeiffer Vacuum, Inc., Asslar, Germany) and a rotary pump (GHP-800K, KODIVAC Ltd., Gyeongsan-si, Korea) is employed as the dummy load in this calibration system. The pressure of the vacuum chamber, measured by a vacuum gauge (Baratron, MKS Instruments Inc., Andover, MA, USA), is maintained below 1 mTorr to suppress vacuum discharge, which would cause impedance variation during the calibration procedure; here, the chamber pressure is lower than the minimum measurable range of the vacuum gauge. A cylindrical electrode with a diameter of 150 mm, connected to an RF matcher (PathFinder, Plasmart Inc., Daejeon, Korea), is inserted into the vacuum chamber. To minimize impedance variation from thermal effects, coolant flows through the electrode. The fabricated VI probe is installed on the input port of the RF matcher with an N-type tee adaptor. The two signal ports of the fabricated VI probe are connected to channels 1 and 2 of an oscilloscope (TDS3054B, Tektronix Inc., Beaverton, OR, USA) through coaxial cables with BNC-SMA adaptors. A high-voltage probe (P5100, Tektronix Inc., Beaverton, OR, USA) along with the oscilloscope measures the voltage at the open (left) port of the tee adaptor. The calibration procedure is as follows.
First, we connect a vector network analyzer (E5071B, Agilent Inc., Santa Clara, CA, USA) to the input port of the fabricated VI probe with a coaxial cable whose end is calibrated with a kit (SAV20201B, Saluki Technology Inc., Taipei, Taiwan), as shown in Figure 8. Then, the RF matcher is manually adjusted to match the input impedance (Z_input) to 50 Ω while the vector network analyzer measures the input impedance. Second, once the impedance matching is complete, the vector network analyzer is replaced with an RF generator (YSR-06MF, Yongshin RF Inc., Hanam-si, Korea). While 13.56 MHz power from 50 W to 300 W is applied to the electrode, the reference voltage (V_RF) is measured by the high-voltage probe and the reference current (I_RF) is calculated as I_RF = V_RF / Z_input. Each measurement is carried out 20 times. Figure 9 shows the root-mean-square (RMS) values of the voltage and current signals from the fabricated VI probe, V_voltage,rms and V_current,rms, over the RMS reference voltage and current, V_RF,rms and I_RF,rms, respectively. Here, V_voltage,rms is calculated as the RMS value of (V_ch1 + V_ch2)/2, where V_ch1 and V_ch2 are the voltage waveforms recorded from channels 1 and 2 of the oscilloscope, respectively. Similarly, V_current,rms is the RMS value of V_ch1 − V_ch2.

Figure 9. Calibration results of the (a) voltage and (b) current along increasing RF input voltage and current. To avoid impedance variation by plasma formation during the calibration procedure, the pressure of the vacuum chamber is maintained below 1 mTorr (lower than the minimum measurable range of the vacuum gauge).

Since RF power is dissipated as heat by each component, such as the electrode, RF matcher, etc., the impedance changes, and this affects the accuracy of the calibration. To assess the impedance variation due to thermal effects during the calibration procedure, Z_input was measured again after the procedure.
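The channel-combination step can be sketched as follows. The waveforms below are synthetic stand-ins for the oscilloscope traces, built from a common-mode (capacitive) part and a differential (inductive) part, so the sums and differences recover each contribution:

```python
import math

def rms(samples):
    return math.sqrt(sum(s * s for s in samples) / len(samples))

def probe_signals(v_ch1, v_ch2):
    """Voltage- and current-channel RMS values from the two oscilloscope channels,
    following the combinations described in the text."""
    v_voltage = rms([(a + b) / 2 for a, b in zip(v_ch1, v_ch2)])   # (Vch1 + Vch2)/2
    v_current = rms([a - b for a, b in zip(v_ch1, v_ch2)])         # Vch1 - Vch2
    return v_voltage, v_current

# Synthetic single-period sinusoids: a 2.0 V common-mode part plus a 0.5 V
# differential part (amplitudes are arbitrary illustration values).
n = 1000
t = [i / n for i in range(n)]
common = [2.0 * math.sin(2 * math.pi * x) for x in t]
diff = [0.5 * math.sin(2 * math.pi * x) for x in t]
v_ch1 = [c + d / 2 for c, d in zip(common, diff)]
v_ch2 = [c - d / 2 for c, d in zip(common, diff)]

v_voltage, v_current = probe_signals(v_ch1, v_ch2)
```

The sum recovers the common-mode amplitude (RMS 2.0/√2) and the difference recovers the differential amplitude (RMS 0.5/√2), which is exactly why the two linear combinations separate the voltage-like and current-like signals.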
The impedance variation is taken into account when calculating I_RF,rms, and the resulting min-max range is represented in Figure 9b as error bars on the x-axis.

Comparison with a Commercial VI Probe

For an evaluation of the fabricated VI probe via comparison with a commercial VI probe, the experimental setup is slightly changed, as shown in Figure 10. A commercial VI probe (Octive poly, Impedans Ltd., Dublin, Ireland) is installed between the RF generator and the fabricated VI probe for the comparison. A mass flow controller (MFC, TN280, SMTEK Co., Ltd., Seongnam-si, Korea) maintains the flow rate of argon gas at 100 sccm into the vacuum chamber to keep the chamber pressure at 20 mTorr. The RF generator applies power to the electrode and argon plasma is generated. Since the RF matcher maintains the source impedance at 50 Ω while the plasma is sustained, the relationships of V_RF and I_RF to the RF power (P_RF) are P_RF = V_RF^2 / 50 and P_RF = 50 I_RF^2, respectively. Figure 11a plots the square of the RMS voltage measured by the fabricated VI probe, the commercial VI probe, and the high-voltage probe with the oscilloscope over input RF power. As the input RF power increases, all probes show a linear increase. Among them, the fabricated VI probe shows a higher R^2 of 0.9967 for linear fitting than that of the commercial probe. As shown in Figure 11b, the squares of the RMS currents measured by the fabricated and commercial VI probes also show a linear increase. The fabricated VI probe again shows a higher R^2 of 0.9938 for the current compared to the commercial probe. In summary, the fabricated VI probe demonstrates good linearity for both voltage and current, at slightly higher levels than the commercial VI probe.

Figure 10. Experimental setup for a comparison of the fabricated VI probe with a commercial VI probe. A commercial VI probe is installed between the RF generator and the fabricated VI probe for the comparison.
The RF generator applies power to the electrode and argon plasma is generated.

Figure 11. Square of the root-mean-square (RMS) (a) voltage and (b) current measured by the three probes over RF input power at an argon gas injection rate of 100 sccm and a pressure of 20 mTorr, with linearity factors (R-squared values (R^2)).

Here, the squares of the RMS currents from the high-voltage measurement with the oscilloscope are excluded in Figure 11b since they require the impedance information during plasma discharge. While the RF power is applied, the impedance cannot be measured with the vector network analyzer since the internal impedance of the VNA is 50 Ω and the applied voltage is beyond the measurement limit of the vector network analyzer. It should be noted that the level of V_voltage,rms is much lower than that of V_current,rms based on Figure 9. Traditional VI probes show the opposite characteristic, where the capacitive signal is much larger than the inductive signal, as in [52]. This results from the small area of capacitive coupling; traditional voltage sensors use a large-area electrode, whereas the FTC consists of wire-type electrodes and naturally has a much smaller coupling area. Further development of the proposed VI probe is therefore important to enhance the capacitive coupling, such as by using other dielectric holders with higher dielectric constants, increasing (decreasing) the radius of the rod (FTC), etc. The evaluation results for RMS voltage and current do not imply that the performance of the proposed probe is better than that of the commercial probe. Ten data acquisitions for each RF power condition in the evaluation process are not enough to exactly compare the proposed VI probe with the commercial probe. Nevertheless, this evaluation result demonstrates the successful operation of the prototype.
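The linearity check above reduces to an ordinary least-squares line fit and its coefficient of determination. A minimal sketch with hypothetical data; the `(1 + e)` factors add small deviations from the ideal P = V²/50 relation to mimic measurement noise:

```python
# Ordinary least-squares fit y = a*x + b and coefficient of determination (R^2),
# the linearity metric reported for Figure 11. The data here are hypothetical:
# V_rms^2 should scale linearly with RF power, since P = V^2 / 50 into a matched
# 50-ohm load.

def r_squared(x, y):
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxx = sum((xi - mx) ** 2 for xi in x)
    sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    a = sxy / sxx                       # slope
    b = my - a * mx                     # intercept
    ss_res = sum((yi - (a * xi + b)) ** 2 for xi, yi in zip(x, y))
    ss_tot = sum((yi - my) ** 2 for yi in y)
    return 1 - ss_res / ss_tot

p_rf = [50, 100, 150, 200, 250, 300]    # input RF power, W
# Ideal V^2 = 50 * P, perturbed by small relative deviations.
deviations = [0.01, -0.02, 0.0, 0.02, -0.01, 0.01]
v_sq = [50 * p * (1 + e) for p, e in zip(p_rf, deviations)]
r2 = r_squared(p_rf, v_sq)
```

With percent-level deviations the fit still yields R² very close to 1, which is the regime both probes operate in (0.99+).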
Furthermore, the proposed probe is not fully optimized based on various practical tests; the simulation serves to bring the probe design to near-optimal conditions. Several practical-test-based optimizations remain. Practical optimization to enhance its performance and an exact comparison with commercial probes will be reported in future work.

Conclusions

In this paper, we proposed a VI sensor based on a floating toroidal coil. The operation principle of the FTC was demonstrated and its optimum design was established through 3D electromagnetic wave simulation. Here, the optimization parameters of the FTC on a printed circuit board are the number of turns, the coil distance, and the coil length. The resultant optimum conditions are 70 turns, a coil distance of 1.0 mm, and a coil length of 5.0 mm. Based on the optimum conditions, the proposed VI probe with the FTC was fabricated and calibrated against the high-voltage probe measurement for voltage and the vector network analyzer measurement for current. During the calibration procedure, impedance changes from plasma formation and from thermal expansion of the electrode are suppressed by maintaining the pressure of the vacuum chamber below 1 mTorr and by flowing coolant through the electrode, respectively. The probe was then evaluated by comparison with a commercial VI probe. The results demonstrated that the FTC-based probe achieved slightly higher linearity than the commercial one, with an R^2 of 0.9967 for RMS voltage and 0.9938 for RMS current.

Data Availability Statement: The data presented in this study are available on request from the corresponding author.

Conflicts of Interest: The authors declare no conflict of interest.
A cross-sectional analysis of the association between physical activity, depression, and all-cause mortality in Americans over 50 years old

Depression is estimated to be the second leading cause of disability in the United States and is associated with a 52% increased risk of death. Lifestyle components may have an important role in depression pathogenesis. The aims of this study were to analyze the association of meeting the physical activity (PA) recommendation guidelines and depression, and to analyze the all-cause mortality risk of the joint association of PA and depression. This cross-sectional study included 7201 participants from the 2007–2014 National Health and Nutrition Examination Survey aged ≥ 50 years and linked to National Death Index records through December 31, 2015. Depression was defined as a score ≥ 10 on the Patient Health Questionnaire (PHQ-9). PA was self-reported, and total PA was used to classify participants as more active (≥ 600 MET-min/week) or less active (< 600 MET-min/week). The odds ratios for depression were examined according to being more active or less active. The hazard ratios (HR) for the association of PA level and depression status with all-cause mortality were examined. Being more active was associated with reduced odds for depression. Compared with less-active participants with depression, those who were more active and had depression had an HR of 0.45 (95% CI 0.22, 0.91, p = 0.026) for all-cause mortality. Being more active is associated with lower odds for depression and seems to be a protective factor against the increased all-cause mortality risk due to depression.

consequences, such as suicide 13. Biological theories about the antidepressant mechanisms of PA are mainly based on the improvement of neuroplasticity and on the reduction of inflammation and oxidative stress, while psychosocial theories are based on the improvement of self-esteem, social support, and self-efficacy 14.
Although the benefits of PA against depression are well documented 8,10,11, few studies have analyzed whether compliance with the PA recommendations 9 is enough to obtain a preventive effect against depression 10. Furthermore, those studies are mainly focused on young or middle-aged women, health care workers, or college students 10, and only two studies are focused on older people 15,16. In contrast, the protective effect of complying with the PA recommendations against all-cause mortality in the general population is widely known 9,17. Evidence also suggests that depression is associated, by itself, with a higher mortality risk, reaching a 52% increased risk of death 18. Despite the evidence of these associations with mortality, to our knowledge, no study has analyzed the all-cause mortality risk of the joint association of PA and depression in older adults. Therefore, the purpose of this study was to analyze the association of meeting the PA recommendations and depression, and to analyze the all-cause mortality risk of the joint association of PA and depression in non-institutionalized, older American adults.

Methods

Study design and population. The National Health and Nutrition Examination Survey (NHANES), conducted by the National Center for Health Statistics (NCHS), is an annual national cross-sectional survey of a representative sample of the non-institutionalized United States population. The survey uses a stratified, multistage sample design to randomly select approximately 7000 residents across the country each year. Participation in the survey is confidential and voluntary. Public-use linked mortality files from the National Death Index (NDI) are available for continuous NHANES 1999–2014, providing mortality data from the date of survey participation through December 31, 2015. The present study used data from 4 cross-sectional NHANES waves conducted from 2007 to 2014 and their linked mortality files.
Details about the linkage of NHANES data with NDI records have been published elsewhere 19. For this analysis, the sample was restricted to participants ≥ 50 years old who were followed up for mortality outcomes ≥ 12 months after enrollment in the study to minimize bias from reverse causation (n = 10,908). Participants with missing data on PA (n = 2523), depression (n = 811), and other covariables (n = 373) were excluded, so the final sample included 7201 participants. All participants provided written informed consent, and all methods were carried out in accordance with relevant guidelines and regulations. The Ethics Review Board of the NCHS approved measurement procedures, data collection, and posting of the data online for public use. Definition and assessment of depression. Depression was assessed by means of the Patient Health Questionnaire-9 (PHQ-9), a widely used self-report depression screener that consists of 9 items to assess depressive symptoms over the last 2 weeks 20. The PHQ-9 score can range from 0 to 27, since each of the 9 items can be scored from 0 (not at all) to 3 (nearly every day) 20. Scores ≥ 10 represent clinically significant depressive symptoms 21, so for this study, depression has been defined as a score ≥ 10 on the PHQ-9. This is a common cutpoint that has been used in previous studies 22 and maximizes combined sensitivity and specificity 23. Assessment of physical activity. PA was assessed by interview using the Global Physical Activity Questionnaire (GPAQ) created by the World Health Organization (WHO) 24. This questionnaire analyzes the usual PA performed in a typical week in 3 different domains (PA at work/domestic, PA in transport/travel, and PA in leisure time), as long as it has been carried out in continuous bouts of at least 10 min. The questionnaire also considers the intensity at which it has been performed (moderate or vigorous).
The total metabolic equivalent minutes per week (MET-min/week) were calculated following the GPAQ protocol 25. Based on PA recommendation guidelines by the WHO 9, the subjects were classified into two different groups. Those who performed at least 150 min of moderate to vigorous PA (≥ 600 MET-min/week) and met the PA recommendations for adults composed the more-active group, and those who performed less than 150 min of moderate to vigorous PA (< 600 MET-min/week), and thus did not meet the recommendations, composed the less-active group. Mortality. Survival time was counted from the date of survey participation to the date of death or the end of the study follow-up period (December 31, 2015), whichever came first. In this study, all-cause mortality was used as the main outcome for mortality, classifying participants as alive or deceased. Assessment of additional covariates. Demographic, lifestyle, anthropometric, and health data were obtained and used to adjust the results of regression models. The selection of these specific variables was based on their possible confounding role in the associations analyzed 6,26. Lifestyle risk factors included alcohol consumption in the last 12 months, classified as 0 drinks/day, < 2 drinks/day, and ≥ 2 drinks/day, and smoking status, defined as never smoked, former smoker, and smoker. Anthropometric data included body mass index (BMI), calculated as weight in kilograms divided by height in meters squared and classified as < 25.0 kg/m2, 25.0–29.9 kg/m2, and ≥ 30.0 kg/m2. Self-reported medical diagnosis of hypertension, dyslipidemia, or type 1 and 2 diabetes, or self-reported use of antihypertensive medication, lipid-lowering drugs, or hypoglycemic medication, were used to classify participants as having arterial hypertension, dyslipidemia, and diabetes, respectively. Statistical analysis.
According to the NHANES analytical guidelines, all data were downloaded, merged, and analyzed, incorporating the appropriate combined weights, primary sampling units, and strata provided by NHANES 27. Moreover, public-use linked mortality files from NDI were merged with NHANES data following the appropriate guidelines 19. Categorical variables were expressed as frequency (%), and continuous variables were presented as mean and standard error (SE). Descriptive analyses were carried out for the overall sample and divided by PA groups. Logistic regression models according to the PA group of the participants were conducted to examine the adjusted odds ratios (OR) for depression. The first model was unadjusted, and the second was only age-adjusted. Model A was adjusted by age, sex, race/ethnicity, annual household income, and educational level. Model B was additionally adjusted by smoking status, alcohol consumption, BMI, arterial hypertension, dyslipidemia, and diabetes. Cox proportional hazards regression models were performed to examine hazard ratios (HRs) and 95% CIs for the association between PA level and depression status with all-cause mortality. Furthermore, adjusted survival curves were plotted. When the joint association of PA level and depression status with all-cause mortality was analyzed, the less-active (< 600 MET-min/week) and with-depression (PHQ-9 ≥ 10) subgroup was considered the reference group when hazard ratios for the three other subgroups were calculated. In this case, the model was adjusted for potential confounders, including age at baseline, sex, race/ethnicity, annual household income, educational level, smoking status, alcohol consumption, BMI, arterial hypertension, dyslipidemia, and diabetes. The proportional hazards assumption was not violated, as examined by log-log survival plots and correlations of follow-up time and Schoenfeld residuals from the adjusted Cox models 28. A two-sided p-value of 0.05 was considered statistically significant.
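The two classification rules defined in the Methods above (PHQ-9 cutpoint and MET-min/week threshold) can be sketched as follows. The function names and example values are illustrative, not part of the NHANES tooling; the MET weights of 4 (moderate) and 8 (vigorous) follow the GPAQ analysis protocol:

```python
# Illustrative implementation of the two classification rules used in this study:
# PHQ-9 total >= 10 -> depression; total PA >= 600 MET-min/week -> more active.

def phq9_score(items):
    """Sum of the nine PHQ-9 items, each scored 0 (not at all) to 3 (nearly every day)."""
    assert len(items) == 9 and all(0 <= i <= 3 for i in items)
    return sum(items)

def has_depression(items, cutpoint=10):
    return phq9_score(items) >= cutpoint

def total_met_minutes(moderate_min_week, vigorous_min_week):
    # GPAQ MET weights: 4 for moderate, 8 for vigorous activity.
    return 4 * moderate_min_week + 8 * vigorous_min_week

def activity_group(moderate_min_week, vigorous_min_week):
    met = total_met_minutes(moderate_min_week, vigorous_min_week)
    return "more active" if met >= 600 else "less active"

print(has_depression([1, 1, 2, 1, 0, 1, 2, 1, 1]))  # total 10 -> True
print(activity_group(150, 0))                        # 600 MET-min/week -> more active
```

Note how 150 min/week of moderate activity maps exactly onto the 600 MET-min/week threshold (150 × 4), which is why the two cutoffs are used interchangeably in the text.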
Statistical analysis was performed using SPSS statistical software (ver. 24.0, IBM Corp., Armonk, NY, USA) and R statistical software (ver. 4.0.4).

Results

The overall prevalence of depression in ≥ 50-year-old non-institutionalized Americans was 7.8%. The prevalence among those who met and did not meet the PA recommendations for adults was 5.3% and 11.0%, respectively (Table 1). The more-active group had a lower prevalence of obesity, hypertension, dyslipidemia, and diabetes, as well as a higher educational level and annual household income, than the less-active group (Table 1). The likelihood of having depression was lower for those participants in the more-active group compared to those in the less-active group. The weighted odds for having depression, after adjusting the results by age, sex, race/ethnicity, annual household income, educational level, alcohol consumption, smoking status, BMI, arterial hypertension, dyslipidemia, and diabetes, were 0.57 (95% CI 0.44, 0.72, p < 0.001) for the more-active group compared to the less-active group (Table 2). Additionally, Supplementary Table 1 includes tests of the weighted odds for having depression among three PA level subgroups: < 600 MET-min/week, 600–1200 MET-min/week, and > 1200 MET-min/week. In addition, if the total PA was divided according to the different domains analyzed (PA at work/domestic, PA in leisure time, and PA in transport/travel), only those who performed ≥ 600 MET-min/week of leisure-time PA had significantly lower odds for having depression compared to those who performed < 600 MET-min/week of leisure-time PA (OR 0.47, 95% CI 0.32, 0.67, p < 0.001) (Supplementary Table 2). During a median 54.0 months (interquartile range 12–108 months) of follow-up, 655 deaths occurred among 7201 individuals in the study. The percentage of deaths among those who met and did not meet the PA recommendations for adults was 4.1% and 9.4%, respectively (Table 1).
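As a minimal illustration of the unadjusted odds-ratio arithmetic behind results like these (the study itself uses weighted logistic regression with covariate adjustment, which this sketch does not reproduce), here is a 2×2-table odds ratio with a Wald 95% CI; the counts are hypothetical, chosen only to roughly match the reported prevalences of 5.3% and 11.0%:

```python
import math

# Unadjusted odds ratio from a 2x2 table (depression by activity group),
# with a Wald 95% CI on the log-odds scale. All counts below are hypothetical.

def odds_ratio(a, b, c, d):
    """a: exposed cases, b: exposed non-cases, c: unexposed cases, d: unexposed non-cases."""
    or_ = (a * d) / (b * c)
    se = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)   # SE of log(OR)
    lo = math.exp(math.log(or_) - 1.96 * se)
    hi = math.exp(math.log(or_) + 1.96 * se)
    return or_, (lo, hi)

# Hypothetical counts: 212/4000 depressed among more active (5.3%),
# 352/3201 depressed among less active (11.0%).
or_, ci = odds_ratio(212, 3788, 352, 2849)
```

With these made-up counts the unadjusted OR comes out below 1, matching the direction of the adjusted OR of 0.57 reported in the text; the exact value differs because the real estimate is survey-weighted and covariate-adjusted.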
Moreover, the percentage of deaths in participants with and without depression was 9.4% and 6.1%, respectively. When studying HRs for all-cause mortality, those with depression had a 1.55-fold increased HR of death (95% CI 1.18, 2.03, p = 0.002) compared to those without depression (Table 3 and Fig. 1a). Moreover, those who performed < 600 MET-min/week had a 1.73-fold increased HR of death (95% CI 1.45, 2.07, p < 0.001) compared to those who performed ≥ 600 MET-min/week (Table 4 and Fig. 1b). These HRs were adjusted by age, sex, race/ethnicity, annual household income, educational level, alcohol consumption, smoking status, BMI, arterial hypertension, dyslipidemia, and diabetes. When the joint association of depression and PA was analyzed in relation to the risk of all-cause mortality, those who were more active without depression had the lowest risk of death compared to those who were less active and with depression, HR 0.38 (95% CI 0.28, 0.52, p < 0.001). Those who were less active without depression, and those who were more active with depression, also had a lower risk of death compared to those who were less active and with depression, HR 0.63 (95% CI 0.46, 0.85, p = 0.003) and HR 0.45 (95% CI 0.22, 0.91, p = 0.026), respectively (Fig. 2). These HRs were adjusted by age, sex, race/ethnicity, annual household income, educational level, alcohol consumption, smoking status, BMI, arterial hypertension, dyslipidemia, and diabetes. Discussion This study provided evidence that performing at least 150 min/week of moderate to vigorous PA was associated with reduced odds for depression among an American population aged 50 and older. Furthermore, among those with depression, performing 150 min/week of moderate to vigorous PA was associated with a 55.1% reduced risk of all-cause mortality compared to those who performed less PA.
The beneficial effect of PA in depression prevention has been analyzed in depth through specific systematic reviews, concluding that PA may prevent depression 10,11,29. Nevertheless, only a few studies were focused on older people 15,16,[30][31][32][33], and in addition, only two of those studies, conducted in European and Asian populations, analyzed the relationship between depression and PA assessed as a dose (combined amount and intensity) 15, which leads to increasing the risk of depression 15. The findings of our study are in line with those of McDowell, reinforcing the idea that meeting PA guidelines can help prevent depression among persons older than 50 years. As we mentioned above, increasing evidence shows a beneficial effect of PA in depression prevention 10,11,29. However, the causality and direction of this association have been discussed in the literature, suggesting that PA may protect against depression, and/or depression may result in decreased PA. This could be a source of concern in ascertaining the role of PA in depression prevention. Nevertheless, a meta-analysis of prospective studies and another recent study using bidirectional Mendelian randomization provide evidence to establish a causal relationship between PA and a reduced risk for depression 11,34. Previous studies have analyzed the association between meeting the PA recommendations in adults and all-cause mortality, establishing that those meeting the recommendations have a 40% decreased risk of death 9,17.

Table 2. Odds ratio (95% CI) for depression according to physical activity levels. Data are representative of the non-institutionalized American population. Model A is adjusted by age, sex, race/ethnicity, annual household income, and educational level. Model B is additionally adjusted by alcohol consumption, smoking status, BMI, arterial hypertension, dyslipidemia, and diabetes. a Significant differences between less-active and more-active groups.
Other previous studies have also ascertained a positive association between depression and all-cause mortality 18. The results of our study are consistent with those previous studies, establishing that 150 min of moderate to vigorous PA can be a protective factor against all-cause mortality and that having depression is associated with a higher all-cause mortality risk. However, the present study also analyzes the combined effects of meeting the PA recommendations and depression status on the risk of all-cause mortality among persons older than 50 years. As could be expected, those without depression in the more-active group had the lowest HR for all-cause mortality compared with those with depression in the less-active group. Interestingly, those with depression in the more-active group had a lower HR than those without depression in the less-active group, compared with the reference group (with depression and less active). This fact reveals that PA could counteract the higher mortality risk due to depression. Analyzing the influence of PA as a dose (combined amount and intensity) in relation to depression and all-cause mortality, and not only as frequency, as in other studies 22,31,32, is essential. However, it could be interesting to analyze whether the type of PA influences the relationship of PA's preventive role against depression and whether the combined effect of PA and depression status on all-cause mortality is PA-type dependent. Other studies have elucidated the relationship between PA and mortality, finding that it is dependent on the type of PA 17. Therefore, differentiating at least between endurance and resistance activities in PA quantification may determine if any type of PA is more protective than others against depression. One previous study has shown that regular flexibility exercise, and no other type of exercise, such as muscular strength or walking, was independently related to depression prevention 22. However, as mentioned above, in that study the assessment of PA only as frequency may not show the real role of each PA type in depression prevention 22. Future studies could shed some light on this issue.

Table 4. Hazard ratio (95% CI) for all-cause mortality according to physical activity level. Data are representative of the non-institutionalized American population. Model A is adjusted by age, sex, race/ethnicity, annual household income, and educational level. Model B is additionally adjusted by alcohol consumption, smoking status, BMI, arterial hypertension, dyslipidemia, and diabetes. a Significant differences between less-active and more-active groups.

To the best of our knowledge, this is the first study that analyzed the joint association between PA and depression with all-cause mortality in a representative sample of the American population aged 50 and older. However, several limitations should be acknowledged in our study. First, regarding the association of PA and depression, the cross-sectional analysis does not allow us to establish a causal, temporal link. Second, although data collection about PA has been carried out by trained interviewers, the use of self-reported information could be subject to bias 35. Third, depression status was only assessed once (at baseline), and it was not possible to consider the course of depression. Fourth, to increase the statistical power, only two subgroups of total PA were used to test, in combination with depression status, its joint association with mortality. Fifth, only non-institutionalized adults were included in this analysis, so the results can only be applied to this population.
Conclusions

In summary, performing 150 min/week of moderate to vigorous PA is associated with reduced odds for depression and seems to be a preventive factor against the increased all-cause mortality risk due to depression. From a population health perspective, promoting moderate to vigorous PA for at least 150 min/week among Americans aged over 50 years with depression may be an important health-promotion strategy that can reduce the increased all-cause mortality risk associated with depression.
Real-time metabolic profiling of oesophageal tumours reveals an altered metabolic phenotype to different oxygen tensions and to treatment with Pyrazinib

Oesophageal cancer is the 6th most common cause of cancer-related death worldwide. The current standard of care for oesophageal adenocarcinoma (OAC) focuses on neoadjuvant therapy with chemoradiation or chemotherapy; however, the 5-year survival rates remain at < 20%. To improve treatment outcomes it is critical to further investigate OAC tumour biology, metabolic phenotype and metabolic adaptation to different oxygen tensions. In this study, using human ex-vivo explants, we demonstrated by real-time metabolic profiling that OAC tumour biopsies have a significantly higher oxygen consumption rate (OCR), a measure of oxidative phosphorylation, compared to extracellular acidification rate (ECAR), a measure of glycolysis (p = 0.0004). Previously, we identified a small molecule compound, pyrazinib, which enhanced radiosensitivity in OAC. Pyrazinib significantly inhibited OCR in OAC treatment-naïve biopsies (p = 0.0139). Furthermore, OAC biopsies can significantly adapt their metabolic rate in real-time to their environment. Under hypoxic conditions pyrazinib produced a significant reduction in both OCR (p = 0.0313) and ECAR in OAC treatment-naïve biopsies. The inflammatory secretome profile from OAC treatment-naïve biopsies is heterogeneous. OCR was positively correlated with three secreted factors in the tumour conditioned media: vascular endothelial growth factor A (VEGF-A), IL-1RA and thymic stromal lymphopoietin (TSLP). Pyrazinib significantly inhibited IL-1β secretion (p = 0.0377) and increased IL-3 (p = 0.0020) and IL-17B (p = 0.0181). Importantly, pyrazinib did not directly alter the expression of dendritic cell maturation markers or reduce T-cell viability or activation markers.
We present a new method for profiling the metabolic rate of tumour biopsies in real-time and demonstrate the novel anti-metabolic and anti-inflammatory action of pyrazinib ex-vivo in OAC tumours, supporting previous findings in-vitro whereby pyrazinib significantly enhanced radiosensitivity in OAC.

To assess the relationship between real-time metabolic rate and clinical patient characteristics, OCR and ECAR were divided according to nodal status, tumour stage, stage of differentiation, body mass index, age at diagnosis and gender of the patients evaluated in this study. OCR and ECAR were shown to be independent of clinical patient characteristics, whereby there was no significant difference in the levels of OCR or ECAR when assessed according to patient characteristics (Supplemental Fig. 1, 2). All metabolic readouts were normalised to biopsy protein content following this assay; thus a limitation of this study was that biopsy fragments directly adjacent to the fragment used in the study had to be used to confirm the pathology of the tumour. In summary, OAC treatment-naïve biopsies have a significantly higher rate of OCR than ECAR, and real-time metabolic rate is not significantly associated with clinical patient characteristics, which indicates that oxidative phosphorylation is an important metabolic pathway active across all n = 17 OAC tumours evaluated in this study, regardless of clinical patient characteristics.

Pyrazinib (P3) significantly inhibited oxygen consumption rate in OAC treatment-naïve biopsies. Characteristics of the patient cohort evaluated in this study are outlined in Supplemental Table 1. Having demonstrated in real-time that OAC treatment-naïve biopsies are metabolically active, we sought to investigate the effect of our anti-metabolic agent, pyrazinib (P3), on real-time metabolic rates in OAC treatment-naïve biopsies. Pyrazinib (P3) treatment significantly inhibited OCR in OAC treatment-naïve biopsies (p = 0.0039) (Fig. 2A).
Pyrazinib (P3) induced a 35% reduction in OCR compared to the baseline OCR reading. Oligomycin, an ATP synthase inhibitor, was used as a positive control and resulted in a significant reduction in OCR (p = 0.0098) (Fig. 2A). No significant change in metabolic rate from baseline OCR was seen following treatment with the 0.1% dimethyl sulfoxide (DMSO) control (Fig. 2A). To determine if the reduction in OCR following treatment with pyrazinib (P3) was dependent on certain clinical patient characteristics, the percentage reduction in OCR was assessed according to the following clinical patient characteristics: nodal status, tumour stage, stage of differentiation and body mass index (Supplemental Fig. 3A,C,E,G). Importantly, the percentage reduction in OCR was independent of clinical patient characteristics, suggesting that pyrazinib (P3) could function across our patient cohort. Regarding ECAR, treatment with pyrazinib (P3) did not significantly alter ECAR in OAC treatment-naïve biopsies (Fig. 2B). The percentage change in ECAR following treatment with pyrazinib (P3) was independent of patient characteristics: nodal status, tumour stage, stage of differentiation and body mass index (Supplemental Fig. 3B,D,F,H). Therefore, pyrazinib (P3) produced a significant reduction of oxygen consumption rate in OAC treatment-naïve biopsies, and its function was independent of clinical patient characteristics.

Pyrazinib (P3) significantly inhibited real-time metabolic rate in OAC treatment-naïve biopsies cultured under hypoxia (0.5% O2). Clinical patient characteristics of the patient cohort used in this study are outlined in Supplemental Table 2. Hypoxic tumours are inherently resistant to treatment, thus we investigated both real-time metabolic rate and the action of pyrazinib (P3) under hypoxic conditions of 0.5% O2 in OAC treatment-naïve patient biopsies from 7 male patients.
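The comparisons in these hypoxia experiments are expressed through the OCR:ECAR ratio. A minimal sketch with purely illustrative numbers (not measured data); units follow the Seahorse convention of pmol O2/min for OCR and mpH/min for ECAR:

```python
def ocr_ecar_ratio(ocr: float, ecar: float) -> float:
    """Relative metabolic ratio: values well above 1 indicate that
    oxidative phosphorylation dominates; values near 1 indicate a
    shift toward glycolysis."""
    return ocr / ecar

# Hypothetical paired readings for one biopsy (illustrative only).
normoxia = ocr_ecar_ratio(ocr=120.0, ecar=30.0)    # 4.0
hypoxia_6h = ocr_ecar_ratio(ocr=45.0, ecar=40.0)   # 1.125

# The ratio collapsing toward 1 after 6 h at 0.5% O2 mirrors the
# metabolic adaptation described in the text.
```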
To determine if OAC biopsies adapt their metabolic rate to changes in oxygen levels, we evaluated real-time basal metabolic rate under normoxia, and real-time metabolic rate of the same biopsies again following culture in 0.5% O2 for 6 h. Basal metabolic rate demonstrates that OCR is significantly higher than ECAR under normoxic conditions (p = 0.0156) (Fig. 3A). In contrast to OAC biopsies cultured under normoxic conditions, following culture of OAC treatment-naïve biopsies under 0.5% O2, no significant differences were observed between OCR and ECAR (Fig. 3A). The ratio of OCR:ECAR was significantly higher in OAC treatment-naïve biopsies at baseline under normoxic conditions compared to the same biopsies cultured under hypoxia for 6 h (p = 0.0469) (Fig. 3B). Interestingly, the shift in metabolic

Figure 1. Oxygen consumption rate is significantly higher than extracellular acidification rate in OAC treatment-naïve biopsies. OCR, a measure of oxidative phosphorylation, and ECAR, a measure of glycolysis, were assessed in real-time in OAC treatment-naïve biopsies using the Seahorse Biosciences XFe24 Analyser. (A) Basal OCR and ECAR rates in OAC pre-treatment biopsies (n = 17). (B) OCR is significantly elevated in OAC treatment-naïve biopsies (n = 17), Wilcoxon signed rank test, ***p < 0.001. (C) Relative metabolic ratio OCR:ECAR compared to ECAR:OCR in OAC treatment-naïve biopsies (n = 17), paired t-test, **p < 0.01. Data expressed as + SEM.

OAC treatment-naïve biopsies display a heterogeneous inflammatory secretome. The clinical patient characteristics for the patient cohort used in this study are outlined in Supplemental Table 3. In addition to an altered metabolic phenotype, inflammation has been reported to play a significant role in the progression and treatment response of OAC tumours, whereby elevated levels of pro-inflammatory mediators such as LIF, C3a, C4a and IL-1β have been associated with poor treatment response 8,9,11.
To profile the inflammatory secretions of OAC treatment-naïve tumours, OAC biopsies were cultured for 24 h and the secreted levels of 54 proteins in the tumour conditioned media (TCM) were evaluated by multiplex ELISA. The secreted levels of the proteins detected are shown in Fig. 5E. Notably, there is a high level of variability in the secreted levels of the proteins detected in this screen, highlighting the high level of heterogeneity between OAC treatment-naïve patient tumours. Furthermore, no significant correlations were seen when secreted inflammatory factors were divided according to clinical patient characteristics including tumour stage, nodal status, body mass index, stage of differentiation and age at diagnosis (data not shown).

Real-time metabolic profiles were significantly correlated with inflammatory secretions in OAC tumour biopsies. To investigate the relationship between real-time metabolic rate (OCR and ECAR) and the inflammatory protein secretions in OAC treatment-naïve biopsies, we correlated basal metabolic rate with inflammatory secretions in 10 matched patients, as per patient characteristics outlined in Supplemental Table 1. OCR was significantly positively correlated with ECAR (r = 0.8505, p < 0.0001). In addition, OCR was significantly correlated with the secreted levels of 3 of 54 proteins in the TCM, including vascular endothelial growth factor A (VEGF-A) (r = 0.7091, p = 0.0268), interleukin-1 receptor antagonist (IL-1RA) (r = 0.7939, p = 0.0088) and TSLP (r = 0.6727, p = 0.0390), shown in Table 1.

Pyrazinib (P3) significantly altered the secretion of IL-1β, IL-3 and IL-17B from OAC treatment-naïve biopsies. Inflammation drives the development of OAC; however, not all types of inflammation are detrimental to the host, e.g. T helper 1 (TH1) profiles are associated with a good response to immunotherapeutic drugs 14, whereas myeloid cell abundance in tumours is associated with worse survival 15.
To investigate if our anti-metabolic pyrazine compound pyrazinib (P3) alters inflammatory secretions ex vivo, we cultured OAC treatment-naïve biopsies with 10 µM pyrazinib (P3) or 0.1% DMSO for 24 h and compared the inflammatory secretions from the OAC treatment-naïve biopsies from the same patient. Of the 54 factors screened for in the multiplex ELISA, pyrazinib (P3) treatment significantly altered the secretion of 3 proteins: IL-1β, IL-3 and IL-17B. Treatment with pyrazinib (P3) significantly reduced the secretion of IL-1β from OAC treatment-naïve tumour biopsies (p = 0.0377) (Fig. 6A). In contrast, pyrazinib (P3) significantly increased the secretion of IL-3 (p = 0.0020) and IL-17B (p = 0.0181) in ex vivo OAC treatment-naïve biopsies (Fig. 6B,C).

Pyrazinib (P3) does not directly alter the expression of maturation markers on CD11c+ dendritic cells. In addition to the anti-metabolic and anti-inflammatory activity of pyrazinib (P3), it was critical to investigate its effect on immune cells, including dendritic cells, which play a critical role in anti-tumoural immunity. Thus, we investigated whether pyrazinib (P3) altered the expression of dendritic cell maturation markers. Dendritic cells are professional antigen presenting cells which play a key role in orchestrating anti-tumour immune responses, via T cell polarisation and activation. In the development of a new compound, it is crucial to determine if such treatment will affect the function of important anti-tumour immune cells such as dendritic cells, as this could hinder clinical potential. Patient information is outlined in Supplemental Table 3.

early apoptotic (Fig. 8B), late apoptotic (Fig. 8C) and necrotic cells (Fig. 8D).
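The viability readout referenced here (live, early apoptotic, late apoptotic, necrotic) follows the standard Annexin V / propidium iodide (PI) quadrant convention; a minimal sketch (the function name is ours):

```python
def classify_cell(annexin_v_positive: bool, pi_positive: bool) -> str:
    """Standard Annexin V / PI quadrant logic used to read out the
    viability states reported in the Jurkat toxicity experiments."""
    if not annexin_v_positive and not pi_positive:
        return "live"
    if annexin_v_positive and not pi_positive:
        return "early apoptotic"   # Annexin V+ only
    if annexin_v_positive and pi_positive:
        return "late apoptotic"    # Annexin V+, PI+
    return "necrotic"              # PI+ only
```

Each gated cell in the flow cytometry data maps to exactly one of the four states summarised in the percentages below.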
The percentage of live cells treated with the varying concentrations of pyrazinib (P3) at 24 h and 48 h incubation periods did not differ, with a maintained range of 94-98% Jurkat cell viability. There was some variance in the proportion of cells that underwent early apoptosis (Annexin V+ only), but at very low levels (≤ 3%); therefore there was no significant difference in the percentage of cells that underwent early apoptosis. The percentage of cells that underwent necrosis (PI+ only) remained at very low levels (≤ 3%), with no significant difference observed between samples. Late apoptosis (Annexin V+, PI+) was detected in ≤ 2% of cells, and there was no significant difference between treatment conditions. In summary, pyrazinib (P3) is not toxic to Jurkat T cells at 0-10 μM concentrations after 24 h and 48 h treatment.

Discussion

This study highlights a novel method for measuring the real-time metabolic profiles of OAC treatment-naïve tumour biopsies which could be applied to multiple cancer types. Ex vivo, real-time metabolic profiling demonstrated that oxidative phosphorylation was significantly higher in OAC treatment-naïve biopsies compared to glycolysis. This supports previous findings by our group which reported the importance of oxidative phosphorylation in OAC and its previous association with radiation resistance 10. The metabolic rate of OAC biopsies was shown to be independent of clinical patient characteristics. Whilst Warburg initially found cancer cells to be reliant on aerobic glycolysis, numerous studies also support the functional role of oxidative phosphorylation in tumorigenesis 1. Real-time metabolic profiling of OAC biopsies supports previous findings at the in vitro level, which demonstrate that mitochondrial respiration is a predominant metabolic pathway used by OAC cancer cells 10.
Numerous studies have evaluated the mitochondrial function of tumour cells and reported that tumour cells predominantly have functional mitochondria which have retained the ability to carry out oxidative phosphorylation 16. Our novel small molecule compound pyrazinib (P3) significantly inhibited OCR in OAC treatment-naïve biopsies, and no significant upregulation of ECAR occurred following treatment with pyrazinib (P3). Importantly, the activity of pyrazinib (P3) was independent of clinical patient characteristics, indicating that pyrazinib (P3) can maintain its anti-metabolic activity irrespective of a patient's clinical characteristics such as tumour stage, nodal status, tumour differentiation or patient BMI. The significant effect of pyrazinib (P3) on OAC tumour oxidative phosphorylation rate is not only likely to have an anti-cancer effect, given the prominent utility of oxidative phosphorylation in OAC tumours; it may also enhance the radioresponse of these tissues, as seen previously in vitro in an isogenic model of OAC radio-resistance 7. Targeting oxidative phosphorylation has been reported in a number of studies as a novel mechanism to enhance radiosensitivity and reduce tumour growth 5,17. A study by Benej et al. demonstrated that targeting mitochondrial respiration with the alkaloid papaverine significantly inhibited mitochondrial respiration, enhanced tumour oxygenation and subsequently enhanced radiosensitivity in pulmonary adenocarcinoma cells 17. An interesting study carried out by Bol et al. also demonstrated that reprogramming of tumour mitochondria improves responses to radiation: a mitochondrially dysfunctional cell line, which was exclusively glycolytic, was found to be more radiosensitive than wild-type, oxidative phosphorylation-proficient cells 18. A novel in vivo model of oncogenic ablation-resistant pancreatic cancer cells, which were responsible for tumour relapse, reported that these cells depend on mitochondrial function for survival 5.
Targeting oxidative phosphorylation in this subpopulation of surviving pancreatic cells was shown to significantly decrease tumour spheroid growth 5. Furthermore, oxidative phosphorylation is significantly upregulated in breast cancers deficient in RB1, a protein lost in 20-30% of basal-like breast cancers 19,20. Tigecycline, a mitochondrial translational inhibitor, attenuated growth of RB1-deficient breast tumours in vivo 20. Taken together with the findings in this study, targeting oxidative phosphorylation in the neo-adjuvant setting could enhance radiosensitivity in OAC, but oxidative phosphorylation could also be targeted in the adjuvant setting to specifically target any remaining surviving cancer cells. Given the predominance of the oxidative phosphorylation 22. Interestingly, this study found the levels of both malic acid and citric acid were significantly lower in more advanced SCC tumours when compared to early-stage tumours, which may be associated with a downregulation of the tricarboxylic acid cycle in late-stage tumours 22. Furthermore, a metabolomics study which utilised urinary samples from SCC patients and healthy controls demonstrated that oesophageal SCC was associated with alterations in fatty acid β-oxidation and the metabolism of purines, amino acids and pyrimidines 23. These studies, amongst others, highlight the importance of metabolic alterations in oesophageal cancer compared to healthy controls and highlight the potential for biomarker development for disease diagnosis and progression 24,25. It would be of interest to employ a similar approach in OAC treatment-naïve tumour tissue across the various stages of disease progression to further elucidate the role of altered energy metabolism in OAC and compare the findings to real-time metabolic analysis.
Hypoxia promotes the transformation of tumour cell metabolism from oxidative metabolism to anaerobic glycolysis, which protects tumour cells, promotes tumour growth and drives the development of treatment resistance in tumour stem cells 26. We sought to investigate if OAC treatment-naïve biopsies adapt their metabolic profile to conditions of hypoxia and if pyrazinib (P3) could inhibit mitochondrial respiration under hypoxic conditions of 0.5% O2. Under normoxic conditions, OAC tumours had a significantly higher rate of OCR, but adapted their metabolic rate to cope with hypoxic conditions, such that there was no significant difference in OCR compared to ECAR. The OCR:ECAR ratio was significantly higher in OAC tumours under normoxia versus hypoxia, and the ECAR:OCR ratio was significantly higher in hypoxic versus normoxic biopsies, highlighting the metabolic adaptation of OAC tumours to their environment. Pyrazinib (P3) significantly inhibited both oxidative phosphorylation and glycolysis. The ability of pyrazinib (P3) to inhibit both oxidative phosphorylation and glycolysis is a critical finding, which shows that even in OAC tumours which can adapt their metabolic profiles to hypoxic conditions, pyrazinib (P3) can still inhibit both OCR and ECAR. In a study by Wang et al., in genetically modified macrophages overexpressing HIF-1α, the OCR:ECAR ratio was dramatically decreased compared to non-HIF-1α-overexpressing macrophages, demonstrating a shift to glycolytic metabolism compared to mitochondrial oxidation in HIF-1α-overexpressing macrophages 27. Oxygen is a potent radiosensitiser, and solid tumours with areas of hypoxia are the most aggressive and difficult tumours to treat 26. A number of strategies which have attempted to increase oxygen delivery to the tumour have failed in the clinic, largely due to the heterogeneous nature of tumour vasculature 28.
In a pancreatic xenograft, the selective HIF-1α inhibitor PX-478 was found to potentiate the effect of fractionated chemoradiation therapy 29. Targeting intra-tumoural oxygen consumption with compounds targeting oxidative phosphorylation may present a novel means to overcome tumour hypoxia and enhance anti-cancer activity in tumours where oxidative phosphorylation is upregulated, but also to improve treatment response rates in hypoxic, treatment-resistant tumours 17. Notably, elevation of oxygen by as little as 2% is sufficient to produce oxygen enhancement 16. Targeting oxidative phosphorylation in mammary tumours with papaverine was found to enhance hypoxic tumour oxygenation, sensitise tumours to radiation therapy and significantly reduce tumour growth 17. Taken together, our current findings and our previous in vitro findings, which demonstrated the anti-metabolic and radiosensitising activity of pyrazinib (P3), suggest that pyrazinib (P3) has the potential to function as an anti-cancer agent in vivo 7. Of note, the fresh patient samples used in our hypoxia metabolism study were all male; whilst OAC is a male-dominant disease, previous studies have suggested a gender bias may exist in oesophageal cancer patients in relation to treatment response, thus it would be important to address the influence of gender on hypoxia metabolism in a much larger prospective study across multiple sites 30. Tumour metabolism is tightly linked with both the local and systemic inflammatory response. OAC is an inflammation-driven upper gastrointestinal cancer 11, thus we sought to characterise the inflammatory profile of the tumour conditioned media from OAC treatment-naïve biopsies. A multiplex ELISA demonstrated the heterogeneity of secreted factors from OAC biopsies, including inflammatory, angiogenic and vascular injury, chemokine, cytokine and TH17-related proteins.
To investigate a potential relationship between metabolic rate and the OAC inflammatory secretion profile, we correlated protein secretions with baseline real-time metabolic rate in matched patient samples. OCR was significantly correlated with ECAR in all patients at baseline. Both OCR and ECAR were significantly positively correlated with the secretion of VEGF-A, IL-1RA and TSLP in OAC treatment-naïve biopsies. In addition, ECAR was positively correlated with IL-13, MIP-3α and TNF-α. Oxidative phosphorylation and glycolysis are known to be influenced by systemic inflammation thus it is not surprising that metabolic rate correlated with a number of inflammatory mediators 31 . The significant correlation between VEGF-A secretion and OCR and ECAR highlights the tight links which exist between the two biological processes of angiogenesis and metabolism, whereby there are elevated levels of the angiogenic mediators VEGF-A and TSLP in tumours with higher levels of oxidative phosphorylation 32,33 . TNF-α was only significantly correlated with ECAR and not OCR in OAC treatment-naïve biopsies. TNF-α was previously shown to induce aerobic metabolism in prostate epithelial cells and glycolytic reliance in mammary carcinoma cells 34,35 . In addition, in a previous study, MIP-3α was shown to be significantly correlated with the levels of HIF-1α, a mediator of glycolytic induction, in Barrett's oesophagus tissue 36 . Treatment of OAC treatment-naïve biopsies with pyrazinib (P3) significantly inhibited IL-1β secretion and increased IL-3 and IL-17B secretion. The significant reduction of IL-1β secretion following pyrazinib (P3) treatment is a critical finding which may contribute to pyrazinib's (P3) anti-cancer effect. 
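The per-factor correlations discussed above (e.g. OCR vs. VEGF-A, r = 0.7091 across 10 matched patients) reduce to a correlation coefficient over paired patient values. A pure-Python sketch using Pearson's coefficient; the paper does not state which coefficient was used, so Pearson here is an assumption, and the data below are illustrative only:

```python
from math import sqrt

def pearson_r(x: list[float], y: list[float]) -> float:
    """Pearson correlation coefficient between two equal-length series,
    e.g. basal OCR vs. a secreted factor across matched patients."""
    n = len(x)
    mean_x, mean_y = sum(x) / n, sum(y) / n
    sxy = sum((a - mean_x) * (b - mean_y) for a, b in zip(x, y))
    sxx = sum((a - mean_x) ** 2 for a in x)
    syy = sum((b - mean_y) ** 2 for b in y)
    return sxy / sqrt(sxx * syy)

# Hypothetical OCR and VEGF-A values for five patients -- illustrative only.
ocr = [60.0, 85.0, 110.0, 140.0, 95.0]
vegf_a = [120.0, 180.0, 260.0, 310.0, 200.0]
r = pearson_r(ocr, vegf_a)  # strongly positive, as in the reported data
```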
Previous work by our department found elevated levels of IL-1β in tumour samples compared to squamous epithelium from the same patients, and IL-1β levels were significantly decreased in the TCM generated from post-treatment biopsies compared to the TCM generated from pre-treatment biopsies in matched patients who achieved a complete pathological response to neoCRT 9. In addition, IL-1β was previously shown to be significantly correlated with clinical outcome in oesophageal SCC, whereby patients with IL-1β-positive tumours had a poor response to treatment compared to patients with IL-1β-negative tumours; the suggested underlying mechanism of this difference in tumour response was increased epithelial-mesenchymal transition and aggressive tumour growth in IL-1β-positive tumours 37. Inhibition of IL-1β was shown to attenuate tumour growth and invasion and ameliorate treatment resistance 37. Furthermore, in a study by Deans et al., tumoural IL-1β expression levels were significantly correlated with systemic inflammation as measured by C-reactive protein levels, a marker of reduced survival in oesophagogastric cancer patients 38. In an in vivo melanoma model, IL-1β inhibition was shown to stably reduce tumour growth by limiting inflammation and inducing the maturation of immature myeloid cells into M1 macrophages. Furthermore, in an in vitro model of pancreatic chemoresistance, administration of an IL-1 receptor blocking antibody, as a means of targeting IL-1β signalling, reduced NF-κB activation and the acquisition of chemoresistance in these cells 39. Reports from the literature suggest the significant inhibition of IL-1β in response to pyrazinib (P3) is a positive effect which may contribute to the anti-cancer activity of this drug, in addition to its effects on oxidative phosphorylation in vitro and ex vivo and radiosensitivity in vitro.
IL-3 has been reported to exert paradoxical effects in cancer, including pro-tumourigenic as well as anti-tumourigenic cellular responses 40,41. Importantly, IL-3 has been reported to play an important role in anti-tumoural immunity. IL-3 is able to enhance antigen presentation by dendritic cells and activate macrophages to increase the expression of class II MHC molecules and IL-1 42. Elevated gene expression of IL-3 in fibrosarcoma xenografts (FSA-JmIL-3 tumours) was associated with an enhanced response to radiation compared to parental tumours, where FSA-JmIL-3 tumours were associated with increased lymphocyte infiltration and elicited immune responses 41, suggesting that the enhanced secretion of IL-3 following pyrazinib (P3) treatment is a positive effect of this small molecule compound. Dendritic cells are professional antigen presenting cells which are responsible for the induction of antigen-specific T cell responses; thus it is critical that the function of dendritic cells remains intact even in the presence of our small molecule compound pyrazinib (P3). In this study, we investigated the effect of both the secretions in the TCM and pyrazinib (P3) on the expression of dendritic cell maturation markers. Increased expression of several cell surface markers, including CD54, PD-L1, CD40, CD83 and HLA-DR, on dendritic cells is associated with dendritic cell maturation and T cell activating ability 43. Direct treatment with pyrazinib (P3) showed no effect on CD83, CD54, PD-L1, CD40 and HLA-DR expression in response to LPS, whereas both control and pyrazinib (P3)-treated TCM significantly reduced the expression of CD83, suggesting that mediators secreted from the tumour microenvironment specifically exert an inhibitory effect on dendritic cell maturation. Pyrazinib (P3) treatment does not negatively affect the expression of dendritic cell maturation markers, indicating the function of these cells remains intact even in the presence of this compound.
This is a critical finding, as a previous study by our group demonstrated that both control and bevacizumab-treated colorectal conditioned media significantly inhibited LPS-induced maturation and function of dendritic cells 43. The adaptive immune system is associated with tumour control and elimination, particularly the TH1 phenotype 14,44,45. Pyrazinib (P3) did not significantly alter the viability of a Jurkat T cell line. This is an important finding, which highlights the potential to use pyrazinib (P3) within the clinical setting because it does not deplete or kill T cells. We also examined the effect of pyrazinib on Jurkat T cell activation status, using pre-activated Jurkat cells. Pyrazinib (P3) has been previously shown to enhance radiosensitivity in vitro, and a study by Voos et al. demonstrated that radiation doses of ≥ 2 Gy activate Jurkat T cells and stimulate pro-inflammatory immune responses through upregulation of IL-2, IFN-γ and CD25 surface expression 46. Importantly, pyrazinib (P3) did not affect expression of T cell activation markers by activated or unactivated Jurkat cells. CD8+ T cells are more susceptible to becoming exhausted upon constitutive activation than CD4+ T cells 47, and one of the limitations of this study is the use of Jurkat T cells in this in vitro setting. Further research is required in relation to this study, both in patient-derived PBMCs and in the in vivo setting at multiple timepoints, to gain a better understanding of the effect pyrazinib (P3) may have on other immune cells within the tumour microenvironment. In summary, we report a new method for profiling the metabolic rate of human OAC tumour biopsies in real time, highlighting the importance of the oxidative phosphorylation pathway in OAC tumours, and showing that these tumours can adapt their metabolic profiles in line with changes in oxygen tension.
We have demonstrated the novel anti-metabolic and anti-inflammatory action of pyrazinib (P3) in ex vivo OAC treatment-naïve biopsies, in addition to its radiosensitising properties. It will be critical to further evaluate the anti-cancer potential of pyrazinib (P3) in a murine model of OAC.

Following written informed consent, diagnostic biopsy specimens were taken from OAC patients being treated with curative intent, by a qualified endoscopist, prior to neo-adjuvant therapy. Histologic confirmation of tumour tissue in biopsies was performed by a pathologist using routine haematoxylin and eosin staining. All patient tumour tissue used in this study was taken prior to the initiation of neo-adjuvant treatment (treatment-naïve tissues). All experimental protocols were approved by the joint St James's Hospital/AMNCH ethical review board and carried out in accordance with the relevant guidelines of that board.

Methods

Real-time metabolic profiling of OAC tumour biopsies. Three biopsies per patient were collected at endoscopy, immediately placed on saline-soaked gauze and transported to the laboratory within 10 min. Each biopsy was placed into a separate well of a 24-well XF24 Islet Capture Microplate (Agilent Technologies, Santa Clara, CA, USA) containing 1 mL M199 medium (Gibco) supplemented with 10% FBS (Gibco), 1 μg/mL insulin (Sigma) and 1% penicillin/streptomycin (Gibco). 1 mL of complete M199 was placed in four background control wells. An XF24 capture screen was placed over each biopsy to prevent the biopsy from touching the utility plate probes during the assay. The XF24 microplate was placed at 37 °C for 30 min to allow biopsies to equilibrate. Three baseline measurements of OCR and ECAR were taken over 24 min, consisting of three repeats of mix (3 min), wait (2 min) and measurement (3 min), to establish basal respiration, using a Seahorse Biosciences XFe24 analyser (Agilent Technologies, Santa Clara, CA, USA).
Basal respiration for each patient was established by taking the average OCR and ECAR readout from the three individual biopsies obtained from the same patient. Following basal metabolic profiling of biopsies, capture screens were removed and biopsies and corresponding media were transferred to a new XF24 islet capture microplate and treated with one of the following: 0.1% DMSO (control), 6 µM oligomycin (positive control) or 10 µM pyrazinib (P3). Following treatment, biopsies were cultured for 24 h at 37 °C in 5% CO2/95% air. Following 24 h culture, a capture screen was placed on each biopsy and three basal measurements of OCR and ECAR were taken over 24 min, consisting of three repeats of mix (3 min), wait (2 min) and measurement (3 min), to establish the effect of drug treatment with our novel small molecule pyrazinib (P3) on OCR and ECAR. The effect of treatment was determined as the percentage change in metabolic rate readout from the baseline reading of each individual biopsy to the reading following treatment of that individual biopsy. The metabolic rate of each biopsy was normalised to tumour protein content using the BCA assay (Pierce), and tumour biopsies were snap frozen and stored at − 80 °C. Tumour conditioned media (TCM) was collected and stored at − 80 °C.

Scientific Reports | (2020) 10:12105 | https://doi.org/10.1038/s41598-020-68777-7

Real-time metabolic profiling of OAC tumour biopsies cultured under hypoxic conditions. Basal metabolic rate was determined as described above. Following evaluation of basal OCR and ECAR using the Seahorse Biosciences XFe24 analyser, biopsies and corresponding media were transferred to a new XF24 islet capture microplate and cultured in a Whitley H35 hypoxystation (Don Whitley Scientific) at 0.5% O2, 37 °C and 5% CO2 for 6 h.
Following the 6 h culture, capture screens were placed over each biopsy and the plate was transferred to a Whitley i2 workstation containing the XFe24 Seahorse analyser maintained at 0.5% O2. Real-time OCR and ECAR were then assessed under 0.5% O2: three measurements were taken over 24 min, consisting of three repeats of mix (3 min), wait (2 min) and measurement (3 min), to establish the effect of 6 h of hypoxic culture on the real-time metabolic rate of OAC patient biopsies. Following metabolic profiling, capture screens were removed and biopsies and corresponding media were transferred to a new XF24 islet capture microplate and treated with one of the following: 0.1% DMSO (control), 6 µM oligomycin (positive control) or 10 µM pyrazinib (P3) for 14 h under 0.5% O2 at 37 °C in 5% CO2. Following the 14 h treatment, real-time OCR and ECAR measurements were taken over 24 min, consisting of three repeats of mix (3 min), wait (2 min) and measurement (3 min), with the Seahorse Biosciences XFe24 analyser maintained under 0.5% O2, to establish the effect of treatment on the metabolic rate of OAC biopsies under hypoxia. Following metabolic profiling, capture screens were removed, biopsies were snap frozen, and both biopsies and TCM were stored at −80 °C. The metabolic rate of each biopsy was normalised to tumour protein content using the BCA assay (Pierce).
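The quantitative steps of this protocol, averaging the three measurement cycles per biopsy and the three biopsies per patient, expressing the treatment effect as a percentage change from each biopsy's own baseline, and normalising rates to protein content, can be sketched as below. This is a minimal illustration only; the function names and numeric values are hypothetical, not patient data.

```python
def basal_rate(biopsy_cycles):
    """Per-patient basal rate: mean of the measurement cycles of each
    biopsy, then the mean across the (typically three) biopsies."""
    per_biopsy = [sum(c) / len(c) for c in biopsy_cycles]
    return sum(per_biopsy) / len(per_biopsy)

def percent_change(baseline, post_treatment):
    """Treatment effect as the percentage change of a biopsy's OCR or
    ECAR readout relative to its own baseline reading."""
    return 100.0 * (post_treatment - baseline) / baseline

def normalise_to_protein(rate, protein):
    """Normalise a metabolic rate to tumour protein content (BCA assay),
    e.g. pmol O2/min per microgram of protein."""
    return rate / protein

# illustrative OCR readouts (three biopsies x three measurement cycles)
ocr = [[120.0, 118.0, 122.0], [95.0, 97.0, 96.0], [110.0, 111.0, 109.0]]
baseline = basal_rate(ocr)
effect = percent_change(100.0, 75.0)   # a drop to 75% of baseline reads as -25.0
```

The per-biopsy pairing matters: each biopsy is compared with its own baseline, so inter-biopsy variability does not inflate the apparent treatment effect.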
The Epilogue in Doctor Faustus: The Petrarchan Context

Metaphors used in the Epilogue in Doctor Faustus, particularly the cut branch and Apollo's burned laurel bough, are indicative of Marlowe's intellectual involvement with Petrarch and of the former's role in the literary circle centered on the Countess of Pembroke. His Latin epistle to Mary Sidney in Thomas Watson's Amyntas (1592) repeats similar metaphors, and the combination in the Epilogue of these images with that of the "forward wits" points both to Petrarch's Sonnet 269 ("Rotta è l'alta colonna e 'l verde lauro") and to Sonnet 307 ("I' pensava assai destro esser su l'ale"). In fact, lines in the Epilogue are strongly evocative of some verses in Sonnet 307, where Petrarch ponders the theme of overreaching. The Epilogue would thus document a continuation of the interest in Petrarch that is so evident in the Tamburlaine plays.

These powerful lines establish what Harry Levin termed the play's "celestial-infernal antithesis"2 by juxtaposing images of learning and ascent against images of punishment and downfall. Marlowe describes the condemnation of overreaching ambition and the clash of irreconcilable forces in a manner recalling a well-known Renaissance emblem featuring a scholar with wings attached to his raised right arm and a heavy rock in his left. The image, with its accompanying four-verse Latin poem, was first printed by Andrea Alciati in 1531 and repeated by Geoffrey Whitney in 1586.3 By sheer coincidence, another variant of the emblem is also printed on the frontispiece of the 1604 version of Doctor Faustus, being the personal emblem of the tragedy's printer. In addition to bearing on the protagonist's desperate state in his final soliloquy, where the scholar is torn precisely between heaven and hell, Marlowe's verses also point to his own ambitions as a poet and to his choice of literary models, in this case Petrarch, who seems to have been one of his favourite poets.4
Of course, Marlowe did not leave us any sonnet sequence, and the chances are very slight indeed that we shall ever come across sonnets by Marlowe.5 After surveying the critics who have written on Marlowe and the sonnet, Patrick Cheney conjectures that "we may at least have three Marlovian sonnets."6 Possibly, D. Nicholas Ransom's suggestion that Marlowe hides behind "Phaeton" in the dedicatory sonnet to John Florio's First Fruites (1591) is the most likely one, but it is neither a Petrarchan sonnet in form nor in content.7 Petrarch's sonnets were imitated by several generations of Elizabethan poets, from the early formal experimenters Wyatt and Surrey, through Sackville and Gascoigne, to accomplished sonneteers like Sidney, Spenser, Daniel and Shakespeare.8 Moreover, sonnets did not appear solely in individual collections of poetry, or "sequences," the term derived from George Gascoigne's terza sequenza of sonnets in The Aduentures of Master F.J. (1573).

2 Levin 1961: 154. 3 The first emblem in Alciati 1531: Sig. A8r, "Paupertatem summis ingeniis obesse ne provehantur" ["That poverty is an obstacle to great talents, to stop them advancing"]; see also Whitney 1969: 40-41. Although mentioning the scholar's proverbial poverty as an obstacle, Whitney's verses are more Faustian in their mention of the scholar's "desire" to win "immortal fame" (2) and "will ... mount aloft" (7). 4 Eriksen 1986: 13-25. 5 If we consider Marlowe's propensity to repeat his own conceits in different texts (cf. Levin 1961: 148-149), it is not inconceivable that he may even have composed his own, no longer extant, version of Sonnet 269 to commemorate Sir Philip. As for the unlikeliness that Marlowe was the author of 16 sonnets in a ms. by "C.M.," see Chauduri 1988: 199-216. 6 Cheney 1997.
Sonnets also figured as dedications, epitaphs and other types of paratextual materials, or were included in anthologies, in novelle such as The Unfortunate Traveller (1594), and in plays such as Romeo and Juliet and As You Like It. Of course, Shakespeare is not the first dramatist to incorporate sonnets or sonnet material into plays. His famous contemporary Marlowe did so too; his extant sonnets are few and incomplete, and he left us no such sequence, unless we accept the probable assumption that his translation of Ovid's Elegies is his main contribution to the vogue.9 In this he followed his friend Thomas Watson in the latter's orthodox, but at the same time unorthodox, response to Petrarch in his two sonnet sequences.10 Paul H. Kocher was the first to point out that Marlowe had integrated sonnets into his drama, when in 1945 he identified a blank-verse sonnet embedded in a speech in Tamburlaine, Part One, V.ii.11 Then in 1976 James Robinson Howe argued that Marlowe drew on Giordano Bruno's sonnet sequence De gl'heroici furori (1585) for its metaphorical mode of speech,12 and further evidence that Marlowe used the philosopher's sonnet sequence in Doctor Faustus appeared in 1987.13 A third stage in the search for Marlowe the sonneteer was Nicholas Ransom's reasonable proposal that Marlowe is the "Phaeton" who contributed a commendatory sonnet to John Florio's Second Frutes.14 In actual fact, Marlowe's concern with the Italian sonnet also extends to Petrarch himself, when the Canzoniere furnished metaphors for Tamburlaine's speeches to Zenocrate.15

7 D. Nicholas Ransom, "A Marlowe Sonnet?", Publications of the Arkansas Philological Association, 1979, vol. 5, 1-8. 8 See e.g. Mortimer 2005 and Kirkpatrick 1995. 9 Cheney 1997: 331 and Brown 2003. 10 Watson 1582 and 1593. 11 Kocher 1945. 12 Robinson Howe 1976. 13 Eriksen 1987. See also Thomas and William Tydeman 2001: 176.
Even so, it is surprising that some of Marlowe's best-known lines, the opening lines of the Epilogue in Doctor Faustus, emulate the opening of a well-known sonnet by the Italian poet: Cut is the branch that might haue growne full straight, / And burned is Apollo's Lawrell bough, / That some time grew within this learned man. Apart from noting the elegiac tone of voice and the strong emphasis on loss, we recognize in Marlowe's verses reworked versions of Petrarch's striking imagery: the cut branch and the laurel. The "burned ... Lawrell bough" echoes the "verde lauro" that refers to the loss of Laura and, as I shall explain in greater detail below, the "cut branch" echoes the "Rotta ... colonna" [broken column] that refers to the death of Petrarch's patron.16 However, in Marlowe's text it also marks the loss of hope of redemption, as the branch often expressed hope.17 In fact, the laurel per se was a commonplace for learning and virtue and was used as such in emblems from Alciati onwards,18 a fact cited by modern editors of Doctor Faustus.19 Earlier critics have argued in favour of another source for the metaphor of the branch, proposing that it derives from Thomas Churchyard's use of a couplet containing tree metaphors in "The Tragedy of Shore's Wife" in A Mirroure for Magistrates: They brake the bowes and shakte the tree by sleight / And bent the wand that might haue growne full streight.20 Jump notes that the dramatist "evidently allowed his moving line to be suggested ... by [his] minor contemporary" (p. 179). Although Churchyard has the phrase "bent the wand" where Marlowe writes "cut is the branch," it is conceivable that Marlowe echoes the full phrase "that might haue growne full streight" in the Epilogue, but the similarity ends there.

14 See Ransom's proposal (1979: 1-8) that Marlowe is the "Phaeton" who in 1592 contributed a sonnet to Florio's Second Frutes. 15 Eriksen 1986: 13-25. 16 Durling 1981: 442.
For even though the distinctly tragic mode of the passage in "The Tragedy of Shore's Wife" supports a general link between the two relative clauses, the metaphorical and structural parallels between Sonnet 269 and the Epilogue are far closer and more specific. Petrarch presents Laura throughout the Canzoniere as Daphne, who is metamorphosed into a laurel to escape the wanton embraces of Apollo. Still, Sonnet 269 is not only a commemorative sonnet on Petrarch's beloved; the poem also has a political context, being a meditation on the death of his patron. The sonnet does in fact draw on a tradition of poems lamenting the untimely death of political leaders of great, but unfulfilled, promise. One such poem that directly influenced Sonnet 269 is found in the widely used treatise De poetria nova (c. 1210) by the Norman grammarian and rhetorician Geoffrey de Vinsauf. In the treatise the author incorporates an elegy on the death of Richard I of England, the "Luctus Ricardi," in which the slain king is likened to a broken column, whose fall will cause "Anglia" to mourn: Iam cito rumpetur speculum, speculatio cuius / Gloria tanta tibi; sidus patietur eclipsim, / A quo fulges; nutabit rupta columna, / Unde trahis vires; ... ("For soon will be broken the mirror whose contemplation was so great a glory to you; the star from which you shine will suffer eclipse; the broken column from which you draw your strength will totter; ..." [340-343]).21

De Vinsauf's striking metaphor of "the broken column" [rupta columna] in the "Luctus Ricardi" is clearly the source of the "rotta colonna" in Petrarch's sonnet, and one that serves his special case particularly well, for the metaphor of the column [It. colonna] points to a member of one of the oldest aristocratic families of Rome.22 The powerful Colonna had held important offices within the Church for centuries and rose to the Holy See when Odo Colonna was elected pope at the Council of Konstanz under the name of Martin V (1417-31). In Sonnet 269, however, the metaphor refers to his earlier relative, Cardinal Giovanni Colonna, whose sudden death in 1348 marked the end of the poet's hopes for a politically unified Italy. It was no doubt the popularity among poets of De poetria nova and the widespread circulation of Petrarch's Il Canzoniere that caused Sir Thomas Wyatt to reshape the "political" Sonnet 269 into a lament on the death of his own patron, Thomas Cromwell.23 His motivation being purely political, he consequently leaves out any mention of Petrarch's favourite metaphor for his beloved Laura, Apollo's "onorata e sacra fronde": The pillar perish'd is whereto I leant, / The strongest stay of my unquiet mind; / The like of it no man again can find, / From east to west still seeking though he went, / To mine unhap. ...24 It is not unlikely that Wyatt would have known not only Petrarch but also the "English" origin of the metaphor in the "Luctus Ricardi." In Sonnet 269 the "verde lauro" is said to have comforted the poet when the loss of his patron frustrated his hope of preferment ("facean ombra al mio stanco pensero"), a point which would make the parallel with "Apollo's Lawrell bough, / That some time grew within this learned man" more exact.

17 I am grateful to Peter Young, who pointed out that power is added to Marlowe's lines because in emblems the branch often asserts hope, as the commonplace of new life, as in John 15.1-7; cf. O'Brien 1970: 1-11. Another example is Pericles, Sc. 6, 44ff., where "A withered branch" is Pericles's impresa, with the motto "In hac spe vivo." See also Wells 2006: 222. In Petrarch, too, the broken laurel signals loss of hope. 18 Henkel and Schöne 1996: 202-204. 19 Gill 1990: 87 and Bevington and Rasmussen 1993. 20 See Campbell 1938: 378; Thaler 1923: 89-92; Martin 1950: 182, and Jump 1988: 179. 21 Gallo 1971: 32-33. 22 The poet deploys the metaphor at least four times in the Canzoniere (10, 266, 268, and 269), most strikingly perhaps in Sonnet 269. 23 See Muir 1963: 172-210.
In Marlowe's Epilogue, too, the laurel comes close to being an image of lost spiritual sustenance, in addition to being an emblem of fame, learning and art, alluding to Faustus's scorching mistress by means of the verb "burn."25 Then, too, the "Lawrell bough" recalls the epithet "coniurer Laureate" which appears in the A-version (l. 276 A). In addition to these points of resemblance, the two passages share a parallel semantic and syntactic structure comprising two parallel relative clauses, leading to an acoustically well-balanced verse (2117).26 The idea of adapting Sonnet 269 to a new poetic context may have been prompted by the fame of Wyatt's successful poem, but we should not forget that the poet-dramatist had first-hand knowledge of Petrarch.27 Marlowe seized on the laurel as an emblem of learning, changing the colonna-image to accord with the laurel, a change of metaphor possibly in keeping with the imagery in Petrarch's Sonnet 10 ("Gloriosa columna in cui s'appoggia"), in which Petrarch aligns another member of the family, Stefano Colonna the Elder, with tall trees as he remarks that only Colonna's exile from Rome cuts, or curtails ("tronchi"), the poet's happiness.

24 Yeowell 1904: 18. 25 Helen not only "burnt the toplesse Towres of Ilium" (l. 1875); she also is the one whom Faustus calls upon to sack Wittenberg (l. 1882), the source of his learning and fame. 26 Both passages display an inverted past tense in initial position ("rotta è" as against "cut is"), while Petrarch's implied rotto è [broken is] corresponds syntactically to Marlowe's "burned is." These inverted forms are followed by simple tense forms signifying duration ("facean ombra" as against "some time grew"), which in turn latch on to yet another present perfect tense ("perduto ho" [I have lost] versus "is gone") in the lines that follow, the tense alterations being intriguingly close.
Then, too, Sonnet 269 is a lament addressed to Death personified ("Morte," 5); Petrarch perceives the sudden deaths of his patron and of his idolized mistress as his personal punishment for his own careless living and pride ("viver lieto e gire altero," 7). He places this punishment within the irrevocable scheme of fate, and he despairs because he knows that there exists no remedy: "Ma se consentimento è di destino, / che poss'io più ..." (9-10). The sonnet concludes with a brief meditation on the swiftness with which calamity strikes: com' perde agevolmente in un mattino / quel che 'n molti anni a gran pena s'acquista (13-14) ("How easily man loses in one morning what is bought with great pain in many years!" [author's translation]). Such ideas can serve as a comment on the scholars' discovery of Faustus in the morning. Although four related images (the cut column, or branch; the laurel; the references to a loss implying death; and destiny) appear in both texts in the same order, there is no simple one-to-one relationship between sonnet and epilogue. Only the Epilogue's striking initial images link it firmly to Petrarch's highly wrought poem. As Dino Provenzal has remarked in his edition of Il Canzoniere, Petrarch turns his moralizing sonnet monologue into a tangle of artificial similarities ("tutto un intrico di similitudini artificiose").28 It has an "oblique" structure where the many similes (1-12) lead to the application in the last two lines. Marlowe, who takes his cue from Petrarch's initial images, joins them together in a new and balanced construction that gives added weight to the paradoxical ideas conveyed. The first part of that new structure is kept together by two parallel main clauses followed by two parallel relative clauses (2114-2116), and an application in an acoustically chiastic verse: "Faustus is gone, regard his hellish fall" (2117).

27 See above at note 4. 28 Petrarch 1954: 341.
In the Epilogue's second part he balances this structure against two expanded parallel relative clauses introduced by an anaphoric "whose" (2118, 2120), and in both parts he uses rhetorical repetitions to establish and to cement internal relationships. Like Petrarch, then, he marshals his rhetorical skills to reinforce the paradoxical and enticing attraction of forbidden knowledge. On turning to the actual phrasing of ideas in the second half of the Epilogue, it should now come as no surprise that Marlowe here, too, reveals his reading in Petrarch. The poem is Sonnet 307, "I' pensava assai destro esser su l'ale" ("I thought I was skilful enough in flight"), in which the metaphor of the ill-fated branch figures prominently. In fact, some of the lines in the second part of the Epilogue read almost like translations of Sonnet 307: another set of verses that metaphorically continues the imagery of Petrarch on the theme of overreaching. The most striking example is, perhaps, Marlowe's warning to "forward wits" not "to practise more then heauenly power permits" (ll. 2120-2121). The lines are strongly evocative of some verses in Sonnet 307, which I here quote in full: I' pensava assai destro esser su l'ale / (non per lor forza, ma di chi le spiega) / per gir cantando a quel bel nodo eguale / onde Morte m'assolve, Amor mi lega. / Trovaimi a l'opra via più lento e frale / d'un picciol ramo cui gran fascio piega, / e dissi: A cader va chi troppo sale, / né si fa ben da uom quel che 'l ciel nega. / Mai non porìa volar penna d'ingegno, / non che stil grave o lingua, ove Natura / volò tessendo il mio dolce ritegno; / seguilla Amor con sì mirabil cura / in adornarlo, ch'i' non era degno / pur de la vista; ma fu mia ventura.29 Furthermore, this happens to be the second sonnet on failed ambition in which the overreacher is compared to a branch. Actually, this ill-fated branch, too, is not allowed to grow full straight ("un picciol ramo cui gran fascio piega"), because it is "bent by a great burden".
Interestingly, when we read on in the quatrain we see that the metaphor of the branch is followed by what may be seen as the source for the Epilogue's concluding words on Faustus: Whose fiendfull fortune may exhort the wise / Onely to wonder at vnlawfull things: / Whose deepnesse doth intice such forward wits / To practise more then heauenly power permits. (ll. 2118-2121). The warning to "forward wits" not "[t]o practise more then heauenly power permits" closely parallels Petrarch's verse "né si fa ben da uom quel che 'l ciel nega" ("nor can a man well do what heaven does not permit"; or, to render the line more closely, "as a human [né da uom] you cannot easily do [né fa ben] what heaven forbids"). For as the next line informs us, "man" here refers to a "penna d'ingegno" (literally, "feather of wit"). It is a term that very likely inspired Marlowe's suggestive metaphor of "forward wits." Then, too, the forward, and fallen, wit in the Epilogue of Doctor Faustus recalls Icarus in the Prologue, whose "waxen wings did mount aboue his reach" (B 21); thus the two references to transgressive flying frame the tragedy, as it were.

29 In Durling's translation (1981: 486) this reads: I thought I was skilful enough in flight (not by my own power, but by his who spreads my wings) to sing worthily of that lovely knot from which Death looses me, with which Love binds me. I found myself much more slow and frail in operation than a little branch bent by a great burden, and I said: He flies to fall who mounts too high, nor can a man well do what the heavens deny him. Never could any pinion of wit, let alone a heavy style or tongue, fly so high as Nature did when she made my sweet impediment; Love followed Nature with such marvelous care to adorn her that I was not worthy even to see her: but my good fortune willed it.
And if we accept the likely identification of Marlowe as Shakespeare's rival poet, then Shakespeare's Sonnet 78 may provide indirect confirmation of Marlowe's translation of Petrarch's "penna d'ingegno" as "forward wits": Southampton's eyes, Shakespeare tells us, "have added feathers to the learned's wing" (7),30 so that he would appear to apply Petrarch's phrase "penna d'ingegno" to the poetry of his Icarus-like rival, or to the self-styled "Phaeton" who authored the sonnet in Florio's Second Fruites.31 As the probable date of the rival-poet sonnet is as early as the autumn of 1592 and the likely date of composition of Doctor Faustus is about 1589, the Petrarchan image would provide an interesting context.32 It may well be that Shakespeare's phrase reflects a desire to pinpoint his rival's reshaping of Petrarch's conceits in Doctor Faustus,33 but that he thereby possibly opened himself to an attack by Robert Greene for sporting borrowed "feathers,"34 because feathers were also associated with plagiarism. Although Petrarch and Marlowe make quite plain that there is a limit to transgression, both allow aspiring minds the freedom to speculate about forbidden things as long as they do not attempt to convert theory into practice. Petrarch's "penna d'ingegno" may therefore soar only in learned writing or speech ("stil grave o lingua"), which is also what he does, whereas Marlowe's "forward wits" may "onely ... wonder" at the deepness of things unlawful.

30 Durling's rendering "a heavy style or tongue" is infelicitous and obscures the fact that Petrarch refers to written or spoken compositions in the high or learned style. 31 See above at note 10. 32 See Jump 1968: 104 and Gill 1971: 389. 33 Rowse (1981) sees Sonnet 78 as one of the sonnets that support the hypothesis that Marlowe was Shakespeare's rival. See the recent presentation in Wells (2006: 75-195) of Marlowe and Shakespeare as likely associates or friends; also Eriksen 2008: 191-200. 34 Eriksen 2008.
Both in terms of semantics and syntax, the relationship between "I' pensava assai destro esser su l'ale" and the Epilogue's final lines is close,35 suggesting the sonnet context that prompted them. Naturally, Marlowe adds a new and sinister dimension to the feelings of human inadequacy lamented by Petrarch. Petrarch's persona and Faustus have both been overconfident as regards their ability to obtain their goals, but the difference in the degree of their ambitions reminds us that Marlowe's magus, who wishes to "mount, and ascend to heauen" (2064), is a contemporary of Bruno.36 The resulting "Petrarchan" Epilogue, therefore, owes its coherence and suggestiveness to Marlowe's imagination and ability to reshape what he borrowed. The probable echo from Shore's Wife in the Epilogue's first line may serve to explain why the Italian provenance of its main conceits has escaped the notice of critics: the echo of Churchyard and a focus on native antecedents could be said to have blocked further investigation into the matter.

Marlowe was to return to the imagery employed in the Faustus Epilogue in another of his lesser-known paratexts, one written in 1592: his Latin prose epistle to the Countess of Pembroke prefaced to Thomas Watson's Latin epic Amintae gaudia.37 Here he addresses the Countess as "laurigera stirpe prognata Delia; Sydnaei vatis Apollinei genuina soror," which Mark Eccles renders as "Delia born of a laurel-crowned race, true sister of Sidney, the bard of Apollo." The phrases "laurigera stirpe" and "vatis Apollinei" call to mind "Apollo's Lawrell bough," which figures so prominently in the Epilogue. Then, too, when Mary Sidney appears as another Laura, "laurigera stirpe prognata Delia," Marlowe's Latin evokes the opening lines of Sonnet 269, and to readers acquainted with both, Sir Philip almost becomes an English equivalent to Giovanni Colonna, the prematurely deceased statesman and man of letters. In the epistle's combination of metaphors it would seem that Marlowe presents himself as another Petrarch, an Ovidian poet par excellence. This is admittedly speculative, but it is certain that the epistle belongs in a sonnet context, although not in that alone, because here we also witness a Marlowe who does his utmost to present himself as a Latin poet.38 It is characteristic of this point in his career and life, I think, that the bid for patronage to Mary Sidney, the "Delia" of Samuel Daniel's sonnet sequence, should also include the only direct reference to Marlowe's own sonnets.39 After having alluded to his Ovidian poetry in the phrase "litorea Myrtus Veneris" ("the seashore myrtle of Venus"), he chooses to refer to another group of poems as "Nymphae Peneiae semper virens coma," that is, as "the Peneian nymph's ever-green (or ever-growing) hair."40 It is easy to see in these phrases allusions to poetry written in imitation of Petrarch's sonnets to Laura, often described as Daphne or a laurel, or, less likely, to his own translations of Ovid.41 This being the case, the Epilogue in Doctor Faustus presents itself as the poetic creation in which we get the clearest indication of Marlowe's continued intellectual involvement with Petrarch.

35 Again we notice the use of similar constructions ("non che" versus "onely") and that Petrarch's "grave" (solemn, learned, deep, etc.) easily could have prompted the word "deepness" in Marlowe's text. 36 Eriksen 1985a: 463-65, 1985b: 49-74, 1987, and Gatti 1989. 37 I quote Maclure 1968. 38 Cheney 1997: 331. 39 I discuss what could be blank verse versions of two of these sonnets in Eriksen 1986. 40 I suggest that Marlowe alludes to Mary Sidney in his "Phaeton to his Friend Florio" as well: "So when that all our English wits lay dead (Except the laurel that is evergreen)" (ll. 10-11). The poet and Marlowe's would-be patron here make an appearance as a wit crowned by laurels; compare the phrases laurigera stirpe and semper virens coma. 41 Brown 2004: 106-126.

Works Cited
Alciati, Andreas. 1531. Emblematum Liber. Augsburg.
Whitney, Geoffrey. 1586. A Choice of Emblemes. Leyden.
Has JWST already falsified dark-matter-driven galaxy formation?

The James Webb Space Telescope (JWST) discovered several luminous high-redshift galaxy candidates with stellar masses of $M_{*} \gtrsim 10^{9} \, \rm{M_{\odot}}$ at photometric redshifts $z_{\mathrm{phot}} \gtrsim 10$, which allows one to constrain galaxy and structure formation models. For example, Adams et al. identified the candidate ID 1514 with $\log_{10}(M_{*}/M_{\odot}) = {9.8}_{-0.2}^{+0.2}$ located at $z_{\mathrm{phot}} = 9.85_{-0.12}^{+0.18}$, and Naidu et al. found even more distant candidates, labeled GL-z11 and GL-z13, with $\log_{10}(M_{*}/M_{\odot}) = 9.4_{-0.3}^{+0.3}$ at $z_{\mathrm{phot}}=10.9_{-0.4}^{+0.5}$ and $\log_{10}(M_{*}/M_{\odot}) = 9.0_{-0.4}^{+0.3}$ at $z_{\mathrm{phot}} = 13.1_{-0.7}^{+0.8}$, respectively. Assessing the computations of the IllustrisTNG (TNG50-1 and TNG100-1) and EAGLE projects, we investigate whether the stellar mass buildup predicted by the $\Lambda$CDM paradigm is consistent with these observations, assuming that the early JWST calibration is correct and that the candidates are indeed located at $z \gtrsim 10$. Galaxies formed in the $\Lambda$CDM paradigm are more than an order of magnitude less massive in stars than the observed galaxy candidates, implying that the stellar mass buildup is more efficient in the early Universe than predicted by the $\Lambda$CDM models. This in turn would suggest that structure formation is more enhanced at $z \gtrsim 10$ than predicted by the $\Lambda$CDM framework. We show that different star formation histories could reduce the stellar masses of the galaxy candidates, alleviating the tension. Finally, we calculate the galaxy-wide initial mass function (gwIMF) of the galaxy candidates assuming the integrated galaxy IMF theory. The gwIMF becomes top-heavy for metal-poor star-forming galaxies, thereby decreasing the inferred stellar masses compared to an invariant canonical IMF.
INTRODUCTION

The formation of the first galaxies in the observed universe is a key question in modern astrophysics and one of the most important science goals of the recently launched James Webb Space Telescope (JWST). The Near Infrared Camera instrument (NIRCam; Rieke et al. 2005) of JWST observes the universe in the ≈ 0.6-5 µm regime. This allows the detection of objects at redshifts z ≳ 12, thus revealing the evolutionary stage of the early universe. The candidate CEERS-1749 (Naidu et al. 2022a) is located at z_phot = 16.0 +0.6/−0.6, but a secondary redshift solution of z ≈ 5 cannot be excluded (see their figure 1 and table 3). In this contribution, we aim to investigate if ID 1514, ID 14924, GL-z11, GL-z13, and CEERS-1749 are consistent with the hierarchical buildup of stellar mass as predicted by the ΛCDM paradigm (Efstathiou et al. 1990; Ostriker & Steinhardt 1995), using the Illustris The Next Generation (TNG; Pillepich et al. 2018a; Nelson et al. 2019b; Pillepich et al. 2019) and Evolution and Assembly of GaLaxies and their Environments (EAGLE; Crain et al. 2015; Schaye et al. 2015; McAlpine et al. 2016) projects. The EAGLE project is consistent with the Planck-2013 (Planck Collaboration I 2014) cosmology, with H_0 = 67.77 km s^-1 Mpc^-1, Ω_b,0 = 0.04825, Ω_m,0 = 0.307, Ω_Λ,0 = 0.693, σ_8 = 0.8288, and n_s = 0.9611 (see also table 1 of Schaye et al. 2015). Its simulations are run with a modification of the GADGET-3 smoothed particle hydrodynamics code (e.g., Springel 2005), also starting at z = 127 and self-consistently evolving the baryonic and dark matter particles up to the present day. From the publicly available subhalo catalogs, we use the two high-resolution realization runs RefL0025N0752 and RecalL0025N0752, and the two lower-resolution runs RefL0050N0752 and RefL0100N1504, at z = 15.13 (snapnum = 1) and z = 9.99 (snapnum = 2). The two high-resolution runs have a box size of 25 cMpc with an initial baryonic particle mass of m_b = 2.26 × 10^5 M_⊙ and a dark matter particle mass of m_dm = 1.21 × 10^6 M_⊙.
RefL0050N0752 and RefL0100N1504 have box sizes of 50 cMpc and 100 cMpc, respectively, and both have an initial baryonic particle mass of m_b = 1.81 × 10^6 M_⊙ and a dark matter particle mass of m_dm = 9.70 × 10^6 M_⊙.

RESULTS

The galaxy stellar mass function (GSMF) at redshifts z = 14.99, 11.98, 10.98, and 10.00 in the TNG runs and at z = 15.13 and 9.99 in the EAGLE runs is presented in Figure 1. The global peak of the distribution depends on the resolution and/or box size of the simulation runs: the formation of low-mass galaxies depends on the particle resolution, while small simulation boxes lack large-scale density fluctuations. As a consequence, simulation boxes that are not large enough would not allow the formation of large galaxy clusters, therefore hampering the growth of central (but also non-central) galaxies. Thus, we mainly focus on the larger simulation boxes TNG100-1 and RefL0100N1504. In the following, we compare the stellar mass buildup as predicted by the ΛCDM simulations with the masses of the observed high-redshift galaxy candidates ID 1514 (Adams et al. 2022), ID 14924 (Labbe et al. 2022),3 GL-z11, GL-z13 (Naidu et al. 2022b), and CEERS-1749 (Naidu et al. 2022a). For the TNG runs, ID 1514, located at z_phot = 9.85 +0.18/−0.12, and ID 14924, at z_phot = 9.92, are compared with simulated galaxies at z = 10.00. GL-z11 at z_phot = 10.9 +0.5/−0.4, GL-z13 at 13.1 +0.8/−0.7, and CEERS-1749 at 16.0 +0.6/−0.6 are compared with the simulations at z = 10.98, 11.98, and 14.99, respectively. The comparison with GL-z13 and CEERS-1749 is therewith more conservative, because snapshots corresponding to lower redshifts than observed are used, allowing the simulated galaxies to grow in stars for a longer time span than in the observed cases. The above stellar masses refer to all stellar particles bound to the considered subhalo, depending therewith on the subhalo-finding algorithm. The identification of subhalos can be disturbed, e.g.
by merger events, which frequently occur especially at high redshifts. Therefore, we also assess the maximum stellar masses of halos, which accounts both for the fact that the subhalo finder can split a galaxy into clumps, underestimating the total mass of the galaxy, and for the inclusion of observationally unresolved satellite galaxies. The maximum stellar masses of halos are log10(M*/M⊙) = 7.32 (7.05), 8.10 (7.88), 8.43 (8.24), and 8.78 (8.59) at z = 14.99, 11.98, 10.98, and 10.00 in the TNG100-1 (TNG50-1) run, respectively. Thus, using the most massive halo instead of the most massive subhalo in terms of its stellar mass does not significantly affect the results of the TNG runs. In the EAGLE runs, ID 1514/ID 14924 and CEERS-1749 are compared with the GSMF at z = 9.99 and 15.13, respectively. Unfortunately, the EAGLE database does not list snapshots that match the observed redshifts of GL-z11 and GL-z13. Thus, the EAGLE analysis only focuses on ID 1514, ID 14924, and CEERS-1749. The RefL0050N0752 and RefL0100N1504 snapshots contain galaxies reaching up to log10(M*/M⊙) ≈ 7.70 at z = 15.13 and log10(M*/M⊙) ≈ 9.06 at z = 9.99, which is ≈ 79 and ≈ 5.5 times lower than the observed stellar mass of CEERS-1749 and ID 1514, respectively. The stellar mass of ID 14924 is 74 times higher than that of the most massive simulated galaxy at z = 9.99. The evolution of the stellar mass growth is summarized in Figure 2 and Table 1, which show the maximum stellar mass of a subhalo in dependence of redshift for the different simulation runs. The observed high-redshift galaxy candidates are, by more than one order of magnitude, more massive than the most massive simulated galaxies in the ΛCDM framework. The inferred stellar mass of observed galaxies is sensitive to the adopted star formation history (SFH) and initial mass function (IMF).
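The factors quoted above follow directly from differences in log10 mass (dex). A small sketch using the ID 14924 numbers from the text (the helper name is ours):

```python
def mass_ratio(log10_m_obs, log10_m_sim):
    """Linear factor between two masses given as log10(M*/Msun)."""
    return 10.0 ** (log10_m_obs - log10_m_sim)

# Numbers quoted in the text: ID 14924 with log10 M* = 10.93 (Labbe et al. 2022)
# vs. the most massive EAGLE galaxy at z = 9.99 with log10 M* ~ 9.06.
factor = mass_ratio(10.93, 9.06)   # ~74, as stated in the text
dex_gap = 10.93 - 9.06             # > 1 dex: "more than one order of magnitude"
```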
In the following sections, we first investigate if the tension of the stellar mass buildup in the early universe reported here can be resolved if different SFHs of the observed galaxy candidates are assumed. Secondly, the effect of a varying IMF on the observed stellar masses is discussed.

The minimum inferred galaxy masses for different star formation histories

In order to calculate the minimum possible mass that the observed high-redshift galaxy candidates CEERS-1749, GL-z11, GL-z13, and ID 1514 can have for an invariant IMF, different sets of models of galaxies with different SFHs are constructed. Using stellar population synthesis models, we let the age of the modeled galaxies vary in the range of ≈ 4−400 Myr to investigate their UV-band (1500 Å) stellar mass (including remnants)-to-light ratio, M*/L_UV, for an invariant canonical IMF (Kroupa 2001; Kroupa et al. 2013). The lower limit is set by the implemented stellar evolution tracks of the Padova group (Marigo & Girardi 2007; Marigo et al. 2008, see also Zonoozi et al. 2019) and is roughly comparable to the mean stellar age (≈ 1−20 Myr) of observed high-redshift galaxies (5 ≲ z_spec ≲ 8) as found by Carnall et al. (2022). Since the spectral energy distribution fitting analysis of high-redshift star-forming galaxies shows more consistency with increasing SFHs, here we adopt the delayed-τ model (e.g., Kroupa et al. 2020a),

ψ(t) = ψ_0 t exp(−t/τ),   (1)

and an exponentially increasing SFH,

ψ(t) = ψ_0 exp(t/τ),   (2)

where ψ(t) is the star formation rate (SFR), t is the age since star formation started, ψ_0 is the normalization parameter, and τ is the e-folding time scale. The mass and light of a galaxy are calculated by an integral over the SFR. Note that, using the invariant IMF, at a given t the mass-to-light ratio is independent of the total mass that is converted into stars. This is because of the cancellation of the normalization parameter, ψ_0.

Figure 2.
The most massive subhalo in terms of the stellar mass in dependence of redshift in the TNG50-1 (red), TNG100-1 (green), and EAGLE (blue) runs. The black error bars are the observed galaxy candidates by JWST as listed in Table 1. The gray error bar shows GN-z11 (Oesch et al. 2016).

The resulting mass-to-light ratios are shown in the left panels of Figure 3. We assume that galaxies start forming stars 200 Myr after the Big Bang with an averaged metallicity of [Fe/H] = −2, and that the mass loss from galaxies occurs only through stellar evolution in the form of ejected gas. Since the stellar loss due to dynamical evolution is significant only for systems with an initial stellar mass of less than M* = 10^6 M⊙, no stars are lost through the dynamical evolution of the galaxies. As can be seen, the minimum mass-to-light ratio that can be obtained for these galaxies assuming different SFHs is M*/L_UV ≈ 3.2 × 10^−8 M⊙/L⊙. These lower stellar mass limits just resolve the discrepancy for GL-z11, GL-z13, and ID 1514 (see the vertical dashed-dotted lines in Figure 1). In the case of CEERS-1749, the maximum stellar mass obtained in the ΛCDM simulations is ≈ 6.8 times lower than its inferred lower limit.

Galaxy masses for a varying IMF

In the previous section we applied an invariant IMF, but recent observations (e.g., Schneider et al. 2018; Zhang et al. 2018; Senchyna et al. 2021) suggest that the mass distribution of a stellar population may depend on its local star-forming environment. Especially metal-poor Population III stars are expected to follow a top-heavy IMF. A theoretical framework to describe the stellar population of an entire galaxy is the integrated galactic initial mass function (IGIMF) theory, which adds up all the IMFs of star-forming regions (embedded clusters) within a galaxy (Kroupa & Weidner 2003; Weidner & Kroupa 2006). The resulting galaxy-wide IMF (gwIMF) systematically varies with the global SFR and averaged metallicity of the galaxy, i.e.
the gwIMF becomes top-heavy, compared to the canonical IMF, for galaxies with SFR ≳ 1 M⊙ yr^−1 and metallicities [Fe/H] < 0 (see figure 2 of Jeřábková et al. 2018). In order to study the effect of a varying IMF on the high-redshift galaxy candidates, we calculate the stellar populations of galaxies assuming the latest IGIMF formalism by Yan et al. (2021) and using an IGIMF Fortran code developed by Akram Hasani Zonoozi. For simplification, we assume that all galaxies start to form stars 200 Myr after the Big Bang, with a constant SFR over time and an average metallicity of [Fe/H] = −2. Since the gwIMF is time dependent, we require that realistic IGIMF models match the observed M_UV within ±1 mag in the 1σ interval of the observed redshift of the corresponding galaxy candidate. The left panel of Figure 4 shows the time evolution of the absolute UV-band magnitude for IGIMF models that fulfill these constraints for the observed galaxy candidates CEERS-1749, GL-z13, GL-z11, and ID 1514. These models have constant SFRs in the range of ≈ 2−30 M⊙ yr^−1 and thus a top-heavy IMF. The stellar masses at the observed redshifts of the galaxy candidates are shown in the right panel of Figure 4 and are systematically lower than the stellar masses derived by Adams et al. (2022), Naidu et al. (2022b), and Naidu et al. (2022a) because of the top-heavy gwIMF compared to the canonical IMF.

DISCUSSION AND CONCLUSION

While the redshifts of the galaxy candidates need to be spectroscopically verified, we use these JWST observations to quantify how quickly galaxies form in the currently most advanced cosmological simulations. Using state-of-the-art ΛCDM simulations of the IllustrisTNG and EAGLE projects, we showed that the stellar mass buildup is much more efficient in the early universe than predicted by these ΛCDM models (see also, e.g., Boylan-Kolchin 2022; Lovell et al. 2022). In particular, the stellar masses of ID 1514 (Adams et al. 2022), ID 14924 (Labbe et al.
2022), GL-z11, GL-z13 (Naidu et al. 2022b), and CEERS-1749 (Naidu et al. 2022a) analyzed in Section 3 are higher by about one order of magnitude than those of the most massive galaxies formed in these simulations. In particular, massive high-redshift candidates appear more frequently at z ≳ 10 than expected in the ΛCDM framework. For example, Boylan-Kolchin (2022) argued that a volume of ≈ 10^8 cMpc^3 is required to explain ID 14924 with log10(M*/M⊙) = 10.93 at z = 9.92 (Labbe et al. 2022). However, the survey covers only ≈ 10^5 cMpc^3 at z = 10 ± 1 (see section 3 of Boylan-Kolchin 2022). The TNG100-1 and RefL0100N1504 simulations have a box volume of ≈ 10^6 cMpc^3, suggesting that the absence of massive galaxies in these runs is not caused by a too small simulation volume. The discrepancy between the observed and simulated stellar mass buildup could have several causes. First of all, high photometric redshifts can emerge due to dust reddening. For example, Zavala et al. (2022) demonstrated that Lyman-break galaxy candidates at z_phot ≳ 12 can resemble dusty star-forming galaxies at z ≈ 6−7. Secondly, it could be that the high observed stellar masses are caused by an erroneous calibration of JWST. Furthermore, it has been argued that star formation could be much more efficient in the early universe (e.g., Naidu et al. 2022b; Harikane et al. 2022; Mason et al. 2022). Assuming that the ΛCDM model is the correct description of the Universe, this would mean that the underlying galaxy formation and evolution ISM models of the EAGLE and IllustrisTNG runs must be improved in order to reproduce such galaxies. For example, these simulations assume that gas above a given density threshold (e.g., Schaye et al. 2015; Nelson et al. 2019b) is able to form stars, which is likely a too simplified implementation, especially for describing high-redshift galaxies.
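The density-threshold recipe criticized above can be illustrated with a toy sketch: gas above a hydrogen number-density threshold forms stars at a fixed efficiency per local free-fall time. All numbers below (threshold, efficiency, cell masses) are illustrative placeholders, not the actual EAGLE/TNG calibrations:

```python
import math

G = 6.674e-11  # gravitational constant, m^3 kg^-1 s^-2

def toy_sfr(cells, n_thresh=0.1, eff=0.01):
    """Toy version of a threshold-based star formation recipe:
    gas cells above a hydrogen number density threshold n_thresh [cm^-3]
    form stars at efficiency `eff` per local free-fall time.
    Each cell is (gas_mass [Msun], n_H [cm^-3]); returns SFR in Msun/yr."""
    m_H = 1.6726e-27   # hydrogen mass, kg
    sfr = 0.0
    for m_gas, n_H in cells:
        if n_H < n_thresh:
            continue                                  # below threshold: no SF
        rho = n_H * 1e6 * m_H                         # kg m^-3
        t_ff = math.sqrt(3.0 * math.pi / (32.0 * G * rho))   # free-fall time, s
        sfr += eff * m_gas / (t_ff / 3.156e7)         # convert s -> yr
    return sfr

# only the first (denser) cell contributes to the star formation rate
rate = toy_sfr([(1e6, 0.5), (1e6, 0.01)])
```

The sharp on/off behavior at the threshold is exactly the kind of simplification the text refers to; real high-redshift star formation is unlikely to switch on this cleanly.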
Boylan-Kolchin (2022) showed that even a 100% star formation efficiency in ΛCDM would not be enough to explain the stellar mass density measured by Labbe et al. (2022). Another possibility is that the IMF systematically varies with the galactic properties. The IllustrisTNG and EAGLE simulations and the analysis of Sections 3 and 3.1 assume an invariant IMF, but it is expected that metal-poor star-forming stellar populations follow a top-heavy IMF. Using the IGIMF theory, we calculated the gwIMF of the observed galaxy candidates in dependence of the metallicity and SFR of the forming galaxies, resulting in lower stellar masses compared to an invariant canonical IMF. For this, the IGIMF theory has to be included in cosmological simulations (see, e.g., Ploeckinger et al. 2014) in order to reach a firm conclusion on whether a top-heavy IMF can resolve the reported tension. Finally, the present findings can also imply that structure formation is much more efficient and/or that the observed universe is even older than predicted by ΛCDM. The existence of these massive galaxies ≈ 300−400 Myr after the Big Bang also questions the hierarchical (bottom-up) structure formation, suggesting instead that late-type galaxies begin to form early through the initial monolithic collapse of rotating post-Big-Bang gas clouds (Wittenburg et al. 2020), while early-type massive galaxies and the associated formation of supermassive black holes form by the monolithic collapse of post-Big-Bang gas clouds with little net rotation (e.g., Kroupa et al. 2020b; Wittenburg et al. 2020; Yan et al. 2021; Eappen et al. 2022).

Figure 3. Left panels: cosmic time evolution of the stellar M*/L_UV ratio for stellar populations constructed assuming an invariant canonical IMF and using a delayed-τ (Eq. 1; top panels) and an exponentially increasing SFH (Eq. 2; bottom panels). We assume that star formation starts 200 Myr after the Big Bang (dashed vertical line). The minimum mass-to-light ratio for these galaxies assuming different SFHs is M*/L_UV ≈ 3.2 × 10^−8 M⊙/L⊙. Right panels: cosmic time evolution of the total stellar mass of the galaxy candidates CEERS-1749 (green hatched area), GL-z13 (orange area), GL-z11 (red area), and ID 1514 (blue area) calculated based on the inferred mass-to-light ratios of the left panels. The colored areas cover the stellar mass range for SFHs with τ values between 0.01 Gyr (upper limit) and 200 Gyr (lower limit; see the left panels). The filled circles with error bars show the observed values as quantified by Adams et al. (2022), Naidu et al. (2022a), and Naidu et al. (2022b). The most massive subhalos in terms of stellar mass in the ΛCDM simulations are shown as horizontal lines.

Evidence for an enhanced growth of structures has been reported at different astrophysical scales and redshift ranges in the observed Universe. For example, Steinhardt et al. (2016) showed that the observed number density of luminous galaxies at 5 ≲ z ≲ 10 is much higher than predicted by the ΛCDM model (see their figure 1). However, their analysis relies on the stellar-to-halo mass relation from Leauthaud et al. (2012), measured only at z = 0.2−1, while, e.g., Behroozi et al. (2019) suggest that there is a strong evolution at z ≳ 5. Furthermore, the existence of the massive interacting galaxy cluster El Gordo (ACT-CL J0102-4915; Marriage et al. 2011) at z = 0.87 and the Keenan-Barger-Cowie void (Keenan et al. 2013) at z ≲ 0.07 each individually falsify hierarchical ΛCDM structure formation at more than 5σ (Haslbauer et al. 2020; Asencio et al. 2021). An enhanced growth of structure compared to the ΛCDM paradigm is expected in Milgromian dynamics (Milgrom 1983; Angus 2009; Malekjani et al. 2009; Famaey & McGaugh 2012; Kroupa et al. 2012; Haslbauer et al. 2020; Banik & Zhao 2022).
Assuming the cosmic microwave background as the z = 1100 boundary condition, and because of the reduced power on < 1 Mpc scales compared to ΛCDM (Angus & Diaferio 2011) due to the missing cold dark matter (CDM) component, it may be impossible to form galaxies in the early universe. This work indicates that the currently available most advanced ΛCDM simulations cannot form galaxies as massive as those observed at z_phot ≳ 10. This tension needs to be readdressed for extreme SFHs and/or a top-heavy gwIMF, which would reduce the stellar mass buildup through more intense feedback. Upcoming ultradeep and wider-area JWST observations will shed more light on the number density of such luminous high-redshift galaxies over redshift, which is required to evaluate the significance of the here-reported tension in the stellar mass buildup of high-redshift galaxies in more detail.

We thank an anonymous referee for helpful comments that significantly improved the manuscript. This project was largely conducted at the Charles University in Prague, and we acknowledge the "DAAD-Eastern-European" exchange program for financing visits to the Charles University in Prague, during which we discussed several aspects of the formation and evolution of high-redshift galaxy candidates in the ΛCDM and Milgromian frameworks. We are also grateful to Nils Wittenburg, Nick Samaras, Ingo Thies, and Elena Asencio for helpful discussions on structure formation in Milgromian dynamics.
Molecular Phylogeny and Phenotypic Characterization of Yeasts with a Broad Range of pH Tolerance Isolated from Natural Aquatic Environments

In this study, yeasts with a broad range of pH tolerance were isolated and characterized from natural aquatic environments in Japan. Only a few basic and applied studies of alkali-tolerant yeasts have been reported, despite the unmet industrial needs. First, we surveyed alkali-tolerant yeasts from natural aquatic environments at pH 7.6-9.4. We isolated 35 yeast strains that grew in pH 9.0 medium, belonging to seven genera and nine species: 25 strains (N1-N6, N9, K1, and K3-K19) were Rhodotorula mucilaginosa; one (N7) was Rhodosporidium fluviale; one (N8) was Scheffersomyces spartinae; two (N10 and N13) were Wicherhamonyces anomalus; one (N11) was Cyberlindnera saturnus; one (S1) was Candida sp.; two (S2 and S4) were Candida intermedia; one (S3) was Candida quercuum; and one (K2) was Cryptococcus liquefacience. We examined the effects of pH on the growth of representative yeast strains. Strains K12 and S4 showed high growth at pH 3-10. Strains N7, N8, N10, N11, and S3 showed high growth at pH 3-9. Strains K2 and S1 showed high growth at pH 4-8. All nine of these strains had neutralizing activities from acidic media at pH 3-5 to pH 6-8. We previously isolated acid-tolerant yeasts (Cryptococcus sp. T1 [1] and Candida intermedia CeA16 [2]) from extremely acidified environments; they showed high growth at pH 3-9 and neutralizing activities of acidic media by releasing ammonium ions. Thus, the alkali-tolerant yeasts and acid-tolerant yeasts were found to be similar species, having both high growth over a broad pH range and neutralizing activities of acid media. Previously, we also isolated acid-tolerant, acid-neutralizing yeasts from neutral environments in Yokohama, Japan [3]. How to cite this paper: Urano, N., Shirao, A., Naito, Y., Okai, M., Ishida, M. and Takashio, M.
(2019) Molecular Phylogeny and Phenotypic Characterization of Yeasts with a Broad Range of pH Tolerance Isolated from Natural Aquatic Environments. Advances in Microbiology, 9, 56-73. https://doi.org/10.4236/aim.2019.91005 Received: December 19, 2018; Accepted: January 13, 2019; Published: January 16, 2019. Copyright © 2019 by author(s) and Scientific Research Publishing Inc. This work is licensed under the Creative Commons Attribution International License (CC BY 4.0). http://creativecommons.org/licenses/by/4.0/

Introduction

Various types of yeasts and yeast-like microorganisms are widely spread in nature, and some have been used since 2000 BC for fermented products (e.g., alcoholic beverages and the leavening of bread). Over the past several decades, suitable yeasts for different food processes have been repeatedly isolated and bred separately according to the type of fermentation desired. These efforts revealed that all industrial-use yeasts should have similar characteristics of high fermentative activity and high tolerance under various types of stress. Identification methods for yeasts were developed after the 18th century, and most of the yeasts used in the fermentation industries were found to be the species Saccharomyces cerevisiae. From ancient to modern times, S. cerevisiae has been the most important microorganism species in the history of humans. However, with the progress of the bioethanol industry in recent years, the breeding of novel yeast strains other than S.
cerevisiae with higher fermentative activity under several stress pressures is needed, because various types of waste biomass materials are used as fermentation substrates, and from an economical point of view the concentration of the biomass should be higher. Above all, in bioethanol production, acids or alkalis are used for the hydrolysis of a cellulosic biomass, and the direct fermentation of the hydrolysates by yeasts, without a neutralization step, is desirable. Fermentative yeasts with pH tolerance are thus considered potentially beneficial candidates for efficient bioethanol production. Several studies of acid-tolerant yeasts have been conducted, as S. cerevisiae and other highly fermentative yeasts show high activity in neutral or acid environments (pH 4-7) [4] [5] [6]. Only a few basic and applied studies of alkali-tolerant yeasts have been reported, despite the unmet industrial needs [7]. In previous investigations, we isolated and characterized two strains of acid-tolerant yeasts from extremely acidic environments. One strain, Cryptococcus sp. T1, was from Lake Tazawa in Japan's Akita prefecture, a caldera lake polluted with hydrochloric acid from an upstream hot spring [1]. The other strain, Candida fluviatilis CeA16, was from the Agatsuma River in Japan's Gunma prefecture, which is polluted with sulfuric acid from Mount Kusatsu-Shirane [2]. Both strains T1 and CeA16 showed high growth at pH 3-9. We also previously isolated and characterized 26 strains and 12 species of acid-tolerant yeasts with neutralizing activities of acidic media from neutral environments in the city of Yokohama, Japan [3]. The acid-tolerant and acid-neutralizing yeasts were thus found to exist in both acidic and neutral environments.
In the present study, we attempted to isolate alkali-tolerant yeasts from alkaline environments (pH 7.6-9.4) in Japan's Kanto region. We characterized the yeast strains' growth and fermentation activities, and we constructed the strains' phylogenetic trees and compared them with those of acid-tolerant yeasts.

Collection of Environmental Samples

We surveyed alkaline aquatic environments near metropolitan areas and selected four stations in Japan in order to isolate alkali-tolerant yeasts. In May 2016, we collected water and sediment samples from four aquatic environments in Japan's Kanto region.

Culture Medium

For the cultivation of yeasts in the water and sediment samples, we used a YPD medium consisting of 1.0% w/v yeast extract, 2.0% w/v proteose peptone (Becton Dickinson, Lincoln Park, NJ, USA), and 2.0% w/v D-(+)-glucose (Kokusan Chemicals, Tokyo). For the isolation of yeasts from the environments, 0.01% w/v chloramphenicol (Wako Pure Chemical Industries, Tokyo) was added to the YPD medium to prevent the growth of bacteria. The pH of the media was adjusted with sulfuric acid or sodium hydroxide (Wako). Solid media were prepared by adding 2.0% w/v agar (Kanto Chemicals, Tokyo) to the YPD liquid medium.

Isolation of Alkali-Tolerant Yeast Strains

The water samples were filtered through a 0.45-μm PTFE membrane filter (Advantec, Tokyo), and microorganisms were trapped on the filter. The microorganisms were dispersed into a portion of the filtrate with a mixer; thus, we obtained an approx. 100-fold-concentrated population of the microorganisms in the water samples. To obtain a moderate population of the microorganisms, we diluted the sediment samples 10-fold with physiological saline (0.8% w/v NaCl).
For the first screening of alkali-tolerant yeasts, a 200-μl volume of each preparation was spread on the YPD solid medium containing chloramphenicol at pH 8.0 and incubated at 25˚C. After several days' cultivation, growing yeast-like colonies were picked, and we observed their cells under a light microscope. Cells that were morphologically yeast-like were isolated and stored at −80˚C. For the second screening of alkali-tolerant yeasts, the isolates obtained in the first screening were inoculated on the YPD solid medium at pH 9.0 and incubated at 25˚C. The growing colonies were obtained as candidate alkali-tolerant yeasts, stored at −80˚C, and numbered as members of the yeast collection.

Yeast Identification

The 28S rRNA genes of the isolates in the yeast library were amplified by polymerase chain reaction (PCR) using the forward primer NL-1 (5'-GCATATCAATAAGCGGAGGAAAAG-3'), the reverse primer NL-4 (5'-GGTCCGTGTTTCAAGACGG-3'), and Premix Ex Taq (Takara Bio, Shiga, Japan). The 28S rRNA phylogenetic tree was constructed in the molecular evolutionary genetics analysis (MEGA) tool 6.06 using the maximum likelihood method with 1000 bootstrap resampling replicates. The D1/D2 domain sequences of the 28S rRNA genes of the yeasts were deposited in DDBJ, EMBL, and GenBank.

Fermentation Activities of the Yeasts

Each isolate in the yeast library was inoculated into 10 ml of YPD liquid medium with a Durham tube in a test tube and then anaerobically incubated at 25˚C. After 7 days' cultivation, the fermentative activity of each yeast was examined by the naked eye based on the accumulation of gas (CO2) evolved from the cells in the Durham tube.
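As a small sanity check on the NL-1/NL-4 primer sequences quoted above, their lengths and GC content can be computed directly (a pure-Python sketch; the helper name is ours):

```python
def gc_fraction(seq):
    """Fraction of G and C bases in a DNA sequence."""
    s = seq.upper()
    return (s.count("G") + s.count("C")) / len(s)

# the two primers quoted in the text (5'->3', hyphens from line wrapping removed)
nl1 = "GCATATCAATAAGCGGAGGAAAAG"   # forward primer NL-1, 24 nt
nl4 = "GGTCCGTGTTTCAAGACGG"        # reverse primer NL-4, 19 nt
```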
Yeast Growth Tests

Each strain in the yeast library was precultured in YPD medium at 25˚C for 24 hr. The growing cells were precipitated by centrifugation at 3000 rpm for 5 min and then washed with physiological saline. The centrifugation/washing procedure was conducted three times, and the cell precipitates were obtained. A 100-μg portion of the wet cells was inoculated into 10 ml of YPD liquid medium with a pH value of 3.0, 4.0, 5.0, 6.0, 7.0, 8.0, 9.0, or 10.0. We measured the growth curves of the yeasts at 25˚C for 72 hr using a bio-photorecorder (temperature-gradient incubator, Advantec, Japan).

Measurement of pH in the Yeast Cultures

In the growth tests of the yeasts in the YPD liquid media with initial pH values of 3.0, 4.0, 5.0, 6.0, 7.0, 8.0, 9.0, and 10.0, we measured the change of the pH values in the media after 3 days' cultivation at 25˚C using a LAQUA pH meter (F-71, Horiba Scientific, Fukuoka, Japan).

Isolation and Identification of Alkali-Tolerant Yeasts

Table 1 shows the pH, temperature, and NaCl concentration of the sampling areas and the isolation of alkali-tolerant yeast-like microorganisms. The first screening, by the isolation of growing colonies on the YPD solid medium containing chloramphenicol at pH 8.0, identified 22, 229, 37, and 21 yeast-like isolates from the water and sediment samples taken from Stations #1, #2, #3, and #4, respectively. In the second screening at pH 9.0 of the isolates identified in the first screening, 19, 12, and four strains were obtained from Stations #2, #3, and #4, respectively. Our microscopy observations of the cell morphology suggested that all 35 of the second-screening strains were yeast species with alkali tolerance. We numbered the strains K1-K19 from Station #2, N1-N11 and N13 from Station #3, and S1-S4 from Station #4.
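Growth curves like those measured here are commonly summarized by a specific growth rate computed from two exponential-phase optical-density readings, μ = (ln OD2 − ln OD1)/(t2 − t1). A minimal sketch with hypothetical OD values (the paper reports full curves, not these numbers):

```python
import math

def specific_growth_rate(t1, od1, t2, od2):
    """Specific growth rate mu = (ln OD2 - ln OD1) / (t2 - t1), in 1/h."""
    return (math.log(od2) - math.log(od1)) / (t2 - t1)

# hypothetical exponential-phase OD readings at 12 h and 24 h
mu = specific_growth_rate(12.0, 0.2, 24.0, 1.6)   # 1/h
doubling_time = math.log(2.0) / mu                # hours per doubling
```

Here the culture goes through three doublings (0.2 → 1.6) in 12 h, so the doubling time is 4 h.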
No alkali-tolerant yeasts were isolated from Station #1, even though the pH value of the water at that station was 9.4. At that station, we observed highly concentrated microalgae growing in the pond, the pH of which was 9.4 due to ammonium ions released from the microalgae, and few alkali-tolerant yeasts seemed to be living at this station.

Effect of pH in the Media on the Growth of Alkali-Tolerant Yeasts

Using the representative strains of the nine species described above in Section 3.1, we examined the effect of pH on the growth of the strains. The growth curve of each of the nine strains is provided in Figures 1-9. Two strains (Rhodotorula mucilaginosa K12 and Candida intermedia S4) showed high growth at pH 3-10 (Figure 1 and Figure 2). Five strains (Rhodosporidium fluviale N7, Scheffersomyces spartinae N8, Wicherhamonyces anomalus N10, Cyberlindnera saturnus N11, and Candida quercuum S3) showed high growth at pH 3-9 (Figures 3-7). Two strains (Cryptococcus liquefacience K2 and Candida sp. S1) showed high growth at pH 4-8 (Figure 8 and Figure 9). Thus, all nine species isolated as alkali-tolerant yeasts in this study were found to have both alkali tolerance and acid tolerance.

Changes in the pH Values in the Yeast Cultures

We cultured representative strains of the nine species at initial pH values of 3.0, 4.0, 5.0, 6.0, 7.0, 8.0, 9.0, and 10.0 and then measured the changes of the pH values after 3 days' cultivation (Table 4). At the initial pH 3.0, the pH of three strains (K2, K12, and N11) increased to >4.0 in the media. At the initial pH 4.0, the pH of two other strains (N7 and N11) increased to >7.0 in the media. At the initial pH 5.0, the pH of eight strains (S1, S4, K2, K12, N7, N8, N10, and N11) increased to >7.0 in the media. At the initial pH 6.0, the pH of all nine strains increased to >7.0 in the media. Therefore, all nine strains were observed to have acid-neutralizing activities by releasing ammonium ions (data not shown).
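The neutralization criterion used in this section (a culture started in acidic medium at pH 3-5 ending near neutral, pH 6-8, after 3 days) can be expressed as a small filter over initial/final pH pairs. The pH values below are hypothetical placeholders, not the Table 4 measurements:

```python
# Hypothetical (initial pH -> final pH after 3 days) readings for two strains,
# illustrating the acid-neutralization criterion described in the text.
cultures = {
    "K12": {3.0: 4.2, 4.0: 6.9, 5.0: 7.3, 6.0: 7.4},
    "S1":  {3.0: 3.1, 4.0: 4.0, 5.0: 7.1, 6.0: 7.2},
}

def neutralizes(ph_map):
    """True if any acidic start (pH 3-5) ended in the pH 6-8 window."""
    return any(6.0 <= final <= 8.0
               for start, final in ph_map.items() if 3.0 <= start <= 5.0)

neutralizers = [strain for strain, m in cultures.items() if neutralizes(m)]
```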
Phylogenetic Tree of the pH-Tolerant Yeasts

The nine species of alkali-tolerant yeasts had pH tolerance over a broad range of pH, and their genera and species were very similar to those of the acid-tolerant yeasts that were isolated previously [1] [2] [3]. The yeasts that we used for the construction of the phylogenetic tree are summarized in Table 5. Based on these results, both the acid-tolerant yeasts and the alkali-tolerant yeasts were situated in the same classification position. Of the nine strains, K2 was the sister group of the clade that includes Filobasidium magnum mi-w16. K12 and N7 were in the sister group that includes Rhodotorula sp. n-w29 and si-w12, Leucosporidium golubevii si-w13, and Microbotryozyma collariae sm-w40. The strains S1, S3, S4, N8, N10, and N11 were in the sister group that includes Candida sp. om-w46 and h-m7, Meyerozyma guilliermondii mr-w1, Candida parapsilosis h-m7, and Candida oleophila n-w33. Thus, similar yeast species with a broad range of pH tolerance were found to be living in natural aquatic environments over a broad range of pH from alkaline to acidic.

Figure 10. The phylogenetic tree of both the acid-tolerant yeasts (26 strains) and the alkali-tolerant yeasts (nine strains), constructed on the 28S rRNA gene sequences by the maximum likelihood algorithm in MEGA ver. 6.06.
Discussion

We have been studying the characterization and application of aquatic yeasts that could be suitable for bioethanol production, and we have developed several types of bioethanol production systems that use cellulosic biomass along with aquatic yeasts: seaweeds, i.e., Undaria pinnatifida [10] [11] [12], Ulva spp., and Costaria costata [10] [11] [12]; an alien aquatic plant in Japan, Eichhornia crassipes [13] [14]; and paper or wood scrap [11] [15]. In bioethanol production from a cellulosic biomass, acids or alkalis are used for the hydrolysis of the materials, and thus the direct fermentation of hydrolysates containing oligosaccharides by yeasts, without a neutralization step, is desirable; moreover, the concentration of the biomass should be higher from an economical point of view. However, as highly concentrated hydrolysates contain high levels of oligosaccharides and rich salts, and since their pH values are acidic or alkaline, the yeasts should have high fermentative activities under osmotic pressure stress, salt stress, and pH stress. We previously surveyed yeasts from the coast of Tokyo Bay to isolate highly fermentative yeasts under concentrated substrates, and most of the superior yeast strains were Saccharomyces cerevisiae [16] [17] [18] [19]. We later isolated Citeromyces matritensis M37 from Tokyo Bay as a highly salt-tolerant yeast that produces ethanol, and we observed its effective fermentation of salted algae [20].
The remaining problem is the application of pH-tolerant yeasts. We have surveyed pH-tolerant yeast strains in several types of environments. We isolated acid-tolerant yeasts from acidic or neutral streams in Japan; their classification is shown in Figure 10 [21]. Thus, there seemed to be a strong similarity in the types of yeast species in the acidic environments of Japan and Portugal. In the present study, we obtained alkali-tolerant yeasts with pH tolerance very similar to that of the acid-tolerant yeasts. Acid-tolerant yeasts and alkali-tolerant yeasts were thus found to be similar species.

Table 5. The yeasts used for the construction of the phylogenetic tree in Figure 10.

We examined the effect of pH on the growth of representative strains of the nine species. Strains K12 and S4 showed high growth at pH 3-10. Five strains, i.e., N7, N8, N10, N11, and S3, showed high growth at pH 3-9. Strains K2 and S1 showed high growth at pH 4-8. All nine strains had neutralizing activities from acidic media at pH 3-5 to pH 6-8. The alkali-tolerant yeasts and the acid-tolerant yeasts [1] [2] [3] were found to be similar species, to grow well over a broad pH range, and to have neutralizing activities from acid media to pH 6-8. We constructed the phylogenetic trees of the acid-tolerant strains and the alkali-tolerant strains, and all of the strains were situated in the same classification position. Similar yeast species with a broad range of pH tolerance were found to be living in natural aquatic environments at pH values from alkaline to acidic. Most importantly, Candida intermedia S4 showed high growth at pH values from 3.0 to 10.0 and had fermentative activity, and it seems to be a candidate for bioethanol production from cellulosic biomass.
Kanto region: Station #1 (a pond on the campus of Tokyo University of Marine Science and Technology, Tokyo), Station #2 (the coast of Kasairinkai Park, Tokyo), Station #3 (the coast of Senbongi Park in Shizuoka Prefecture, Japan), and Station #4 (the coast of the Shioiri River in Tateyama, Chiba Prefecture, Japan). All samples were collected in sterile plastic tubes. We measured the samples' temperature, pH values, and NaCl concentrations. The samples were immediately transferred to Tokyo University of Marine Science and Technology and stored at <4˚C.

The C. intermedia strains had high growth activities over the broad pH range of 3.0 to 10.0, and strain S4 also had fermentation activity. Saito et al. reported that C. intermedia 4-6-4T2, an acid-tolerant mutant of parent strain C. intermedia 10601, was obtained by repeated fermentation and cultivation under the addition of acetic acid. They observed effective ethanol production by C. intermedia 4-6-4T2 from hemicellulose hydrolysate containing both glucose and xylose.

Figure 1. The growth of Rhodotorula mucilaginosa K12 in YPD medium at initial pH values ranging from 3.0 to 10.0. The values are the mean of triplicate cultures. The standard deviation (SD) values were omitted in order to clarify the growth curves.

Figure 2. The growth of Candida intermedia S4 in YPD medium at initial pH values ranging from 3.0 to 10.0. The values are the mean of triplicate cultures. The SD values were omitted in order to clarify the growth curves.

Figure 3. The growth of Rhodosporidium fluviale N7 in YPD medium at initial pH values ranging from 3.0 to 10.0. The values are the mean of triplicate cultures. The SD values were omitted in order to clarify the growth curves.

Figure 4. The growth of Scheffersomyces spartinae N8 in YPD medium at initial pH values ranging from 3.0 to 10.0. The values are the mean of triplicate cultures. The SD values were omitted in order to clarify the growth curves.
Figure 5. The growth of Wickerhamomyces anomalus N10 in YPD medium at initial pH values ranging from 3.0 to 10.0. The values are the mean of triplicate cultures. The SD values were omitted in order to clarify the growth curves.

At the initial pH 3.0, the pH of three strains (K2, K12, and N11) increased to >4.0 in the media. At the initial pH 4.0, the pH of two other strains (N7 and N11) increased to >7.0 in the media. At the initial pH 5.0, the pH of eight strains (S1, S4, K2, K12, N7, N8, N10, and N11) increased to >7.0 in the media. At the initial pH 6.0, the pH of all nine strains increased to >7.0 in the media. Therefore, all nine strains were observed to have neutralizing activities.

Figure 6. The growth of Cyberlindnera saturnus N11 in YPD medium at initial pH values ranging from 3.0 to 10.0. The values are the mean of triplicate cultures. The SD values were omitted in order to clarify the growth curves.

Figure 7. The growth of Candida quercuum S3 in YPD medium at initial pH values ranging from 3.0 to 10.0. The values are the mean of triplicate cultures. The SD values were omitted in order to clarify the growth curves.

Figure 8. The growth of Cryptococcus liquefaciens K2 in YPD medium at initial pH values ranging from 3.0 to 10.0. The values are the mean of triplicate cultures. The SD values were omitted in order to clarify the growth curves.

Figure 9. The growth of Candida sp. S1 in YPD medium at initial pH values ranging from 3.0 to 10.0. The values are the mean of triplicate cultures. The SD values were omitted in order to clarify the growth curves.

Table 3 summarizes the fermentation activities of the 35 strains of alkali-tolerant yeasts.

Table 1. pH, temperature, and NaCl concentration of sampling areas and isolation of alkali-tolerant yeasts.
Station #1: A pond at Tokyo University of Marine Science and Technology in Tokyo, Japan; Station #2: The coast at Kasairinkai Park in Tokyo, Japan; Station #3: The coast at Senbongi Park in Shizuoka Prefecture, Japan; Station #4: The coast of the Shioiri River in Tateyama of Chiba Prefecture, Japan.

Table 2. Identification results for 35 strains of alkali-tolerant yeasts.

Table 4. Change of pH after 3 days' cultivation of alkali-tolerant yeasts.

Gadanho et al. described the yeast diversity in the extreme acidic environments of the Iberian Pyrite Belt, and they tested the yeast community in terms of high, intermediate, and low environmental stress.
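The neutralizing-activity criterion used throughout these results (a culture that shifts an acidic medium, initial pH 3 - 5, to pH 6 - 8 after cultivation) can be expressed as a one-line predicate. The sketch below is purely illustrative: the helper name and the example pH values are ours, not from the paper.

```python
# Illustrative predicate (hypothetical helper, not from the paper) for
# the "neutralizing activity" criterion used in the text: an acidic
# medium (initial pH 3 - 5) is shifted to pH 6 - 8 after cultivation.

def neutralizes(initial_ph, final_ph):
    return 3.0 <= initial_ph <= 5.0 and 6.0 <= final_ph <= 8.0

# Hypothetical records of (initial pH, pH after 3 days) per strain:
cultures = {"K12": (5.0, 7.4), "S1": (5.0, 7.1), "X": (5.0, 5.2)}
neutralizers = [s for s, (i, f) in cultures.items() if neutralizes(i, f)]
# -> ["K12", "S1"]
```

In practice the thresholds would be read off a table like Table 4 (initial vs. final pH per strain); the predicate simply encodes the verbal criterion.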
A new type of highly-vaporized microtektite from the Transantarctic Mountains

We report on the discovery of microtektites (microscopic impact glass spherules) in a glacial moraine near Larkman Nunatak in the Transantarctic Mountains, Antarctica. The microtektites were identified based on their physical and chemical properties. Major and trace element compositions of the particles suggest that they may be related to the Australasian strewn field. This would further extend the current strewn field ~800 km southward. Depletion in volatiles and enrichment in refractory elements in Larkman Nunatak microtektites fit the volatilization trend defined by Australasian microtektites, suggesting that they may represent a new highly vapor-fractionated end-member thereof. This observation is supported by their low vesicularity and absence of mineral inclusions. This discovery has significant implications for the formation of microtektites (i.e. their evolution with respect to the distance from the source crater). Finally, the discovery of potentially old (i.e. 0.8 Ma) microtektites in moraine has implications for the stability of the East Antarctic Ice Sheet in the Larkman Nunatak area over the last ~1 Ma and, as a consequence, the high efficiency of such moraines as traps for other extraterrestrial materials (e.g. micrometeorites and meteoritic ablation debris).

INTRODUCTION

Microtektites are the microscopic counterpart of tektites, which are glass objects resulting from the melting and vaporization of the Earth's crust during hypervelocity impacts of extraterrestrial bodies (Glass, 1990; Koeberl, 1994; Artemieva, 2008; Glass and Simonson, 2013). They are usually scattered over regions distal to impact craters called strewn fields (e.g., Glass and Simonson, 2013). To date, four major strewn fields have been discovered on the Earth's surface (i.e. Australasian, Central European, Ivory Coast and North American; Glass and Simonson, 2013).
The Australasian strewn field is characterized by its large geographical extent, at least an order of magnitude greater than that of other strewn fields (i.e. 14,000 km; Fig. 1; Folco et al., 2008; Glass and Simonson, 2013), and its relatively young age (0.8 Ma; Izett and Obradovich, 1992). Despite its recent formation, the source crater of this strewn field has yet to be found. Several studies based on the distribution and/or geochemical properties of tektites and microtektites suggest that the source crater may be located in South East Asia, and probably in Vietnam (e.g., Glass and Pizzuto, 1994; Lee and Wei, 2000; Ma et al., 2004; Glass and Koeberl, 2006; Prasad et al., 2007; Folco et al., 2010a). Australasian microtektites have been found in deep sea sediments of the Indian and Pacific Ocean (hereafter AUS/DSS; e.g., Prasad and Sudhakar, 1999; Glass et al., 2004) and more recently on top of nunataks of the Transantarctic Mountains, Victoria Land, Antarctica (hereafter AUS/TAM; Folco et al., 2008; Folco et al., 2009). The current southernmost limit of the strewn field has been established after the discovery of Australasian microtektites in glacial sediment collected at a low relief crest next to Allan Hills, Victoria Land, Antarctica, which is situated approximately 11,000 km away from the hypothetical source crater location. Here we describe the discovery of microtektites in glacial moraine collected next to the Larkman Nunatak in the Transantarctic Mountains, Antarctica. We first describe their geochemical affinities with Australasian microtektites (both AUS/DSS and AUS/TAM), suggesting that these two materials are paired.
Subsequently, we will show that the microtektites from Larkman Nunatak expand the volatilization trends observed within AUS/DSS and AUS/TAM and, as a result, represent a new highly volatile-depleted end-member for these trends.

Samples

The samples were recovered in 2006 by one of us (MG) from a glacial moraine near Larkman Nunatak (hereafter LKN; 85°46′S, 179°23′E), along with hundreds of micrometeorites (Fig. 1b, c and d; Van Ginneken et al., 2016). At the time of recovery, the moraine was covered by a 4 cm thick snow cover. The moraine is oriented East-West and extends ca. 1.5 km with a width of 700 m. It rises up to 30 m above the surrounding meteorite-rich blue ice and is separated from the nunatak by a depression up to 500 m wide. Samples were collected from the southern edge of a boulder ridge approximately 40 m into the moraine and located approximately half way through the moraine along an East-West traverse. Detailed information on the bedrock and lithology of the moraine is provided in Van Ginneken et al. (2016). In the laboratory, 250 g of moraine samples were first washed in water and hydrogen peroxide to remove evaporite incrustations that prevented the identification of microtektites. They were subsequently dried and size separated using 106, 250, 425, 850 and 2000 µm sieves. Glacial sediment >2000 µm in size was not included. Fifty-two microtektite-like particles >106 µm in size were subsequently hand-picked from the sieved material under a stereomicroscope. Samples were identified on the basis of their spherical shape, pale yellow color and transparency.

Petrography and major element analyses

The microtektites were first mounted on clear adhesive tape and observed using a LEO 1455 environmental Scanning Electron Microscope (SEM) at the Imaging and Analysis Centre (IAC) of the Natural History Museum (NHM), London, United Kingdom, in order to gather information on their external features.
Subsequently, a set of 13 microtektites were embedded in epoxy, sectioned, polished and carbon coated at the NHM. The remaining 39 particles were consumed in unsuccessful Ar-Ar analyses. The major element composition of the microtektites was determined using a Cameca SX100 electron microprobe at the IAC, equipped with five wavelength dispersive spectrometers. Bulk compositions were calculated by averaging four point analyses for each particle. A defocused beam 10 µm in size was used in order to reduce the loss of volatile elements. Operating conditions were an accelerating voltage of 20 kV and a 20.0 nA beam current. A number of synthetic and natural standards were used for instrumental calibration. Standards included but were not limited to: forsterite (Mg2SiO4) (San Carlos olivine) for calibration of Mg, hematite (Fe2O3) for Fe calibration, wollastonite (CaSiO3) for Si and Ca calibration and corundum (Al2O3) for Al calibration. The built-in PAP algorithm (e.g., Pouchou and Pichoir, 1991) was used for correction. The detection limits (in wt.%) are: Si = 0.01; Ti = 0.03; Al = 0.01; Fe = 0.03; Mn = 0.03; Mg = 0.01; Ca = 0.03; Na = 0.03; K = 0.04.

Trace element analyses

Trace element compositions of 11 microtektites were determined by Laser Ablation Inductively Coupled Plasma Mass Spectrometry (LA-ICP-MS) at the IAC. The instrument was an Agilent 7500 ICP quadrupole mass spectrometer coupled with an ESI NWR193 ArF excimer laser source. The laser was operated at a repetition rate of 10 Hz, a spot size of 45 µm, and an energy of 3.2 mW. Signals for the analytical masses reported in Table 3 were acquired in peak hopping mode with 10 ms dwell time. Analyses consisted of the acquisition of a 30 s background signal and a one minute ablation signal. Data reduction was performed with the software LamTrace (Jackson, 2008). NIST SRM 612 (Hinton, 1999) and 43Ca were adopted as external and internal standards, respectively.
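Internal-standard quantification of this kind is commonly reduced with a simple ratio relation: the analyte/43Ca signal ratio measured on the sample is scaled by the same ratio on the external standard and by the independently known Ca contents of sample and standard. The sketch below is illustrative only (it is not the authors' LamTrace workflow, and all names and numbers are hypothetical).

```python
# Illustrative internal-standard reduction (hypothetical sketch, not the
# authors' LamTrace workflow). Signals are background-corrected count
# rates; 43Ca serves as the internal standard, the external standard
# plays the role of NIST SRM 612.

def quantify(i_sam, ca_sam, i_std, ca_std, c_i_std, c_ca_sam, c_ca_std):
    """Concentration of element i in the sample.

    i_sam, ca_sam:       analyte and 43Ca count rates on the sample
    i_std, ca_std:       the same on the external standard
    c_i_std:             analyte concentration in the standard
    c_ca_sam, c_ca_std:  Ca content of sample (e.g. from EPMA) and standard
    """
    return c_i_std * (i_sam / ca_sam) / (i_std / ca_std) * (c_ca_sam / c_ca_std)

# Made-up numbers: a sample whose analyte/Ca signal ratio is twice that
# of the standard, at equal Ca content, has twice the concentration.
conc = quantify(2000.0, 2000.0, 1000.0, 2000.0, 10.0, 5.0, 5.0)  # -> 20.0
```

The point of the Ca term is that it corrects for differences in ablated mass and matrix between sample and standard, which is why an independently measured internal-standard concentration is required.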
Precision and accuracy were assessed via repeated analysis of BCR-2g, resulting in values better than 7% and ±10%, respectively, at the µg/g concentration level. Mean detection limits at 45 µm spot size for the quadrupole instrument are reported in Table 3.

Overall description

The 52 samples are transparent glass spherules (Fig. 2). The color of all spherules is pale yellow and in all cases their shape has a high degree of sphericity. Only one microtektite exhibits a bubble 10 µm in diameter in its interior (Fig. 2c). The surface of most particles is smooth and featureless (Fig. 3a), but 8% of the particles show weathering pits (Fig. 3b) identical to those observed on the surfaces of V-type (i.e. glassy) cosmic spherules extracted from the same glacial moraine (Van Ginneken et al., 2016). The particles also lack the microscopic impact craters reported on an Australasian microtektite (Prasad and Sudhakar, 1996). SEM backscattered images of sectioned samples show constant Z-contrast, suggesting that their chemical compositions are homogeneous (Fig. 3c and d). The size of the samples varies between 107 and 388 µm. The size distribution of the particles (Fig. 4) is normal, except for a depletion at around 200 µm that is probably a statistical bias owing to low count statistics. Size fractions of glacial moraine larger than 400 µm were searched for microtektites but none were found, suggesting this is their maximum size limit in the deposit.

Bulk chemistry

The major element bulk compositions of 13 samples are reported in Table 1. Major element concentrations vary from one particle to another, but compositional trends are observed. Most major oxides are inversely correlated with SiO2 (Fig. 5), which shows concentrations ranging from 43.7 to 64.5 wt%. MgO, Al2O3, CaO and TiO2 range from 4.36 to 11.9, 18.9 to 35.1, 3.27 to 9.35 and 1.04 to 1.79 wt%, respectively. Conversely, FeO is positively correlated with SiO2 and ranges from 1.06 to 3.95 wt%.
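Correlations of this Harker-diagram kind are straightforward to check numerically from Table 1-style data. The following sketch uses synthetic values (not the measured compositions) to compute a Pearson coefficient for one oxide pair:

```python
# Illustrative check of a Harker-type correlation (synthetic numbers,
# not the Table 1 measurements): Pearson's r between SiO2 and an oxide.
import math

def pearson(x, y):
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

sio2 = [44.0, 50.0, 56.0, 62.0]    # wt%, synthetic
al2o3 = [35.0, 30.0, 25.0, 20.0]   # synthetic: decreases as SiO2 rises
r = pearson(sio2, al2o3)           # -> approximately -1 (inverse correlation)
```

A negative r for most oxides against SiO2, and a positive r for FeO, would reproduce the qualitative trends described above.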
The Na2O and K2O contents also slightly increase with SiO2 and range from 0.07 to 0.39 and 0.10 to 0.81 wt%, respectively. When compared to known microtektite populations classified according to Glass et al. (2004), only four samples have major element compositions overlapping those of normal AUS/DSS and AUS/TAM microtektites (Fig. 5; i.e. "normal microtektites" have major oxide compositions similar to Australasian tektites; Glass et al., 2004). The nine remaining particles show silica contents significantly lower than those of normal AUS/DSS and AUS/TAM microtektites. Conversely, an enrichment in Al2O3, TiO2 and CaO is observed. About half of the particles show MgO contents that overlap the normal AUS/DSS and AUS/TAM microtektite compositional fields, the rest being enriched. FeO is depleted in only four samples. The low Na2O content overlaps that of normal AUS/TAM microtektites in all particles except for particle #LK06-1159, in which it is significantly depleted at 0.07 wt%. Eight samples show K2O that is lower than in microtektites from other collections. Our samples show major element compositions that plot between the compositional field of normal AUS/DSS and AUS/TAM microtektites and a high-Al AUS/DSS microtektite. Two similar high-Al AUS/TAM microtektites have major element compositions overlapping those of our samples. Comparison with differentiated cosmic spherules (i.e. having non-chondritic chemical compositions) shows that our samples plot in distinct compositional fields, especially when considering FeO and Al2O3, which show systematically higher and lower values, respectively (Fig. 5). Table 2 lists the trace element compositions of 11 samples, determined by LA-ICP-MS. The major and trace element compositions of our samples,
AUS/DSS and AUS/TAM microtektites and of the Upper Continental Crust (UCC) were normalized to CI chondrites (Fig. 6).

Fig. 4. The microtektites were collected in the 106-2000 µm size fraction, so the upper limit of this diagram is the actual upper size limit of the microtektites collected. On the other hand, the lower limit is mainly due to the smallest mesh size used for sieving and may not represent the lower size limit of the microtektites.

The geochemical patterns of the samples are broadly similar to those of the UCC and of AUS/DSS and AUS/TAM microtektites. However, their chemical compositions exhibit variations from the UCC common to AUS/DSS and AUS/TAM microtektites. The refractory elements Sc, Cr, Y, Zr, REE, Hf and Th are consistently enriched with respect to the UCC, whereas other refractory and some moderately volatile elements (Li, Be, Mg, Al, Si, Fe, Ca, Mn, Nb, Ba and Ta) plot close to the UCC, except for Fe in particle #LK06-1159, which is depleted. Conversely, the volatile to highly volatile elements Na, K, Zn, Rb, Sr and Cs are significantly depleted compared to the UCC.

A new southernmost extension to the Australasian microtektite strewn field

The glassy spherules collected in the glacial moraine near LKN are identified as microtektites on the basis of several criteria: (1) their pale-yellow color allows their distinction from V-type cosmic spherules found within the same sediment, which usually exhibit darker colors (Genge et al., 2008; Van Ginneken et al., 2016). Furthermore, this pale-yellow color is typical of normal AUS/DSS and AUS/TAM microtektites (Folco et al., 2009).
However, V-type cosmic spherules exhibit a wide range of color and transparency (Genge et al., 2008), so this criterion should be used in conjunction with the following criteria; (2) their major and trace element chemistry is broadly similar to that of the Upper Continental Crust (Taylor and McLennan, 1995), which is also typical of the microtektites from the main known strewn fields; (3) their major element chemistry is clearly distinct from that of differentiated cosmic spherules (Taylor et al., 2007; Cordier et al., 2011; Cordier et al., 2012), which represent potential alternatives to microtektites as glassy micro-spherules exhibiting non-chondritic compositions; (4) the total alkali content (Na2O + K2O) of all samples is lower (0.18-1.20 wt%) than in volcanic glasses for a given silica content, and K2O/Na2O is always >1. These geochemical features are characteristic of tektitic material (Koeberl, 1990). All these criteria suggest that the glassy spherules recovered at LKN are indeed microtektites.

Pairing microtektites originating from different sampling locations based on their geochemistry alone can be challenging because of important chemical overlaps between populations originating from different strewn fields (Koeberl, 1990; Glass et al., 2004). The major element compositions of normal Australasian, Ivory Coast and North American microtektites, for example, overlap significantly (Fig. 5).

Table 1. Major element bulk compositions (in oxide wt%) of microtektites from Larkman Nunatak. Bulk compositions were determined using EPMA by averaging four defocused beam (typically 10 µm in diameter) point analyses in each sample.

However, normal AUS/TAM microtektites are enriched in the refractory element Ca and depleted in alkali elements compared to normal AUS/DSS microtektites due to their more distal deposition relative to the hypothetical source crater and resulting increased volatilization (Folco et al., 2010a).
Consequently, AUS/TAM microtektites plot in compositional fields distinct from those of Ivory Coast and North American microtektites while still overlapping those of AUS/DSS microtektites. Out of 13 LKN microtektites, 9 have major element compositions plotting in-between these compositional fields (Folco et al., 2010a), which is clearly distinct from the trends observed for Ivory Coast and, to a lesser extent, North American microtektites.

(Caption fragment: data compared with Folco et al., 2009, 2016 and achondritic V-type cosmic spherules from Taylor et al., 2007; Cordier et al., 2011; Cordier et al., 2012. All values are in wt%.)

Similar chemical trends are observed with trace elements in Harker diagrams of volatile against refractory elements (Fig. 8), with LKN microtektites showing affinities mainly with high-Al Australasian microtektites. More importantly, for identical volatile content, the refractory elements La, Hf and Th in LKN microtektites overlap the compositional fields of AUS/DSS and AUS/TAM microtektites only. The chemical similarity of LKN and Australasian microtektites suggests both were generated in the same impact event and share the same source crater. In particular, the fact that LKN spherules lie on the extension of trends in alkali metals, refractory elements and iron observed only in Australasian microtektites is strong evidence for a genetic relationship. The LKN microtektite data testify to a continued evolution of microtektite compositions through the same chemical processes that control Australasian microtektites, making it very likely that these represent distal ejecta from this impact event. Until now, Allan Hills has been the southernmost extension of the Australasian strewn field; however, the discovery of the LKN microtektites increases it by 800 km, with a new maximum extension from the putative crater of 12,000 km (Fig. 1a). This strengthens the hypothesis by Folco et al.
(2016) that the distribution of Australasian microtektites in Antarctica is continental and not limited to Northern Victoria Land. The maximum diameter of 388 µm of LKN spherules is also compatible with their identity as the southernmost extension of the Australasian strewn field. Although AUS/TAM microtektites are present in the same size range (Folco et al., 2009; Folco et al., 2016), their size distribution peaks at 450 µm, with particles up to 560 µm in diameter at Allan Hills. This decrease in size from AUS/TAM to LKN microtektites further supports the view that the latter were deposited further away from the source location and represent the current maximum extension of the Australasian strewn field.

Expanding the volatilization trend of the Australasian microtektites

A common volatilization trend exists between AUS/DSS and AUS/TAM (Folco et al., 2010a) and is typified by the depletion in volatiles and enrichment in refractory elements with increasing distance from the hypothetical source crater. If the microtektites from Larkman Nunatak are indeed an extension of the Australasian strewn field, translating to a 7% increase in distance from the hypothetical source crater, then it might be expected that the geochemistry of our microtektites will show a notable increase in volatilization with respect to AUS/DSS and AUS/TAM particles. The most refractory major elements Ti and Al (Lodders, 2003) are not affected by volatilization during microtektite formation; thus, increased chemical fractionation due to volatilization at high temperature will not modify the TiO2/Al2O3 ratio, which is representative of the target material. As a result, on a TiO2 versus Al2O3 diagram, Australasian microtektite values plot along a linear trendline (e.g. Folco et al., 2010a). Fig.
7a shows that TiO2/Al2O3 values of LKN microtektites plot along the same trendline as normal AUS/DSS and AUS/TAM microtektites, with a TiO2/Al2O3 ratio of 0.052 (±0.003), similar to the AUS/DSS and AUS/TAM value of 0.056 (±0.005) (Folco et al., 2016).

Fig. 7. Al2O3 vs. TiO2 (a) and Na2O/Al2O3 vs. K2O/TiO2 (b) diagrams showing how Larkman Nunatak microtektites fit into the volatilization trend of AUS/DSS and AUS/TAM microtektites defined by refractory and volatile/refractory elements (Folco et al., 2010a). Symbols for LKN, AUS/DSS and AUS/TAM microtektites as in Fig. 6.

More importantly, eight of our particles significantly extend the trendline towards more refractory compositions, into the compositional field previously defined by the high-Al AUS/DSS and AUS/TAM microtektites. Additionally, the Ti and Al contents of LKN microtektites are inversely correlated with the contents of the alkalis Na2O and K2O (Table 1 and Fig. 7b). Once again, LKN microtektites fall along the trend defined by AUS/DSS and AUS/TAM microtektites, with most of our particles plotting towards more volatile-poor compositions. In particular, three particles exhibit the lowest values on the trendline, indicating that they are severely depleted in volatiles and enriched in refractory elements. This is consistent with the constant depletion in K2O and Na2O content with distance from the hypothetical source crater (Fig. 9), which is particularly pronounced in microtektites recovered in Antarctica. Thus, major elements support the contention that LKN microtektites are highly vapor fractionated and extend the common volatilization trend observed in Australasian microtektites. Regarding trace elements, the LKN microtektites are depleted in volatile (Rb, Cs and Zn) and significantly enriched in refractory trace elements (La, Hf, Th) with respect to normal AUS/DSS and AUS/TAM microtektites (Fig. 8).
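The trendline argument used for Fig. 7a reduces to a simple computation: for a line forced through the origin on a TiO2 vs. Al2O3 diagram, the least-squares slope estimates the TiO2/Al2O3 ratio of the target material. The sketch below uses synthetic compositions (not the measured data), built so the slope comes out near the 0.052 value quoted above:

```python
# Illustrative estimate of a TiO2/Al2O3 trendline slope (synthetic data,
# not the measured compositions). For a least-squares line forced
# through the origin, the slope is sum(x*y) / sum(x*x).

def origin_slope(al2o3, tio2):
    return sum(a * t for a, t in zip(al2o3, tio2)) / sum(a * a for a in al2o3)

al2o3 = [20.0, 25.0, 30.0, 35.0]     # wt%, synthetic
tio2 = [a * 0.052 for a in al2o3]    # synthetic, built with ratio 0.052
slope = origin_slope(al2o3, tio2)    # recovers a ratio of ~0.052
```

Forcing the fit through the origin encodes the assumption, stated in the text, that both oxides behave as refractory elements and scale together with the target composition.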
The only exception is the refractory element U, which is strongly enriched in most normal AUS/DSS microtektites. However, the range of U contents in LKN particles is similar to that of normal AUS/TAM particles and strongly depleted compared to normal AUS/DSS microtektites and the UCC (Figs. 6 and 8). Wasson et al. (1990) explained this depletion in U by volatilization, with possible enhancement by high pO2 and preferential location of U in the reduced target's carbonaceous matter.

Fig. 8. Harker diagrams of volatile against refractory trace elements in Larkman Nunatak microtektites compared to data from the literature (Folco et al., 2009; Folco et al., 2016). All values are in µg/g. Symbols for LKN, AUS/DSS and AUS/TAM microtektites as in Fig. 6.

Consistently with the major elements discussed above, LKN microtektites plot in the same compositional fields as high-Al AUS/DSS and AUS/TAM particles, except for Zn, which is significantly higher in the latter and extremely depleted in LKN particles. Glass et al. (2004) argued that the high Zn content of high-Al AUS/DSS microtektites suggests that they may not represent a severely vapor fractionated end-member of the AUS microtektites, but rather a different population of microtektites, similarly to high-Mg "bottle green" microtektites (Glass, 1972; Folco et al., 2009). However, the severely depleted Zn content in LKN particles suggests that they may not be related to the high-Al microtektites recovered closer to the hypothetical source crater and instead represent a new type of highly volatile-depleted microtektites. The occurrence of only one vesicle amongst the recovered LKN microtektites (Fig. 2) indicates that vesicularity in this population is extremely low. This further supports a link with Australasian microtektites, as vesicularity has been observed to decrease significantly in AUS/DSS and AUS/TAM with distance from the source crater (Folco et al., 2010b), possibly owing to the "bubble-stripping" mechanism that was proposed to explain the loss of volatiles in tektites and microtektites (Melosh and Artemieva, 2004).

Fig. 9 (caption fragment): (data from Cassidy et al. (1969), Glass et al. (2004), Glass and Koeberl (2006) and Folco et al. (2010a)) with distance from the hypothetical source crater in the Indochina region.

No partially dissolved silica-rich lechatelierite-like inclusions, which are commonly observed in AUS/DSS and AUS/TAM microtektites (Glass, 1990; Folco et al., 2009; Folco et al., 2010b), were observed in the LKN spherules. Although their presence cannot be entirely excluded, they are at most extremely rare, which may suggest that in LKN particles any target material was completely melted and digested during microtektite formation, consistent with more intense and/or longer heating compared to AUS/DSS and more particularly AUS/TAM microtektites. Constant depletion of volatiles and enrichment in refractory elements, coupled with an extremely low vesicularity and the likely absence of mineral inclusions, suggest that LKN microtektites may represent the most vapor fractionated and intensely heated end-members of the Australasian strewn field discovered so far.

Accumulation mechanism of microtektites in the moraine

Knowing the accumulation mechanism of the microtektites is critical to understanding whether they were deposited directly within the moraine at the time of formation or rather transported from another locality via the ice and/or wind. A recent study of the accumulation mechanisms of micrometeorites found within the LKN moraine alongside the microtektites suggests that we cannot exclude the possibility that LKN microtektites were first deposited at a different locality before being windblown and trapped within the moraine (Suttle et al., 2015). The relatively high concentration of microtektites within the moraine (200 particles/kg) argues against this hypothesis, as we would expect such a scenario to quickly dilute the particles over large areas.
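The quoted concentration follows directly from the sampling numbers given in the Samples section (52 particles hand-picked from 250 g of moraine); a one-line arithmetic check:

```python
# Arithmetic check of the ~200 particles/kg figure quoted above:
# 52 microtektites were recovered from 250 g (0.250 kg) of moraine.
particles = 52
mass_kg = 0.250
concentration = particles / mass_kg   # -> 208.0 particles per kg
```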
Another possible accumulation mechanism is the recent release of microtektites into the moraine by sublimation of ice present directly underneath it. The age of the surface ice in the vicinity of LKN is unknown, as is the terrestrial age of meteorites recovered from the moraine. However, this mechanism would require ice at least as old as the microtektites, that is, 0.8 Ma old considering that they are related to the Australasian strewn field. Such old ice surfaces have not been identified in the Transantarctic Mountains, which usually exhibit ice younger than 150-200 ka based on modeling of ice flow and exposure ages of meteorites (Grinsted et al., 2003). Two scenarios are therefore envisioned: (1) direct infall of microtektites into the moraine at the time of their formation and (2) sublimation of earlier generations of ice that contained microtektites, before their preservation in the moraine. Similarly to the Australasian microtektites recovered from Allan Hills, the lack of evidence of abrasion on any of the LKN microtektites suggests that the likely accumulation scenario is direct infall at the time of formation. The observation of abundant weathering pits on some LKN microtektites (Fig. 3b) indicates that they have been exposed to liquid water over long periods of time, similarly to V-type cosmic spherules (Van Ginneken et al., 2016), suggesting a long-lasting presence within the moraine, which is in agreement with direct infall.

Implications

This new type of highly volatile-depleted microtektite puts new constraints on the formation and deposition of the Australasian strewn field. As mentioned earlier, LKN microtektites represent the southernmost extension of the Australasian strewn field. This strewn field is characterized by its tri-lobed shape (or butterfly pattern; Glass and Simonson, 2013). Larkman Nunatak microtektites thus currently represent the maximum extension of the main ejecta ray oriented toward the SSE.
This is in contrast with the two lateral lobes that extend WSW and ESE and have only been observed in deep sea cores in the Indian and Pacific Oceans (Glass and Simonson, 2013). Such an extension from the source crater further confirms that the impactor most likely came from a very oblique (i.e. <45°) NNW trajectory (e.g. Artemieva, 2008; Artemieva, 2013). Furthermore, cratering models suggest that ejecta thrown furthest from the source crater along ballistic trajectories were extracted nearest to the target area (Melosh, 1989). Assuming that distal ejecta are normally graded in terms of particle sizes (e.g. Glass and Simonson, 2013), this would suggest that LKN microtektites, which are notably smaller than other Australasian microtektites (including AUS/TAM microtektites), currently represent the ejecta that was ejected closest to the target area (i.e. the contact surface). Thus, they suffered the highest temperature regimes during their formation. This is consistent with their smaller size compared to other Australasian microtektites, high-temperature melts having smaller surface tensions that result in smaller particles (e.g. Artemieva et al., 2002). Higher temperature regimes are also consistent with the severe volatile depletion and the almost complete lack of vesicularity and mineral inclusions of the LKN microtektites. This, in turn, confirms the suggestion by Folco et al. (2010a, 2016) that the volatile content in microtektites decreases with increasing distance from the source crater (Fig. 9). In conclusion, the presence of Australasian microtektites as far as 12,000 km from the hypothetical source crater gives new constraints for future cratering models aiming at finding the actual crater location and/or studying the deposition of the Australasian strewn field (e.g. Artemieva et al., 2002).
Another important implication concerns the age of the newly discovered collection of micrometeorites that were found along with the microtektites in the LKN glacial moraine (Suttle et al., 2015; Van Ginneken et al., 2016). If we consider that LKN microtektites are part of the Australasian strewn field, that would imply that they were deposited in the moraine 0.8 Ma ago. This would make the LKN micrometeorite collection as old as the TAM collection, which is consistent with the similar ranges of weathering states observed in both collections (Van Ginneken et al., 2016). Furthermore, the discovery of Australasian microtektites in sediments recovered from a low relief crest sampled at Allan Hills suggests that the East Antarctic Ice Sheet has been extremely stable in this area over the last 1 Ma. Note that this is consistent with the discovery of numerous meteorites in blue ice fields close to LKN (e.g., Corrigan et al., 2014). More importantly, this would imply that glacial moraines located close to nunataks in the Transantarctic Mountains are efficient sampling sites for micrometeorites and other extraterrestrial materials (e.g., meteoritic ablation debris). For example, similar areas could be surveyed to confirm the continental distribution of meteoritic ablation debris related to a large airburst event 480 ka ago that was discovered in the TAM, Dome Fuji and Dome Concordia (Van Ginneken et al., 2010).

CONCLUSIONS

We report the discovery of microtektites in a glacial moraine near Larkman Nunatak. Geochemical evidence, both in terms of major and trace elements, suggests that they may be related to the Australasian strewn field. This would further extend the Australasian strewn field by 800 km southward. Continuous depletion in volatiles and enrichment in refractory elements in Larkman Nunatak microtektites prolong the volatilization trend defined by Australasian microtektites, suggesting that the former represent a new type of highly vapor fractionated microtektites.
The fact that they are a new end-member to the volatilization trend is further supported by their very low vesicularity (i.e. almost complete loss of volatiles through boiling of the silicate melt) and absence of mineral inclusions (i.e. complete melting of the target material due to extremely high and/or prolonged temperature regimes). This discovery has strong implications for tektite/microtektite formation. Another important implication of the discovery of Australasian microtektites in the Larkman Nunatak area is that the East Antarctic Ice Sheet in this area may have been stable over the last 1 Ma, confirming a similar observation in the Allan Hills area, in the Transantarctic Mountains. Finally, the old age and stability of the glacial moraine at Larkman Nunatak suggest that glacial moraines associated with nunataks in the Transantarctic Mountains may be efficient sampling sites for infalling microscopic extraterrestrial matter, such as micrometeorites and meteoritic ablation debris.
Ultrasonic Vocalizations as a Measure of Affect in Preclinical Models of Drug Abuse: A Review of Current Findings
The present review describes ways in which ultrasonic vocalizations (USVs) have been used in studies of substance abuse. Accordingly, studies are reviewed which demonstrate roles for affective processing in response to the presentation of drug-related cues, experimenter- and self-administered drug, drug withdrawal, and during tests of relapse/reinstatement. The review focuses on data collected from studies using cocaine and amphetamine, where a large body of evidence has been collected. Data suggest that USVs capture animals’ initial positive reactions to psychostimulant administration and are capable of identifying individual differences in affective responding. Moreover, USVs have been used to demonstrate that positive affect becomes sensitized to psychostimulants over acute exposure before eventually exhibiting signs of tolerance. In the drug-dependent animal, a mixture of USVs suggesting positive and negative affect is observed, illustrating mixed responses to psychostimulants. This mixture is predominantly characterized by an initial bout of positive affect followed by an opponent negative emotional state, mirroring affective responses observed in human addicts. During drug withdrawal, USVs demonstrate the presence of negative affective withdrawal symptoms. Finally, it has been shown that drug-paired cues produce a learned, positive anticipatory response during training, and that presentation of drug-paired cues following abstinence produces both positive affect and reinstatement behavior. Thus, USVs are a useful tool for obtaining an objective measurement of affective states in animal models of substance abuse and can increase the information extracted from drug administration studies. USVs enable detection of subtle differences in a behavioral response that might otherwise be missed using traditional measures. 
ACOUSTIC AND FUNCTIONAL CHARACTERISTICS OF RAT ULTRASONIC VOCALIZATIONS (USVs)
Rats produce vocalizations in sonic and ultrasonic frequencies that can be defined by their acoustic and functional properties, as well as their method of production. Sonic vocalizations, ranging from 0-18 kHz, are produced through slow vibrations of the vocal folds and are emitted when rats encounter a threat. Sonic vocalizations have been observed in the laboratory when rats experience pain or when they are handled by experimenters [1,2]. Separately, rats can emit vocalizations in ultrasonic frequencies which are appropriately termed "ultrasonic vocalizations" (USVs; [1]). Among other functions, USVs can signal alarm to conspecifics or indicate reward reception. Rats are capable of producing sonic vocalizations and USVs that can be characterized by the acoustic parameters of the emission. USVs serve as a method of intraspecies communication in rats [3,4]. Given that USVs are acoustically heterogeneous in nature (for examples, see [5]), one might expect that different categories of vocalizations serve somewhat different signaling functions. Researchers have categorized USVs based on the presence and number of pitch modulations during a single emission. For example, individual USVs that are maintained around a single frequency throughout the entirety of emission are referred to as "fixed-frequency calls" (FF). Individual USVs that are observed to change in frequency throughout emission (i.e. > 3-kHz shift in frequency) [6][7][8][9][10] are referred to as "frequency-modulated calls" (FM). Lastly, individual USVs that are observed to have more than one frequency modulation throughout emission are referred to as "trills".
*Address correspondence to this author at the National Institute on Drug Abuse, Neuronal Networks Section, 251 Bayview Boulevard, Baltimore, MD 21224; Tel: 443-740-2708; E-mail: David.Barker@nih.gov
Frequency modulations are often described as being a "sweep" or "jump/step" from one pitch to another, and the frequency modulations that characterize trills can consist of multiple cycles of either frequency modulation type or a combination thereof [5]. The acoustic parameters of individual emissions have been used by researchers to classify USVs into unique call types. USVs naturally dichotomize into two frequency ranges. Specifically, USVs in the 18-33 kHz frequency range, collectively referred to as "22-kHz USVs", can be long or short in duration. Long duration (300-3400 ms; [11]) 22-kHz calls are emitted during aversive stimulation such as social isolation [12], predatory odor exposure [13], foot-shock [14] or anxiogenic drug administration [15]. Furthermore, 22-kHz USVs have been shown to function as alarm cries to conspecifics [2]. Short duration 22-kHz USVs have been observed during the formalin footpad pain test [16], experimenter handling and foot-shock [17,18], and following the injection of muscarinic agonists into the anterior hypothalamus [17,18], which is known to be an aversive stimulus. Interestingly, studies have shown that shorter duration USVs carry more information for conspecifics and elicit the greatest defensive responses [19]. Unlike 22-kHz USVs, USVs in the 38-80 kHz frequency range, collectively termed "50-kHz USVs", are emitted during rewarding and appetitive states such as social contact [20,21], psychostimulant conditioned place preference (CPP; [22]) and copulation [23]. Another difference from 22-kHz calls is that 50-kHz USVs do not exhibit the wide range of durations but are relatively restricted to short durations ranging from 20-80 ms [24]. Thus, given that 22-kHz USVs occur in the presence of aversive stimuli while 50-kHz USVs, virtually without exception, occur in the presence of rewarding stimuli, it is generally accepted that USVs provide insight into opposing affective states of rats [24]. 
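The acoustic criteria above (frequency bands, duration cutoffs, and modulation counts) can be summarized in a small sketch. The band limits and the 300-ms duration boundary are taken from the ranges quoted in the text, but the function itself, its name, and its parameters are purely illustrative assumptions, not an algorithm from the reviewed studies.

```python
def classify_usv(mean_freq_khz, duration_ms, n_freq_modulations):
    """Rough USV classification following the acoustic criteria described
    in the text. Boundary values are illustrative simplifications."""
    # Frequency band: 18-33 kHz -> "22-kHz" calls; 38-80 kHz -> "50-kHz" calls
    if 18 <= mean_freq_khz <= 33:
        band = "22-kHz"
    elif 38 <= mean_freq_khz <= 80:
        band = "50-kHz"
    else:
        return "unclassified"
    # Call type by number of frequency modulations (>3-kHz shifts)
    if n_freq_modulations == 0:
        call_type = "fixed-frequency (FF)"
    elif n_freq_modulations == 1:
        call_type = "frequency-modulated (FM)"
    else:
        call_type = "trill"
    # 22-kHz calls are further split into long vs. short duration
    if band == "22-kHz":
        dur = "long" if duration_ms >= 300 else "short"
        return f"{dur} {band} {call_type}"
    return f"{band} {call_type}"
```

In practice, automated call classification works on spectrogram features rather than pre-extracted scalars; the sketch only makes the category boundaries described above concrete.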
In the present review, we illustrate ways in which USVs emitted by rats have been used as a tool for studying substance abuse. Accordingly, we review studies of inferred affective processing in response to the presentation of drug-related cues, experimenter- and self-administered drug, drug withdrawal, and during tests of relapse/reinstatement. We will focus on data collected from studies of psychostimulants (e.g., cocaine and amphetamine), where a large body of evidence has been collected. Nevertheless, similarities and differences to other drugs are considered throughout. A summary of the data presented in the review can be found in Table 1. It is difficult to objectively define emotional processing, and certain components of affective state changes are internalized and perhaps immeasurable. However, we are often able to detect emotions by observing their externalized manifestations. USVs in the rat are one such manifestation. Other examples might include changes in heart rate, facial expression, or skin temperature. With this in mind, we propose that rats' USVs provide an objective measure of emotional state in preclinical models. Indeed, unlike human self-reports, USVs are passively measured and avoid extraneous influences on measured emotion. Moreover, studies of individual differences in USV production suggest that animals' emotional responses provide predictive power for identifying individuals with phenotypes and genotypes which are at risk for addiction. Lastly, USVs are, in many cases, independent of traditional behavioral measures (e.g., locomotion or lever responding) and therefore provide more information to experimenters. Thus, USVs are a useful tool for inferring affective states in the rat and can be incorporated in research paradigms modeling important human conditions such as drug abuse [5][6][7][8][9][25][26][27][28][29][30][31][32][33][34][35]; depression, fear and anxiety disorders [36][37][38][39][40]; or Parkinson's Disease [41][42][43]. 
Moreover, USVs are a useful tool for examining the neural substrates of reward processing [44][45][46].
AFFECT AND SUBSTANCE DEPENDENCE
Addiction is thought of as a chronically relapsing disorder with three major characteristics: 1) compulsion to seek and use drug, 2) loss of control over drug intake (i.e. excessive drug consumption), and 3) negative emotional states following cessation of drug use [47][48][49]. One goal of current clinical and preclinical models is to focus on understanding these characteristics in order to develop therapies which prevent relapse. Affective responses from human addicts are recorded using retrospective self-reports (e.g., [50]). While the accuracy of these measures is sometimes questioned (e.g., [51]), self-report data suggest that both positive and negative affective states play a role in drug relapse [52][53][54]. Specifically, it has been argued that drug-seeking behavior can be driven by positive recollections of previous drug experiences [52] or by the desire to alleviate a negative affective withdrawal state [55,56]. Moreover, it has been reported that affective states shift from positive to negative just prior to drug use [57] or just after drug administration [50], suggesting that these mood states may be potent contributors to drug-seeking behavior and ultimately to the maintenance of addiction. Notably, this same duality is present in preclinical data and theories derived from animal models of addiction [58][59][60][61]. Some of the negative symptoms self-reported during psychostimulant withdrawal include dysphoria, irritability, paranoia, insomnia, and depression [53,62], while positive symptoms experienced following psychostimulant administration include euphoria, alertness, and increased confidence [63]. Given the relationship between psychostimulant use and both positive and negative affective states, modeling affective states in preclinical studies has recently gained attention [5-9, 25-34, 60, 61, 64-69]. 
These models allow for effective comparisons between data collected from animal models and human self-reports.
AFFECTIVE RESPONSES TO THE ADMINISTRATION OF PSYCHOSTIMULANTS AND OTHER DRUGS
Studies of drug administration are important for determining the reinforcing efficacy of drugs of abuse, secondary and peripheral effects induced by drugs of abuse, and the neural mechanisms which underlie these processes. In preclinical models, USVs emitted by rats are capable of extending current knowledge by providing insights into affective processing. Indeed, preclinical studies have employed various behavioral paradigms to detect preferences or to determine the rewarding or aversive properties of certain drugs. Nevertheless, while often correlated with the production of USVs, behavioral effects of drugs also may be separable from inferences about animals' affective responses [e.g., 7,44,64,66,70]. For example, our laboratory has observed USVs in 30- and 60-day cocaine reinstatement tests with no clear correspondence to drug-seeking behavior (i.e. operant lever presses; [7]). Therefore, these differences may provide insights into subtle differences between the roles of various circuits implicated in reward processing and drug-seeking motivation, which may subsequently improve our understanding of the factors that mediate relapse propensity.
Experimenter-Administered Drugs
Seminal work on animals' responses to psychostimulants performed by Burgdorf and colleagues [44] demonstrated that 50-kHz USVs can be evoked via intracranial injections of amphetamine into the nucleus accumbens (NAcc) core and shell subregions. In line with previous studies on the effects of stimulants, the rate of elicited vocalizations followed a canonical inverted-U dose-response function. 
Moreover, intracranial injections into the caudate-putamen (i.e., dorsal striatum) failed to increase rates of USV production [44], consistent with the suggestion that mesostriatal and nigrostriatal circuitry process motivation/emotion and sensorimotor information, respectively (but see [71]). These effects were later replicated by Thompson and colleagues [27], and it was further demonstrated that microinjections into the NAcc shell were more effective at eliciting 50-kHz USVs than injections localized to the NAcc core. Thus, it has been suggested that 50-kHz USVs are mediated by dopaminergic signaling in the ventral striatum. Consistent with these effects and with the more general hypothesis that 50-kHz USVs are mediated by dopamine transmission, multiple studies have demonstrated that dopaminergic antagonists can affect psychostimulant-induced USVs. Accordingly, D1 receptor antagonists, such as SCH23390, SKF-83566 and SCH39166, inhibit psychostimulant-induced 50-kHz USVs (i.e., amphetamine and cocaine [27,66,70]). Similarly, D2 receptor antagonists, such as raclopride, haloperidol, and pimozide, have been shown to inhibit 50-kHz USVs in animals treated with psychomotor stimulants [27,66,70]. Interestingly, Wright and colleagues [70] demonstrated that D1 or D2 antagonists attenuate 50-kHz USVs primarily by reducing the number of emitted FM USVs while having little effect on FF USVs, demonstrating that only the FM positive affective USVs are dopamine-dependent. In apparent contrast, attempts to reproduce psychostimulant-elicited 50-kHz USVs using D1-like or D2-like receptor agonists have proven unsuccessful [66]. Also, neither the dopamine transporter (DAT) inhibitor GBR 12909 nor the norepinephrine transporter (NET) inhibitor nisoxetine were shown to mimic the effects of amphetamine when examining USV emission [5]. Lastly, atypical antipsychotics have been shown to have mixed effects on psychostimulant-induced 50-kHz USV production. 
Specifically, pre-treatment with clozapine or risperidone was shown to inhibit 50-kHz USVs in both saline- and amphetamine-treated animals while sulpiride failed to have any effect on 50-kHz USV emissions [70]. Nevertheless, the combined D1/D2 receptor agonist apomorphine has been shown to elicit 50-kHz USVs at rates similar to those observed following cocaine administration [66]. Moreover, the combined D2/D3 agonist quinpirole has also been shown to increase 50-kHz USVs when injected intracranially into the NAcc [144]. Thus, it appears that both D1- and D2-like receptors may be necessary for eliciting 50-kHz USVs following psychostimulant administration, but clarifying the precise role of dopamine in the production of USVs requires further study and is likely to be circuit-specific. Dopamine-depleting lesions cause changes in the acoustic features of USVs (i.e. reduce the number and quality of FM USVs) but do not eliminate their emission (e.g., [43]). Therefore, other systems have been evaluated for their role in the production of 50-kHz USVs. For example, Wright and colleagues [32] demonstrated that multiple noradrenergic drugs affect amphetamine-elicited 50-kHz USVs. Namely, the α1 antagonist prazosin and the α2 agonist clonidine were shown to dose-dependently reduce amphetamine-elicited 50-kHz USVs. Similar to dopaminergic manipulations, it was shown that these agents predominantly affected FM USVs and trills. On the other hand, the α2 antagonist atipamezole and the β1/β2 blocker propranolol failed to affect amphetamine-elicited rates of 50-kHz USVs. Propranolol did, however, qualitatively change the profile of the observed USVs such that more FF calls were observed. Concordant with this observation, multiple other β1 or β2 antagonists and mixed β1/β2 blockers failed to have any effect on amphetamine-elicited 50-kHz USVs (i.e. betaxolol, ICI 118,551, and nadolol). 
Still, it was shown that the combination of a β1 antagonist (betaxolol) and a β2 antagonist (ICI 118,551) was able to affect the qualitative properties of the observed USVs (i.e., the complexity or number of observed modulations in frequency; [32]). Finally, it has also been shown that neurotrophic factors may play a role in modulating USV production. Specifically, blockade of the Trk-B brain-derived neurotrophic factor (BDNF) receptor with intracerebroventricular injections of K252A reduced the rate of cocaine-elicited 50-kHz USVs while causing no change in the cocaine-elicited locomotor response [66]. Thus, multiple systems may be affected by psychostimulants and combine to modulate animals' emotional response. Repeated drug exposure is known to produce changes in animals' behavioral responses to further administration of the drug (e.g., [72]) in conjunction with neuroanatomical changes (e.g., [73]). Concordant with these findings, rates of USVs sensitize across repeated cocaine or amphetamine exposure [28,64]. Notably, Ahrens and colleagues [28] demonstrated that increases in rates of FM USVs and trills predominantly account for the increase in USV emissions across repeated amphetamine injections. Moreover, it was shown that USVs remain sensitized for at least two weeks following amphetamine exposure despite a period of abstinence [28]. Finally, the time course of sensitization for USVs (i.e. the number of sessions) roughly corresponds to the canonical sensitization of locomotor activity [64]. However, USV emissions within early sessions were transient, while locomotor activity remained elevated throughout the testing period, suggesting that these two behaviors are dissociable [64]. Nevertheless, this dissociation disappeared across protracted treatment, illustrating that changes in behavioral responses occur as drug use continues. 
Interestingly, Mu and colleagues [64] also demonstrated that individual differences exist in the degree of sensitization based on animals' baseline USV rates. Indeed, USV rates are known to show high inter-individual variability but low intra-individual variability [34]. Most importantly, individual differences seem to correspond to a variety of behavioral attributes that may prove important for targeting individuals at risk for addiction. Studies of individual differences have shown that animals with high baseline rates of USVs show more 50-kHz USVs in anticipation of drug and develop a stronger place preference for drug [74,75]. Also, animals with high baseline USVs show a greater escalation of stimulant-induced 50-kHz USVs than animals with low baseline USVs [75]. Moreover, subjects that emit the greatest numbers of 50-kHz USVs also demonstrate a greater-than-average latency to show a response in a hotplate test for pain, and also spend more time in the open arms of an elevated plus maze, suggesting a decrease in anxiety [34]. Finally, when examining Long-Evans rats selectively bred for high or low rates of USV emission in response to 'tickling' by experimenters [76], it was shown that high-vocalizing animals exhibit more robust locomotor responses to amphetamine than animals bred for low rates of vocalization (although both groups exhibit drug-induced increases in locomotion). Thus, inherent differences in emotionality may also relate to individual differences in animals' behavioral responses to abused drugs as well as susceptibility for abuse. It has been demonstrated that a number of behavioral variables can affect USV emission. For example, behavioral manipulations can affect the production of 50-kHz USVs and further modulate the effects produced by psychomotor stimulants. 
Namely, Wright and colleagues [5] demonstrated that social interaction further increased the number of observed 50-kHz USVs produced by amphetamine administration when compared to singly tested animals. Furthermore, Natusch & Schwarting [77] demonstrated that animals emit greater numbers of amphetamine-induced 50-kHz USVs when tested in cages with bedding material as opposed to traditional testing chambers without bedding, and that animals exhibit a place preference for a bedding-covered floor. Overall, these results suggest that 1) environmental familiarity or social contact facilitates the production of 50-kHz USVs, 2) negative emotional states (e.g., those produced by a novel environment) may sum with positive affect produced by drugs of abuse to mediate the animal's net emotional output, and 3) animals' ongoing behaviors may contribute to rates of USV emission, although multiple sources demonstrate that these behaviors do not directly produce USVs. Overall, results from multiple studies suggest that psychostimulant administration increases rates of 50-kHz USVs. Both cocaine and amphetamine are capable of producing such an increase, with amphetamine producing a slightly greater effect than cocaine [32]. While USV analysis has proven fruitful for studies of psychomotor stimulants, it is clear that these results are not consistent across all abused drugs. For example, experimenter-administered caffeine (an 'atypical' stimulant) fails to increase rates of 50-kHz USVs over saline controls but does produce differences in the qualitative parameters of individual vocalizations [29]. Along these same lines, morphine administration has been shown to either suppress 50-kHz USVs [32] in experimental subjects or produce no difference when compared to saline controls [9,30,78]. Nevertheless, morphine produces elevations in locomotor activity and induces a CPP [32], both of which are also observed for psychomotor stimulants. 
Finally, MDMA [145] and nicotine administration did not elicit 50-kHz USVs, but returning animals to the drug-paired environment in the days following drug exposure did evoke 50-kHz USVs in drug-treated animals [30]. Thus, administration of different drugs of abuse causes different reward profiles as characterized by USVs and supplemental behavioral tasks (e.g., CPP). Overall, this suggests that the pharmacological effects of the drug may differ from the behavioral or emotional response when anticipating drug or in response to drug-paired cues.
Self-Administered Drugs
There are only a few studies of USVs during self-administration. Such studies are important as they capture the influences of both learning and pharmacology on the development of drug addiction. Models of psychostimulant self-administration provide robust face validity when measuring affective responses in anticipation of impending drug availability, in response to the presentation of drug-related cues, or when measuring differences in affective responses between short- and long-access paradigms or short- and long-term drug exposure. More importantly, USVs also provide predictive and construct validity, as it has been shown that the emotional response to drug relates to an individual's propensity for consumption, and USVs provide a passive measure of emotion which is free from the extraneous influences described above. In the first study to examine USVs during cocaine self-administration, Barker and colleagues [6] trained animals to self-administer cocaine under a variable-interval schedule in a long-access self-administration paradigm. This schedule was specifically chosen, as it can be used to manipulate rates of responding and drive animals to respond perseveratively. 
Specifically, low doses of cocaine on a variable-interval schedule cause high rates of responding and prevent animals from attaining drug 'satiety' [8,79,80], whereas higher doses or fixed-ratio 1 schedules produce more steady rates of responding by allowing animals to achieve satiety. When comparing animals receiving either high (~0.71 mg/kg/infusion) or low (~0.355 mg/kg/infusion) doses of cocaine under this schedule, it was observed that animals in the high-dose group emitted predominantly 50-kHz USVs, while animals in the low-dose group emitted predominantly short 22-kHz USVs [6]. Thus, while not directly tested in the experiment, these results suggest that high doses of cocaine produce positive affect. Moreover, sub-satiety doses produce a negative affective state that is observed in concordance with craving, as suggested by high levels of operant responding. That initial study of USVs during cocaine self-administration produced a number of subsequent questions. Barker and colleagues designed an experiment to explicitly test whether or not USVs differed as a function of the animal's cocaine level (calculated according to first-order pharmacokinetics; [8]). After initial load-up, calculated cocaine levels were manipulated using a series of drug 'clamps' wherein cocaine levels were held constant below, at, or above each animal's self-determined satiety threshold via computer-controlled micro-infusions (0.0018-0.021 mg/kg/infusion). Consistent with previous work from our laboratory, animals whose drug level was clamped below satiety exhibited robust increases in responding when compared to their normal self-administration contingencies. On the other hand, responding was attenuated in subjects whose drug levels were held at or above their individual satiety thresholds. Consistent with our hypothesis, it was shown that subjects emitted high rates of 22-kHz USVs when levels of cocaine were held below satiety threshold. 
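As background for the "calculated cocaine level" used in such clamp designs, first-order pharmacokinetics reduces to exponential elimination of each infused dose. The sketch below is illustrative only: the function, its parameters, and the half-life value are assumptions for demonstration, not the model or constants used in the cited study [8].

```python
import math

def calculated_drug_level(infusion_times, dose_mg_kg, t_min, half_life_min=9.0):
    """Estimated body drug level at time t_min (minutes), assuming each
    infusion adds dose_mg_kg instantaneously and decays with first-order
    (exponential) elimination kinetics. half_life_min is an illustrative
    value, not a constant from the cited study."""
    k = math.log(2) / half_life_min  # first-order elimination rate constant
    return sum(dose_mg_kg * math.exp(-k * (t_min - ti))
               for ti in infusion_times if ti <= t_min)
```

Under a model of this kind, a computer-controlled clamp simply schedules micro-infusions whenever the estimated level drifts away from the target, holding it below, at, or above the satiety threshold.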
Interestingly, it was also observed that 50-kHz USVs were emitted almost exclusively during animals' first self-administered infusions. Following the drug-loading period, rates of 50-kHz USVs decayed to near zero under all conditions tested: during continued maintenance of preferred drug level, or during any of the clamp conditions. This result is similar to observations from intraperitoneal (i.p.) drug administration studies, which have shown that USVs subside prior to the decay of other stimulant-induced behaviors (e.g., increases in locomotor activity; [64]) and suggest that positive affect is only acutely experienced during the transitory state from sobriety to intoxication. Notably, when levels of cocaine were held at or above satiety threshold, few USVs of either 22 or 50 kHz were observed. However, during normal maintenance, the rate of 22-kHz calls increased as a function of how far drug level fell below satiety threshold during the inter-infusion interval. The highest rates of 22-kHz emissions were observed during the sub-satiety clamp, which corresponded to holding drug levels at approximately half the animal's preferred level. Overall, these results suggest that cocaine self-administration produces an initial positive affective response followed by an opposing negative affective state whenever drug level falls below satiety threshold. Accordingly, the negative relationships between calculated level of cocaine and both rate of responding and negative affective USVs, plus the paucity of positive affective USVs during maintenance, suggest that responding during the maintenance phase of self-administration is perhaps more reliably driven by the motivation to escape declining, sub-satiety levels of the drug (i.e., negative reinforcement) rather than motivation to seek further bouts of euphoria. Subjects initiating a drug binge may anticipate the initial positive affective response and transiently experience positive reinforcement. 
An opponent process may follow, in which the subject is effectively "trapped" in a binge by the aversive experience whenever drug level begins to fall. Each episode of sub-satiety drug level may be experienced similarly to the onset of withdrawal symptoms, the difference during maintenance being the continued self-administration of drug, enabling avoidance of or escape from the aversive state of sub-satiety. A short-access (1 h) self-administration study by Maier and colleagues [35] suggests that the decline in 50-kHz USVs during load-up and subsequent shift towards 22-kHz USVs may occur as a result of repeated stimulant exposure. Specifically, Maier and colleagues [35] demonstrated that 50-kHz USVs briefly escalate early in self-administration training, suggesting sensitization to the drug. Continued exposure resulted in a decline in 50-kHz USVs late in training, suggesting the development of tolerance and perhaps dependence on the drug. This notion is further supported by the observation that lever responding continued to increase despite the observed decline in 50-kHz USVs. Thus, it might be suggested that animals become tolerant to the affective effects of psychostimulants over repeated exposure, causing a shift in animals' initial affective responses to the drug. Maier and colleagues [35] also observed an increase in 50-kHz USVs following a two-day abstinence period. Given the available evidence, one might suggest that the observed increase results from 1) development of an opponent, negative process following repeated drug experience and 2) attenuation of negative affect following a brief period of abstinence/withdrawal. A mixture of 50- and 22-kHz USVs was also observed in animals trained to self-administer methamphetamine [68]. In this study, 22-kHz USVs were most pronounced during the first day of self-administration, although the number of observed 50-kHz USVs was always greater than the number of 22-kHz USVs. 
Similar to other self-administration studies, the number of USVs emitted during self-administration decayed across repeated training, and very few long 22-kHz USVs were observed. Indeed, the absence of long 22-kHz USVs may be related to observations that abused drugs can cause changes in the quality of emitted USVs [9,32,33,70] or observations that stimulants can reduce the duration of USVs [9]. Similar to studies involving experimenter-administered cocaine, evidence from self-administration data suggests that individual differences exist in an animal's emotional response to abused psychostimulants. Specifically, Reno and colleagues [81] observed that high- and low-calling animals exhibit differences in USVs in anticipation of the opportunity to self-administer cocaine. In addition, high-calling animals exhibit greater escalations in cocaine-induced positive affective USVs across training when compared to low-calling animals. On the other hand, high- and low-calling animals exhibit no differences in the amount of cocaine they self-administer, nor in their psychostimulant-induced locomotor activity.
DRUG WITHDRAWAL AND AFFECTIVE DISTRESS: EVIDENCE FROM USVS
Addicts experiencing psychostimulant withdrawal report symptoms of depressed mood, fatigue, anhedonia, craving and anxiety [82,83]. Withdrawal-induced psychosomatic symptoms have been modeled in rodents (e.g., [84]), and these models can be used to better understand the precipitation and alleviation of withdrawal symptoms. Concordantly, pharmacotherapy development for cocaine addiction has focused on reducing withdrawal-induced affective distress [85]. Rat USVs provide a non-invasive means of characterizing affective states in preclinical models of withdrawal [25,26]. Indeed, evidence has shown that rats emit 22-kHz USVs when experiencing withdrawal from cocaine [86][87][88][89], opiates [90,91] and ethanol [91][92][93]. 
The emergence and cessation of affective distress can better inform our understanding of drug withdrawal states, and USVs can serve as a tool to accomplish an improved understanding. Affective distress is experienced during withdrawal from orally self-administered cocaine. In a seminal report characterizing USVs in cocaine-withdrawn rats, Barros and Miczek [86] observed startle-induced 22-kHz USVs when rats were withdrawn from orally self-administered cocaine at 72 but not at 24 hours post-cessation. The presence of 22-kHz USVs was interpreted as reflecting affective distress and/or anxiogenesis during cocaine withdrawal. The presence of 22-kHz USVs at 72 but not 24 hours post-cessation suggests that the emergence of affective distress may follow that of anhedonia. Rats withdrawn from chronic cocaine show signs of anhedonia within 24 hours post-cessation as evaluated by intracranial self-stimulation (ICSS; [84]), which is often used to measure hedonic state via changes in the relationship between responding and stimulation current. Moreover, Barros and Miczek [86] observed 22-kHz USVs from rats that had either continuous (24 hours/day for 30 days) or intermittent (4 hours/day for 30 days) access to cocaine, which supports earlier studies showing withdrawal symptom induction from different dosing regimens [84,[94][95][96]. Irrespective of dosing regimen, all animals emitted fewer 22-kHz USVs by 7 days post-cessation (as compared to observations at 72 hours) and returned to baseline calling rates by 4 weeks post-cessation [86]. Thus, some investigators have concluded that affective distress is experienced during drug withdrawal and suggested that a similar withdrawal state both in intensity and duration is experienced irrespective of drug dosing regimen. Affective distress is transiently experienced following intravenously self-administered cocaine binges. Addicts often consume cocaine in episodic binges [97].
The duration of cocaine binges varies depending on route of administration, but average binge lengths range from 7 to 17 hours and have been reported to last as long as 40 hours [97]. Moreover, the short-term effects of cocaine withdrawal following a binge have been collectively termed a "crash", and this state has been characterized by intense craving and depression (i.e. negative affect; [82]). Preclinical studies aimed at modeling human drug bingeing behavior have shown that rats withdrawn from either a 12- or 48-hour intravenously self-administered cocaine binge emitted more startle-induced 22-kHz USVs at 6 and 24 hours post-binge relative to drug-naïve control rats [87]. Interestingly, 22-kHz USVs returned to control levels by 72 hours post-binge. Although the cessation of 22-kHz USVs at 72 hours post-binge appears to contradict previous observations [86], the relatively short duration of affective distress following a cocaine binge supports clinical reports (for review, see [98]) and highlights the importance of implementing preclinical models that best match the parameters of human cocaine addiction. Moreover, these studies indicate that affective distress emerges more quickly following withdrawal from a high-dose, intravenously self-administered cocaine binge relative to a longer-access, low-dose, orally self-administered cocaine dosing regimen. Rats withdrawn from both actively self-administered and passively-administered cocaine experience affective distress. Mutschler and Miczek [88] found that rats emitted significantly more startle-induced 22-kHz USVs at 24 hours following a 16-hour passively-administered cocaine binge relative to cocaine self-administering animals. Moreover, both groups of cocaine-withdrawn rats emitted significantly more 22-kHz USVs compared to saline-treated control animals.
This finding extends earlier work showing that rats experience greater aversion and toxicity when cocaine is passively-administered [99] but nonetheless demonstrates that both passively- and self-administering rats experience affective distress when withdrawn from cocaine. Prior cocaine use has been suggested to mediate the intensity of cocaine withdrawal symptomatology relative to the initial withdrawal state. For example, repeated access to psychostimulants has been previously shown to alter patterns of acquisition, maintenance and reinstatement relative to initial experience with the drug ([100][101][102][103]; for review, see [59]). In support, locomotor behavior following amphetamine administration has been shown to be positively associated with subsequent cocaine self-administration behavior but was not found to be predictive of relapse propensity [103]. Previous studies have also reported that tolerance can develop to the reinforcing properties of cocaine (for review, see [72]) but that prior experience with cocaine did not alter the intensity of subsequent withdrawal states [89]. Mutschler and colleagues [89] found that the rate of 22-kHz USVs was relatively stable between rats experiencing withdrawal from either one, two or three 16-hour cocaine binge(s), which failed to support the hypothesis that prior drug use would reduce affective distress experienced during subsequent episodes of cocaine withdrawal. Indeed, the 10-day drug-free interval between cocaine binges may have reduced the tolerance-like effects established from prior use, as withdrawal-induced 22-kHz USVs have been shown to be less prevalent at 7 days post-cessation relative to earlier time points [87][88][104]. Thus, although prior cocaine use has been suggested to modulate the reinforcing properties of subsequent cocaine use, prior use does not appear to alter the intensity of cocaine withdrawal states following subsequent uses.
Cocaine withdrawal leads to long-lasting changes in immediate early gene expression as well as in dopamine and opiate receptor-mediated neural circuits, and these changes underlie symptoms of acute and prolonged withdrawal states. For example, animals withdrawn from a 16-hour self-administered cocaine binge showed mRNA downregulation of the immediate early gene zif268 in hippocampal but not mesolimbic brain regions at 24 hours post-cessation [104]. Zif268 is critical for learning and synaptic plasticity [105] and is involved in aversive memory maintenance [106]. Importantly, cognitive dysfunction has been observed during cocaine withdrawal in human addicts [107]. Thus, downregulation of zif268 in the hippocampus may contribute to impaired cognition during cocaine withdrawal but does not appear to underlie affective distress at 24 hours post-cessation. Mutschler and colleagues [104] further observed downregulation in zif268 mRNA at 14 days post-binge in the hippocampus, nucleus accumbens and the basolateral amygdala, which corroborates other studies observing long-lasting, withdrawal-induced changes in extracellular dopamine levels in mesolimbic circuits [108][109][110][111]. Moreover, a recent study in rats has shown that κ opioid receptors (KORs), which have been previously shown to mediate motivational states [112], become dysregulated within amygdalar subregions during cocaine withdrawal and may underlie short-term affective distress post-cessation [113]. Combined, these studies reveal that rats emit 22-kHz USVs when withdrawn from cocaine binges, and that short- and long-term changes in immediate early genes and neurotransmitter systems regulating aversive memory formation and affect are observed during withdrawal from cocaine self-administration.
While studies have yet to elucidate a predictive role of changes in immediate early gene expression with induction of affective distress, it is clear that drug withdrawal is characterized by both, and thus a causal relationship may exist between the two. Results obtained from psychostimulant withdrawal studies have been supported by findings from animal models of opiate and ethanol withdrawal. Much like psychostimulant withdrawal, animals withdrawn from morphine or heroin emit more 22-kHz USVs than control animals [90,91]. Furthermore, systemic administration of naltrexone, an opiate receptor antagonist, was found to dose-dependently reduce 22-kHz USVs in animals withdrawn from chronic morphine [114], suggesting that activated opiate circuits underlie the emergence of affective distress during opiate withdrawal. It is noteworthy that cocaine withdrawal was recently shown to lead to long-term changes in KOR-modulated limbic circuits [113], which suggests a ubiquitous role of opiate systems in mediating affective distress during withdrawal from multiple drugs of abuse. Separate lines of research have found that withdrawal from drugs of abuse that act on GABAergic transmission, such as ethanol, leads to increased rates of 22-kHz USVs relative to saline-treated control animals [92,[115][116][117]. Moreover, administration of a KOR antagonist, nor-binaltorphimine (nor-BNI), attenuated the increase in 22-kHz USVs during acute ethanol withdrawal, which suggests involvement of KOR circuits in underlying this opposing process [93]. Thus, evidence from animal models of opiate and ethanol withdrawal corroborates results obtained from psychostimulant withdrawal studies to indicate common underlying neural substrates mediating affective distress during withdrawal from drugs of abuse (for review, see [118]). Withdrawal from drugs of abuse produces affective distress, and multiple factors mediate the presence or relative intensity of withdrawal states.
Psychostimulants, such as cocaine, have been shown to reliably induce affective distress in self-administering and passively-administered rats during withdrawal. Moreover, withdrawal from opiates and ethanol has been shown to lead to similar states of affective distress as well as changes in neural substrates governing emotion, learning and memory. USVs, specifically those in the 22-kHz frequency range, have been observed in nearly all drug withdrawal states and corroborate human self-reports to characterize affective distress as a cardinal feature of drug withdrawal. To our knowledge, USV recording remains the only available method to evaluate, non-invasively and without requiring operant behaviors, changes in affect during psychostimulant withdrawal that are otherwise difficult or impossible to detect. Furthermore, USVs can be used to resolve the magnitude of affective distress from other drugs of abuse, such as opiates and ethanol, and can be observed in concert with somatic withdrawal signs. Future studies can aim to better uncover the gene transcriptional and molecular pathways involved in acute and protracted withdrawal states from drugs of abuse, and USVs can be used to determine how neural changes mediate affect during drug withdrawal.

Two of the few studies reporting USVs in response to drug-related cues [65,133] demonstrated that cues signaling the forthcoming opportunity to self-administer cocaine increase emissions of positive affective USVs. These anticipatory USVs were specifically emitted during the opportunity to self-administer cocaine, as evidenced by fewer emissions in saline-treated and yoked controls. Notably, anticipatory USVs were attenuated when environmental cues had been associated with extinction training instead of cocaine self-administration, demonstrating a modification to the original context-drug (i.e. CS-US) association, whereby context no longer predicted drug reward and thus no longer elicited an anticipatory response.
These results are similar to observations that 50-kHz USVs are elicited following the presentation of cues associated with natural rewards, such as palatable food [134], as well as to studies demonstrating the induction of approach behavior from females when precopulatory 50-kHz USVs are emitted from males [135]. Together, these studies suggest that learned cue-reward associations for drug, food, and sexual rewards elicit powerful effects on animals' affective state and behavior but also that the semiotic value of an association can be extinguished following changes in the anticipated outcome. Studies have also shown that a period of abstinence or extinction training can modulate the emotional states observed during drug anticipation [68,133]. Specifically, a two-day abstinence period was shown to increase the number of anticipatory, cue-induced positive affective USVs when compared to rates of USVs in sessions that were not preceded by abstinence. Abstinence did not result in any changes in locomotor behavior, adding to results which suggest that affective and behavioral responses to drug-related cues are dissociable (e.g., [7]). Moreover, the presentation of a drug-paired cue was shown to elicit positive affective USVs during a methamphetamine reinstatement test following one week of extinction training, and this effect was enhanced when animals were drug-primed prior to reinstatement testing [68]. Mahler and colleagues [68] further observed that anticipatory positive affective USVs during the reinstatement test with both cue and a methamphetamine priming injection were positively correlated with drug-seeking behavior (i.e. active lever pressing). These results suggest that both drug cues and drug priming are effective at evoking emotional responses and that their ability to produce such a response may 'incubate' or increase in magnitude following the cessation of drug use.
Furthermore, drug-seeking effort appears to be associated with the number of 50-kHz USVs observed during reinstatement testing. Anticipatory positive affective USVs may, after all, reflect a combination of positive anticipation [133] and waning negative withdrawal symptoms from prior sessions [26,[86][87][88]. Thus, a period of abstinence may allow for withdrawal symptoms to subside, resulting in an increase in the net amount of positive anticipation that is observed. Studies using USVs have also illustrated important individual differences in the incentive motivational properties accrued by drug-related cues. Studies by Robinson and colleagues have revealed that certain animals utilize cues as effective conditioned reinforcers ('sign-trackers') while other animals instead track the outcome or goal ('goal-trackers') or, indeed, use a mixed strategy [67,[136][137][138][139][140]. USVs in a cocaine-paired environment were shown to be greater for sign-tracking animals than for goal-tracking animals or for sign-tracking animals for whom stimuli were administered without cocaine (i.e., explicitly unpaired; [67]). Specifically, USVs and behavioral activity were measured for sign- and goal-tracking animals using a conditioned place preference paradigm. Sign-tracking animals developed a place preference for a cocaine-paired environment, whereas goal-tracking animals did not. Moreover, while all animals showed an increase in positive affective USVs during drug administration sessions, positive affective USVs during a drug-free test in the presence of drug-paired environmental cues were significantly increased in sign-tracking animals [67], suggesting that conditioned cues serve to generate a greater number of 50-kHz USVs, more so in sign-trackers than goal-trackers. Drug-paired contexts can induce drug-seeking behavior and affective reactions following abstinence.
Our laboratory has previously observed that cocaine-experienced animals vocalize in both 22- and 50-kHz frequency ranges when re-exposed to the self-administration chamber context at 30 and 60 days after cocaine self-administration [7]. It was further observed that cocaine-seeking behavior is not necessarily associated with either 50- or 22-kHz USVs. These findings show that drug anticipation can elicit positive affective reactions [28,65,141] or negative affective responses. Notably, negative affective responses may be the result of learned associations with drug-paired cues or may be the result of reward omission [10,142]. Despite the absence of a clear, unidirectional emotional response, drug-seeking behavior was nonetheless observed at 30- and 60-day reinstatement tests, suggesting a long-lasting context-drug association. A different study showed that a morphine-paired context elicited significantly more anticipatory positive affective USVs after a two-week abstinence period relative to the positive anticipatory affective reactions prior to the two-week abstinence period [143]. Taken together, these studies show that drug-paired contexts reliably elicit affective reactions in rodents following a period of abstinence, but that the strength of CS-US associations and the duration of abstinence prior to reinstatement testing may modulate the magnitude and relative presence of positive and negative affective states. In combination, the aforementioned results demonstrate that rats develop conditioned emotional responses to drug-related cues. Conditioned responses to cues predicting impending drug availability consistently demonstrate that animals exhibit a positive anticipation of forthcoming drug. Notably, the magnitude of this anticipation may be modulated by the experimental schedule, given evidence showing an increase in the magnitude of emotional responses following a brief period of abstinence.
Data also suggest that the nature of the association between cues and drug rewards reflects animals' propensities to administer drugs and perhaps ultimately their propensities to relapse.

Considerations and Limitations

Early studies using USVs often selected for animals with sufficiently high rates of USV emission in order to reliably detect changes in calling behavior. While an important first step, studies demonstrating that high- and low-calling animals represent different genotypes and phenotypes, particularly in their differing degrees of anticipation of stimulant drugs [81], suggest that future studies must sample the full range of vocalization rates in order to avoid targeting one particular phenotype/genotype. Also, studies of drug abuse often focus on the 'rewarding' rather than 'reinforcing' properties of drugs. However, data from studies using USVs suggest that both positive and negative affect play a role in drug-seeking behaviors. Specifically, available data suggest that, following positive reinforcement during initial load-up, long-access drug-seeking behaviors during a binge are maintained via a negative, rather than positive, reinforcement mechanism [8]. Along these lines, it is important that studies analyze the full spectrum of rat vocalizations, including both 22- and 50-kHz frequency ranges. Also, for some time the role of short 22-kHz vocalizations remained unknown. However, our current understanding of these calls has suggested that: 1) short 22-kHz USVs occur only during aversive situations and have yet to be recorded during a situation that is explicitly positive; 2) global pharmacological manipulations can change the duration of USVs [9], suggesting that short 22-kHz USVs observed under the influence of psychomotor stimulants are categorically the same as the longer aversive vocalizations observed under naturalistic conditions; 3) short 22-kHz USVs elicit a greater defensive response (i.e., retreat/hiding) from conspecifics [19].
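The recommendation to analyze the full spectrum of calls can be sketched as a simple scoring step: each detected call, summarized by its peak frequency and duration, is sorted into the 50-kHz band or into long versus short 22-kHz categories. This is a hypothetical illustration only; the band edges and the 0.3 s long/short duration cutoff are assumed values for the sketch, not figures taken from the review.

```python
from collections import Counter

def classify_call(peak_khz, duration_s, short_cutoff_s=0.3):
    # Assumed illustrative band edges: 35-80 kHz for the 50-kHz band,
    # 18-32 kHz for the 22-kHz band; anything else is left unclassified.
    if 35.0 <= peak_khz <= 80.0:
        return "50-kHz"
    if 18.0 <= peak_khz <= 32.0:
        # Long vs. short 22-kHz calls split at an assumed duration cutoff.
        return "long 22-kHz" if duration_s >= short_cutoff_s else "short 22-kHz"
    return "other"

# Toy (peak frequency in kHz, duration in s) pairs for one recording session.
calls = [(55.2, 0.04), (22.1, 1.20), (23.5, 0.08), (60.0, 0.03), (21.8, 0.65)]
counts = Counter(classify_call(f, dur) for f, dur in calls)
print(counts["50-kHz"], counts["long 22-kHz"], counts["short 22-kHz"])  # 2 2 1
```

Per-category counts from such a step, normalized by session length, would give the band-specific calling rates that the studies above compare across conditions.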
Ultimately, all types of vocalizations should be considered in order to avoid the loss of crucial information. Current data implicate dopaminergic and noradrenergic systems in the production of positive affective vocalizations [32,33,70], as well as the ascending cholinergic system in the production of aversive vocalizations [11; Brudzynski, this issue]. However, it is also known that manipulations of these systems can affect motor systems and thus the acoustic quality of USVs (e.g., [43]). With this in mind, future pharmacological studies might focus on targeting specific circuits in order to elucidate key differences in the limbic and motor contributions to psychostimulant-induced changes in USVs. In addition, studies of drug self-administration and studies incorporating models of relapse/reinstatement are still relatively sparse. Thus, further study may be needed in order to elucidate the role of animals' emotional responses during these processes.

Conclusions

Given the relationships between psychostimulant use and both positive and negative affective states, models of affect in preclinical studies have recently gained attention [5-9, 25-30, 32-34, 60, 61, 64-69, 133]. Based on available data, USVs provide an objective measure from which an animal's emotional response can be inferred. Indeed, USVs suggest that animals show an initial positive affective response to psychostimulant administration and can reveal individual differences in animals' responses to the drug upon initial exposure. Moreover, USVs have been used to demonstrate that animals' emotional and behavioral responses become sensitized to psychostimulants across early exposures before eventually exhibiting signs of drug tolerance. Perhaps most importantly, the development of tolerance represents a point of divergence between affective and behavioral responses to psychostimulants. That is, drug-seeking behaviors (e.g., lever responses) continue to escalate, while affective responses decline.
Such divergence is a strong endorsement of the potential utility of adding USV assessment to available behavioral measures. In the drug-dependent animal, USVs reveal a sequence of positive and negative affective responses to psychomotor stimulants, predominantly characterized by an initial bout of positive affect followed by an opponent negative emotional state, mirroring the affective responses observed in human addicts (e.g., [50]). Periods of abstinence are known to produce a period of acute withdrawal symptoms (i.e., ~72 h), during which negative affective states are transiently observed. Finally, it has been shown that drug-paired cues produce a learned, positive anticipatory response during the course of drug administration, and that the presentation of these cues following abstinence produces both positive affect and reinstatement behavior. The available data also provide a number of important insights for future studies of drug abuse. First, it is clear that animals' affective responses are different depending on the class of abused drug and may therefore provide subtle insights into the differences between drug types. Indeed, data have suggested that opiates and psychostimulants produce some similar behavioral responses (i.e., CPP and hyperlocomotion) but induce differential USV responses. Along these lines, evidence also suggests that motivated behaviors (e.g., operant responding) are dissociable from affective responses (i.e., USVs). Thus, the integration of USVs into well-designed behavioral paradigms allows for insights that cannot be ascertained from behavioral measures alone. With this in mind, the incorporation of USVs may provide insights into the differences between circuits involved in motivation and those involved in processing emotion.
Finally, USVs provide a mechanism for identifying individual phenotypes and genotypes that show traits relevant to addiction and might identify subjects that are susceptible to substance dependence or are otherwise relapse-prone. Consequently, an understanding of these differences could allow for the identification of the analogous phenotype (and possibly genotype) in humans in order to develop targeted therapies.
High frequency oscillations in spin-torque nano oscillator due to bilinear coupling

Exchange coupling in an interfacial context is crucial for a spin-torque nano oscillator (STNO) that consists of a non-magnetic spacer which is alloyed with a ferromagnetic material. Currently, investigations on the dynamics of the free layer magnetization and frequency enhancement in the STNO with bilinear coupling are still being actively pursued. In the present work, we investigate the dynamics of the STNO in the presence of bilinear coupling but in the absence of an external magnetic field by analyzing the associated Landau-Lifshitz-Gilbert-Slonczewski (LLGS) equation, and consequently the impact of the bilinear coupling on the dynamics of the magnetization of the free layer is studied. It is observed that the frequency of the oscillations in the magnetization component along the direction of the pinned layer polarization can be enhanced above 300 GHz by positive bilinear coupling and up to around 30 GHz by negative bilinear coupling. We further reveal a transition from in-plane to out-of-plane precession both for positive and negative bilinear couplings. We also analyze the switching of the magnetization for different values of current and bilinear coupling. Our detailed investigations of the STNO with bilinear coupling point toward the possibility of high-frequency devices operated by the applied current and bilinear coupling in the absence of a magnetic field.

I. INTRODUCTION

A spin-polarized electrical current can impart spin angular momentum to a ferromagnetic material, which can be used to control the magnetization state of a magnetoresistive device called a spin-torque nano oscillator (STNO) [1][2][3][4][5][6][7][8][9][10][11][12][13]. In particular, it is feasible to induce oscillations or precession of the magnetization, which is relevant for tunable microwave devices, or to reverse the magnetization, which is essential for various magnetic memory systems [14].
In an STNO, two ferromagnetic layers are separated by a thin nonmagnetic but conductive layer called a spacer. Among the two ferromagnetic layers, one is called the free layer, which is thinner than the other, the pinned layer. In the free layer the direction of magnetization can change, while it is fixed in the pinned layer. Further, some studies have shown that the spacer layer can promote a strong interlayer exchange coupling between its adjacent ferromagnetic layers [15]. The bottom and top layers of the two in-plane magnetized ferromagnetic layers are exchange-coupled via a Ruderman-Kittel-Kasuya-Yosida (RKKY) interaction across the thin nonmagnetic spacer, whose thickness is tuned to produce an antiferromagnetic coupling in zero applied field [15][16][17][18][19]. For instance, a nonmagnetic layer, typically made of Ru [20], introduces an RKKY exchange coupling between two magnetic layers [20]. The spin direction of the ferromagnetic layers can be parallel or antiparallel to each other depending upon the thickness of the spacer layer in magnetic multilayer systems. This parallel or antiparallel orientation of the ferromagnetic layers is called a collinear magnetization configuration [20,21]. On the other hand, a noncollinear magnetization configuration is possible due to the competition between the interlayer coupling energy and the magnetic anisotropies of the coupled ferromagnetic layers for some structures. Recently, Nunn et al. have reported that the exchange coupling between two ferromagnetic layers (Fe) coupled through a nonmagnetic interlayer (Ru) is essential in controlling the magnetic layers' functionality [22], and this has now been observed in various systems. It has been explained theoretically by several different approaches [23][24][25][26][27][28][29][30][31].
Recent results [7,12,13,22,30,31] in this context show that the presence of an exchange coupling system forms the backbone of many spintronic applications such as magnetic field sensors, magnetic memory devices [24,25], magnetoresistive random access memory (MRAM) [26] and spin-torque nano oscillators [1][2][3]. Based on their nanoscale size and suitability for room-temperature operation, spin-torque oscillators (STOs) provide exciting possibilities for these applications. However, their tunable oscillation frequency range is only from 100 MHz to 10 GHz [27,28]. Recently, we investigated and reported that the frequency of an STNO with bilinear and biquadratic couplings can be enhanced above 300 GHz by the current [29]. Also, Kurokawa et al. [30] have shown oscillations of the free layer magnetization in the components along the perpendicular directions of the pinned layer polarization with frequencies up to 576 GHz in the presence of bilinear and biquadratic interlayer exchange couplings in STNOs, with the free layer having a low transition temperature for the saturation magnetization. In their investigation they have shown that the biquadratic coupling is essential for the high frequency [30]. In this connection, our present report provides a detailed study of a Co|RuFe|Co STNO with bilinear interlayer exchange coupling alone between the free and pinned ferromagnetic layers and shows the existence of oscillations of the free layer magnetization in the component along the pinned layer polarization with frequencies above 300 GHz, with the free layer having a high transition temperature. This standalone role of the bilinear interlayer exchange coupling deserves thorough study, since it is exploited in many spintronic devices [31] and multilayer magnetic thin films.
Depending on the interfacial exchange coupling, both negative and positive exchange couplings have been seen in ferromagnetic/ferrimagnetic transition-metal and rare-earth alloy multilayer thin films [32,33], and the role of the bilinear coupling coefficient has been experimentally studied in Ref. [22]. However, magnetization oscillations driven by bilinear coupling in an STNO without an external magnetic field have not been thoroughly studied, numerically or analytically, in the literature [34]. The paper is organized as follows. First, we formulate the model and the governing LLGS equation of motion and effective magnetic field for the present study in Sec. II. The dynamics for positive and negative bilinear coupling and the expression for the minimum current for oscillations are presented in Secs. III and IV, respectively. Section V is devoted to the conclusion of the present work.

II. MODEL

The schematic picture of an STNO considered for our study, which consists of a free layer, a spacer layer and a pinned layer, is shown in Fig.1. The magnetization of the free layer is denoted as M = M_s m, where M_s is the saturation magnetization. While the magnitude of the magnetization is fixed, its direction can change over time. The magnetization of the pinned layer P = M_s p is fixed in both magnitude and direction. Here m and p are the unit vectors along M and P, respectively. As shown in Fig.1, positive and negative currents correspond to the flow of electrons from the free layer to the pinned layer and vice versa, respectively. The free and pinned layers are considered to be made up of Co. The spacer layer is a nonmagnetic conductive layer, constituting an alloy of Ru and Fe. The magnetization dynamics described by the LLGS equation that governs the motion of the unit vector m is given as

dm/dt = -γ m × H_eff + γ H_S m × (m × p) + α m × (dm/dt).   (1)

Here, γ and α are the gyromagnetic ratio and damping parameter, respectively.
The spin-torque strength is

H_S = ħηI / [2e M_s V (1 + λ m·p)],   (2)

where ħ (= h/2π) is the reduced Planck constant, I is the current, e is the electron charge, and V is the volume of the free layer; η and λ are the dimensionless parameters determining the magnitude and angular dependence of the spin-transfer torque. The effective magnetic field H_eff is given by

H_eff = H_ani + H_dem + H_bil,   (3)

where H_ani and H_dem are the anisotropy and demagnetization fields, respectively. The effective field also contains the bilinear interlayer exchange coupling field H_bil between the free and pinned layers, the details of which are given below. Specifically, the various interactions in (3) are given by

H_ani = H_k m_z e_z,  H_dem = -4πM_s m_z e_z,  H_bil = -(J/(M_s d)) p.   (4)

Consequently, we have

H_eff = -(J/(M_s d)) p - (4πM_s - H_k) m_z e_z.   (5)

Here e_x, e_y and e_z are the respective unit vectors along the positive x, y and z directions, H_k is the magnetocrystalline anisotropy constant, J is the coefficient of the bilinear coupling, M_s is the saturation magnetization and d is the thickness of the free layer. The energy density of the free layer responsible for the effective field H_eff = -∂E/∂(M_s m) is given by

E = 2πM_s² m_z² - (H_k M_s/2) m_z² + (J/d)(m·p).   (6)

The pinned layer is considered to be polarized along the positive x-direction, i.e. p = e_x. The material parameters are adapted as M_s = 1210 emu/c.c., H_k = 3471 Oe, η = 0.54, λ = η², d = 2 nm, A = π × 60 × 60 nm², V = Ad, α = 0.005 and γ = 17.64 Mrad/(Oe s). Since H_k < 4πM_s, the system exhibits easy-plane anisotropy for the xy-plane, or hard-axis anisotropy for the z-axis, due to the resultant demagnetization field -(4πM_s - H_k) m_z e_z. This means that the magnetization is always pulled towards the xy-plane whenever it moves away from the plane, with a strength directly proportional to m_z. Therefore, before applying any current, to minimize the energy (Eq.(6)), the magnetization of the free layer settles at (-1,0,0) for positive bilinear coupling (J > 0) or (1,0,0) for negative bilinear coupling (J < 0).
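As a consistency check on the model, the sketch below evaluates the free-layer energy density with the quoted material parameters and verifies numerically that the effective field equals -(1/M_s) ∂E/∂m, and that the J > 0 ground state is (-1,0,0) as stated in the text. The closed forms used for E and H_eff (perpendicular anisotropy H_k m_z e_z, demagnetization -4πM_s m_z e_z, bilinear field -(J/(M_s d)) p, CGS units throughout) are our reading of the model, not code from the paper.

```python
import numpy as np

# Parameters quoted in the text (CGS units): Ms (emu/c.c.), Hk (Oe), d (cm),
# J (erg/cm^2; note 1 mJ/m^2 = 1 erg/cm^2).
Ms, Hk, d, J = 1210.0, 3471.0, 2.0e-7, 0.756
p = np.array([1.0, 0.0, 0.0])          # pinned-layer polarization along +x

def energy(m):
    # E = 2*pi*Ms^2 m_z^2 - (Hk*Ms/2) m_z^2 + (J/d) m.p   (erg/c.c.)
    return 2*np.pi*Ms**2*m[2]**2 - (Hk*Ms/2)*m[2]**2 + (J/d)*np.dot(m, p)

def h_eff(m):
    # H_eff = -(1/Ms) dE/dm = -(J/(Ms d)) e_x - (4*pi*Ms - Hk) m_z e_z  (Oe)
    return np.array([-J/(Ms*d), 0.0, -(4*np.pi*Ms - Hk)*m[2]])

# 1) the closed-form field matches a central-difference gradient of the energy
m = np.array([0.6, 0.48, 0.64])        # already a unit vector
eps = 1e-6
grad = np.array([(energy(m + eps*e) - energy(m - eps*e))/(2*eps) for e in np.eye(3)])
print(np.allclose(h_eff(m), -grad/Ms, rtol=1e-5))      # True

# 2) for J > 0, the antiparallel in-plane state has the lowest energy among
#    the candidate states discussed in the text
states = {"S1": [-1, 0, 0], "S3": [1, 0, 0], "out-of-plane": [0, 0, 1]}
print(min(states, key=lambda s: energy(np.array(states[s], float))))  # S1
```

With these numbers the bilinear field magnitude J/(M_s d) is about 3.1 kOe, comparable to H_k, which is consistent with the bilinear term dominating the in-plane energetics.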
This implies that the system exhibits antiferromagnetic coupling between the free and pinned layers for positive bilinear coupling and ferromagnetic coupling for negative bilinear coupling [20]. It has been shown that the magnitude and sign of the bilinear coupling coefficient can be experimentally tuned by changing the concentration of Fe in a spacer layer made of the Ru_{100−x}Fe_x alloy [22]. Since the oscillations are observed when I < 0 for positive bilinear coupling and I > 0 for negative bilinear coupling, the two cases are investigated separately in the following sections.

III. DYNAMICS FOR THE POSITIVE BILINEAR COUPLING

In the absence of current, the equilibrium state of the unit magnetization vector m for the positive bilinear coupling is S_1 = (−1,0,0), since the field due to the interaction H_bil acts along the negative x-direction. This is confirmed in Figs. 2(a) and 2(b), where the time evolution of m_x and m_y is plotted for J = 0.756 mJ/m² and 0.352 mJ/m², respectively, for different initial conditions. In both figures we can observe that the magnetization finally reaches the state S_1. These numerical results coincide well with the experimental results obtained by Nunn et al. [22], where the same system exhibits an antiparallel configuration between the magnetizations of the free and pinned layers for J = 0.756 mJ/m² and 0.352 mJ/m², corresponding to Ru_32Fe_68. When a current is applied, depending upon its magnitude, the system exhibits three different dynamics for m. (i) When |I| < |I_min|, the unit magnetization vector m stays in the state S_1 where it was already. (ii) When |I_min| < |I| < |I_max|, the vector m exhibits continuous precession. (iii) When |I| > |I_max|, the vector m moves away from (−1,0,0) and settles into the state S_2 (near (0,0,±1)) for small J (< 2.8 mJ/m²) or into the state S_3 = (1,0,0) for large J (> 2.8 mJ/m²).
Hence the states S_1, S_2 and S_3 are associated with |I| < |I_min|, with |I| > |I_max| for J < 2.8 mJ/m², and with |I| > |I_max| for J > 2.8 mJ/m², respectively. The critical value of the positive bilinear coupling strength, J_c = 2.8 mJ/m², is derived in Eq.(10). Here, I_min and I_max are the minimum and maximum currents, respectively, between which oscillations can be exhibited. To confirm the precession of m, the oscillations of m_x and the tunability of the frequency by the current, Eq.(1) is numerically solved by the fourth-order Runge-Kutta method with adaptive step size. The initial condition of m for the numerical simulation is randomly chosen near the state S_1. When a negative current with magnitude |I_min| < |I| < |I_max| is applied, the magnetization, which was in the S_1 state, moves away from it due to the spin-transfer torque (STT). This is because the incoming electrons in the free layer, which are spin polarized along the positive x-direction, always push the magnetization to align with the positive x-direction. Once the magnetization moves away from the state S_1 by the STT, continuous precession is achieved due to the balance between the damping (due to the effective field) and the STT. It can be seen from Fig. 3(a) that the trajectory corresponding to the current I = −0.5 mA (red) exhibits in-plane precession around the x-axis due to the field from the positive bilinear coupling. The direction of the precession is clockwise as seen from the positive x-axis. When the strength of the current is increased further to I = −1 mA (blue), the trajectory of the magnetization transforms slightly, as shown in Fig. 3(a): the trajectory appears to be folded along the negative x-axis. The magnetization gets close to the positive x-axis when it reaches the xy-plane. This is because the resultant demagnetization field becomes weaker when the magnetization gets closer to the xy-plane.
Therefore the STT, which always moves m towards the positive x-axis, becomes stronger and moves the magnetization towards the positive x-axis as much as possible. Once the magnetization crosses the xy-plane, the magnetization moves away from the positive x-axis. This is because the resultant demagnetization field rotates the magnetization from the negative to the positive y-axis in the northern hemisphere and from the positive to the negative y-axis in the southern hemisphere. When the current is further increased to −1.5 mA (brown), the magnetization shows a transition from in-plane precession to out-of-plane precession around the z-axis, as shown in Fig. 3(a). This is because an increase of current increases the magnitude of the spin-transfer torque. In spherical polar coordinates, m = (sin θ cos φ, sin θ sin φ, cos θ), Eq.(1) can be written as

dθ/dt = P(θ, φ),   (7)
dφ/dt = Q(θ, φ),   (8)

where P and Q follow from Eq.(1) with the effective field given above. Here, θ and φ are the polar and azimuthal angles, respectively, and H_S0 = ħηI/(2eM_sV). The equilibrium state is obtained from the equations P(θ*, φ*) = 0 and Q(θ*, φ*) = 0, where φ* is numerically observed to be φ* ≈ 0. This leads us to derive the relation

sin θ* ≈ J / [dM_s(4πM_s − H_k)].   (9)

Therefore, the equilibrium state S_2 for m when |I| > |I_max| is given by S_2 ≈ (sin θ*, 0, ±cos θ*), with sin θ* as given above. However, when the magnitude of the current is increased much beyond |I_max|, the equilibrium state slightly moves away from the state S_2, and if the magnitude of the current is extremely large (|I| ≫ |I_max|), i.e. above ∼100 mA, the magnetization settles in the state S_3 = (1,0,0). From Eq.(9), we can understand that the value of θ* becomes π/2 when J = dM_s(4πM_s − H_k).
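The out-of-plane equilibrium angle of Eq.(9) and the critical coupling at which θ* reaches π/2 can be evaluated directly. The sketch below uses the Sec. II parameters (J in erg/cm², numerically equal to mJ/m²; function names are illustrative):

```python
import math

MS = 1210.0   # saturation magnetization, emu/cc
HK = 3471.0   # anisotropy field, Oe
D  = 2e-7     # free-layer thickness, cm (2 nm)

# Eq.(9): sin(theta*) = J / (d*Ms*(4*pi*Ms - Hk)); theta* = pi/2 at J = Jc
JC = D * MS * (4 * math.pi * MS - HK)   # critical coupling, erg/cm^2 = mJ/m^2
print(round(JC, 2))                      # -> 2.84, i.e. ~2.8 mJ/m^2 as quoted

def theta_star(J):
    """Polar angle of the equilibrium state S2 for positive coupling J (mJ/m^2)."""
    s = J / JC
    if s >= 1.0:
        return math.pi / 2               # S2 has merged with the xy-plane (state S3)
    return math.asin(s)
```

The numerical value 2.84 mJ/m² reproduces the text's J_c ≈ 2.8 mJ/m² from the quoted M_s, H_k and d alone.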
This means that the equilibrium state S_2 of the magnetization moves towards the state S_3 = (1,0,0) as the strength of the positive bilinear coupling J increases, and reaches (1,0,0) as J approaches the critical value

J_c = dM_s(4πM_s − H_k) ≈ 2.8 mJ/m².   (10)

Similarly, the magnetization precession for a high strength of the bilinear coupling (J = 7.0 mJ/m²) is also investigated by plotting the trajectories for the currents I = −2 mA (red), −2.1 mA (blue), −2.2 mA (black), −2.3 mA (magenta), −2.35 mA (orange) and −3 mA (black point) in Fig. 3(b). Unlike the case of low bilinear coupling shown in Fig. 3(a), there is no transition from in-plane to out-of-plane precession while increasing the magnitude of the current, and the magnetization exhibits only in-plane precession around the x-axis. This can be reasoned as follows: when the strength of the bilinear coupling field is large due to large J (> 0), the STT and the resultant demagnetization field are dominated by this bilinear coupling field. Therefore, the rotations due to the resultant demagnetization field and the approach of the magnetization towards the positive x-axis due to the STT are not exhibited. When the current is increased further, the trajectory moves from the negative to the positive x-axis and settles into the equilibrium state S_3 when |I| > |I_max|, where I_max = −2.35 mA for J = 7.0 mJ/m². The equilibrium state for the current −3 mA is shown by the black point in Fig. 3(b). To confirm the oscillations, the time evolution of the component m_x is plotted in Fig. 3(c). The frequency of the oscillations of m_x is plotted against the current for different values of the bilinear coupling in Fig. 4(a) and against the bilinear coupling for different values of the current in Fig. 4(b). From Fig. 4(a), we can understand that when the bilinear coupling coefficient is low, the frequency decreases up to some critical current I_c and then increases. This change of the frequency from decrement to increment is attributed to the transition of the magnetization precession from in-plane to out-of-plane, as discussed earlier with reference to Fig. 3(a).
In Fig. 4(a), the existence of I_min and I_max is evident, and the range of current for the oscillations (|I_max| − |I_min|) confirms the wide frequency tunability by the current. The magnitude of I_c slightly decreases with the increase of J. Also, we can observe that when J is large (≥ 2.9 mJ/m²) the frequency decreases with the increase in the magnitude of the current up to I_max, and I_c does not exist. This is due to the nonexistence of out-of-plane precession, as shown in Fig. 3(b). From Fig. 4(a) it is observed that the tunability range (|I_max| − |I_min|) decreases and increases with J when the strength of J is small and large, respectively. At a given current, the frequency increases with the magnitude of the bilinear coupling. Also, it is confirmed that the frequency can be enhanced up to 300 GHz for J = 12.0 mJ/m², and even above when J is increased further. Similarly, the frequency is plotted against J for different values of the current in Fig. 4(b). Due to the nonexistence of out-of-plane precession at large strengths of J, a discontinuity appears in the frequency while increasing the value of J, as shown in Fig. 4(b). From Fig. 4(b) we can observe that the frequency increases almost linearly with J. The frequency is around 30 GHz and 300 GHz when the value of J is small and large, respectively. The enhancement of the frequency and of the switching time can be essentially attributed to the large value of the bilinear coupling strength J, which causes the system to behave more like a layered antiferromagnet [35][36][37][38][39]. The large value of J in our system is made possible by the RuFe spacer layer recently proposed by Nunn et al. [22]. The current density corresponding to the frequency 299.6 GHz at I = −3.35 mA is 2.96×10⁷ A/cm² for the cross-sectional area A = π × 60 × 60 nm². Also, it is visible that the magnitude of the current can increase the range of J for which the oscillations are possible.
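As a rough consistency check on these scales, a small-amplitude (macrospin, ferromagnetic-resonance-like) estimate of the in-plane precession frequency about the x-axis is f ≈ (γ/2π)√(H_b(H_b + 4πM_s − H_k)), with H_b = J/(M_s d) the bilinear coupling field. This linearized formula is an assumption of this sketch, not the paper's large-amplitude numerical result; it underestimates the frequency at large J, but it reproduces the tens-of-GHz scale at small J and the near-linear growth with J:

```python
import math

GAMMA = 17.64e6                     # gyromagnetic ratio, rad/(Oe s)
MS, HK, D = 1210.0, 3471.0, 2e-7    # emu/cc, Oe, cm

def f_estimate(J):
    """Small-amplitude precession frequency (Hz) about the x-axis for J in mJ/m^2."""
    hb = J / (MS * D)               # bilinear coupling field, Oe
    hd = 4 * math.pi * MS - HK      # resultant demagnetizing field, Oe
    return GAMMA * math.sqrt(hb * (hb + hd)) / (2 * math.pi)

for J in (0.35, 0.75, 3.0, 12.0):
    print(J, round(f_estimate(J) / 1e9, 1))   # frequency in GHz
```

For H_b ≫ 4πM_s − H_k the estimate tends to γH_b/2π, i.e. linear in J, matching the trend of Fig. 4(b).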
Figs. 5(a) and 5(b) summarize the dependence of the frequency on the current and on J for J below and above 2.3 mJ/m², respectively. The white region is the nonoscillatory region. From Figs. 5(a) and 5(b), we can see that the magnitude of the current above which the oscillations occur (|I_min|) increases linearly with J. The value of I_min for J > 0 can be derived as follows. The nature of the stability of an equilibrium state (θ*, φ*) can be identified from the Jacobian matrix of Eqs. (7) and (8),

J = [[∂P/∂θ, ∂P/∂φ], [∂Q/∂θ, ∂Q/∂φ]]_{(θ,φ)=(θ*,φ*)}.   (11)

The equilibrium state (θ*, φ*) will be stable only when the system is dissipative about it, which is the case if and only if the trace of the matrix J is negative,

Tr J < 0.   (12)

We know that when |I| < |I_min| and J > 0 the magnetization settles at S_1, i.e., (π/2, π) in polar coordinates, so the specific set of values (θ*, φ*) = (π/2, π) satisfies Eq.(12). The trace of the matrix corresponding to (π/2, π) is given by

Tr J = −γ [2H_S0/(1 − λ) + α(4πM_s − H_k + 2J/(M_s d))].   (13)

The minimum critical current I_min (for J > 0), below which S_1 is stable, can be derived from Eqs. (12) and (13) as

I_min = −[αeM_sV(1 − λ)/(ħη)] [4πM_s − H_k + 2J/(M_s d)],   (14)

and it has been plotted as open circles in Figs. 5(a) and 5(b); it matches well with the numerical results and confirms their validity. From Figs. 5(a) and 5(b) we can observe that the value of I_max decreases with J at lower strengths of J and increases (almost linearly) with J at higher strengths. Fig. 5(b) shows that the range of current which exhibits oscillations increases with J while J is large. In the case of positive current, the STT always moves the magnetization to align with the negative x-direction. Therefore a positive current does not move the magnetization from the state (−1,0,0), where it already sat before the application of the current, and no precession is exhibited. We can observe in Figs. 5(a) and 5(b) that the magnetization settles into the equilibrium states S_2 and S_3, respectively, when |I| > |I_max|.
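Plugging the Sec. II numbers into a threshold of the form I_min ∝ α e M_s V (1 − λ)[4πM_s − H_k + 2J/(M_s d)]/(ħη) (my reconstruction of the elided expression from the trace condition, and thus an assumption of this sketch) gives currents of order a milliampere that grow linearly with J, consistent with the linear |I_min| boundary seen in Figs. 5. The fragment below does the unit bookkeeping (fields in Oe converted to tesla, M_sV in A m²):

```python
import math

HBAR  = 1.054571817e-34    # J s
E_CHG = 1.602176634e-19    # C
ALPHA, ETA = 0.005, 0.54
LAM   = ETA**2
MS_CGS, HK, D_CM = 1210.0, 3471.0, 2e-7    # emu/cc, Oe, cm

# Free-layer moment Ms*V in SI (A m^2): area pi*60*60 nm^2, thickness 2 nm
AREA = math.pi * 3600e-18                  # m^2
VOL  = AREA * 2e-9                         # m^3
MSV  = (MS_CGS * 1e3) * VOL                # emu/cc -> A/m, times volume

def i_min(J):
    """Threshold |I| (A) for positive bilinear coupling J (mJ/m^2), per the
    reconstructed expression; an assumption of this sketch."""
    h_oe = (4 * math.pi * MS_CGS - HK) + 2 * J / (MS_CGS * D_CM)   # field sum, Oe
    h_t  = h_oe * 1e-4                                             # Oe -> T
    return ALPHA * E_CHG * MSV * (1 - LAM) * h_t / (HBAR * ETA)
```

For J = 1 mJ/m² this evaluates to roughly half a milliampere, the right order for the onset currents in Figs. 3 and 5, and the J-dependence is manifestly linear.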
This indicates a transition from S_2 to S_3 while increasing the strength of the positive bilinear coupling. As given by Eq.(10), the transition occurs at J = 2.8 mJ/m². From Fig. 5(b), we can observe that when the magnitude of the current is above that of I_max, the magnetization settles into the state S_3 from S_1 for the positive bilinear coupling. This indicates the existence of current-induced magnetization switching from the negative to the positive x-direction. Similarly, when the strength of the positive bilinear coupling increases, its corresponding field along the negative x-direction increases, and consequently the magnetization takes a longer time to reverse from the negative to the positive x-direction under the application of a negative current, as confirmed in Fig. 6(d). The above current-induced magnetization switching has applications in spin-torque magnetic random access memory and is much more efficient than field-induced switching. Such field-free switching may help produce magnetic memory devices with low power consumption and greater device density [40,41]. As observed from Figs. 5, when the current I is kept constant and the strength of the positive bilinear coupling J is increased, the magnetization reaches the equilibrium state S_2 via out-of-plane precession (see Fig. 3(a)). When J is increased further, the equilibrium state S_2 of the magnetization becomes (1,0,0) as J → J_c (see Eq.(10)). After the magnetization reaches the state S_3, it continues to settle there without showing any oscillations until the further increase in J is strong enough to move the magnetization away from the state S_3 against the STT due to the incoming spin-polarized electrons. As observed in Fig. 4(b) and Figs. 5, the gap between the offset of the oscillations of m on reaching S_2 and the onset of the oscillations emanating from S_3 increases with the magnitude of the current.
This is due to the fact that the strength of the STT, which tends to keep the magnetization along the positive x-direction, increases with the magnitude of the current, and consequently the strength of the bilinear coupling must be high enough to regain the oscillations from the equilibrium state S_3.

IV. DYNAMICS FOR THE NEGATIVE BILINEAR COUPLING

In the presence of negative bilinear coupling the magnetization is initially oriented at S_3, since the field due to the negative bilinear coupling H_bil acts along the positive x-direction. The magnetization remains settled at S_3 until the current I is increased to I_min. The STT due to a positive current always moves the magnetization to align with the negative x-direction. When I > I_min, the magnetization moves away from S_3, and the system shows continuous precession of the vector m. The frequency of the oscillations of m_x is plotted against low values of the current in Fig. 7(a) and high values of the current in Fig. 7(b) for different values of the negative bilinear coupling (given in mJ/m²). From Fig. 7(a), we can understand that, similar to the case of positive bilinear coupling, the frequency decreases with the current up to a critical value I_c and then increases with the current. As in the previous case, this increment of the frequency after the decrement is attributed to the transition from in-plane to out-of-plane precession. This is verified by plotting the trajectories of the vector m corresponding to I = 1 mA (red) and 2 mA (blue) for J = −0.1 mJ/m² in Fig. 7(c). Since the field due to the negative bilinear coupling acts along the positive x-direction, the magnetization trajectory corresponding to I = 1 mA (red) is folded along the positive x-axis and exhibits in-plane precession. When the current increases to 2 mA (blue), the magnetization transforms from in-plane precession to out-of-plane precession in the northern hemisphere.
However, the out-of-plane precession may also be symmetrically placed in the southern hemisphere. The explanation behind this transition is similar to that discussed in the case of positive bilinear coupling. The out-of-plane precessions corresponding to the currents I = 10 mA (brown), 20 mA (black) and 36 mA (magenta) for J = −0.1 mJ/m² are also plotted in Fig. 7(c). From Fig. 7(a), we can understand that when the strength of the negative bilinear coupling is relatively high, the frequency shows only an increment with the current. This is because at higher values of the negative bilinear coupling the unit magnetization vector m exhibits out-of-plane precession without any transition from in-plane to out-of-plane precession. In Fig. 7(b), the frequency is plotted up to large values of the current for different values of J. The frequency increases with the current and reaches its maximum. For small values of J, the frequency increases to its maximum and then decreases. Fig. 7(b) shows that there is a maximum current I_max above which oscillations are not possible. For currents above I_max, the magnetization settles into S_1 without showing any precession. In Fig. 7(b) we can observe discontinuities in the frequencies near I_max up to J ≈ −0.4 mJ/m², where the system exhibits multistability, i.e., the magnetization may either precess continuously or settle at S_1. This is confirmed in Fig. 7(c) by the precession for I = 36 mA (magenta) and the equilibrium state S_1 for I = 37 mA (black point). In Fig. 7(b) it is observed that the discontinuities in the frequencies disappear above |J| = 0.4 mJ/m². This is because the magnetization does not settle at S_1 below I_max. The magnetization exhibits three different types of equilibrium states for |J| ≳ 0.4 mJ/m² and I > I_max. When the current is increased to just above I_max, the magnetization settles near the poles at S_2. When I is increased further, the unit vector m settles into S_2 or S_1.
If the current is increased further to extremely large values, the magnetization settles into S_1. The range of current in which oscillations are possible (I_max − I_min) also increases (decreases) with |J| when |J| is small (large). In Fig. 7(d), the frequency is plotted against the negative bilinear coupling for different values of the current. The frequency increases almost linearly with the increase in the magnitude of the negative bilinear coupling coefficient. Also, at a given J, the frequency increases with the magnitude of the current. The dependence of the frequency on the negative bilinear coupling and the current is plotted for large values of the current in Fig. 8(a) and small values of the current in Fig. 8(b). The white background corresponds to the nonoscillatory region. From Fig. 8(a) we can observe that the value of I_max increases until J ≈ −0.33 mJ/m² and then decreases abruptly. From the bright green and red regions in Fig. 8(a) we can understand that the frequency can be kept constant while increasing the current at fixed J. Also, it is clearly visible that the tunability range of the frequency by the current drastically reduces beyond J ≈ −0.3 mJ/m². This is different from the case of positive bilinear coupling, where the oscillatory region (|I_max| − |I_min|) can be expanded by increasing J. For currents above I_max, three different regions, corresponding to the equilibrium states S_1, S_2 and S_1/S_2, are identified for m in Fig. 8(a). To see the minute variation of the frequency in the low-current region, Fig. 8(b) is plotted for currents up to 3 mA. Fig. 8(b) confirms the decrement and increment of the frequency with the current when |J| < 1 mJ/m². Also, the frequency at a given current increases with the strength of the negative bilinear coupling. The minimum current I_min for J < 0 is derived similarly to the previous case of positive bilinear coupling.
When I < I_min and J < 0, the state S_3 is stable and the magnetization settles into S_3, corresponding to (π/2, 0) in polar coordinates. The trace of the matrix J corresponding to the state (π/2, 0) is derived as

Tr J = γ [2H_S0/(1 + λ) − α(4πM_s − H_k − 2J/(M_s d))].   (15)

From the condition (12) and Eq. (15), we can derive the minimum current (for J < 0) below which the equilibrium state S_3 is stable as

I_min = [αeM_sV(1 + λ)/(ħη)] [4πM_s − H_k − 2J/(M_s d)].   (16)

Eq.(16) is plotted in Fig. 8(b) as open circles and matches well with the numerical results, confirming their validity. If the current is negative, the STT always moves the magnetization towards the positive x-direction. Therefore a negative current does not move the magnetization from the state S_3, where it already sat before the current was applied, and no precession is exhibited. Similar to the case of positive bilinear coupling, magnetization switching can also be identified for negative bilinear coupling. As discussed with reference to Fig. 8(a), when a current corresponding to the region of the equilibrium state S_1 is applied, the magnetization switches from S_3 to S_1. In Figs. 9(a) and 9(b) the component m_x is plotted to confirm the switching from the positive to the negative x-direction for different values of J when I = 33.5 mA and for different values of I when J = −0.05 mJ/m², respectively. The variation of the switching time against the current and the coupling is plotted in Figs. 9(c) and 9(d), respectively. From Figs. 9(a) and 9(c), we can understand that, similar to the positive bilinear coupling case, the switching time decreases with the increase in the magnitude of the current. Fig. 9(d) confirms that there is no definite relationship between the switching time and the negative bilinear coupling; the variation of the switching time against the magnitude of the coupling is not smooth as in the case of positive bilinear coupling.

V.
CONCLUSION

In conclusion, we have investigated the dynamics of a Co|RuFe|Co STNO using the LLGS equation and identified high-frequency oscillations of the magnetization of the free layer due to the presence of bilinear coupling. The obtained orientations of the magnetization of the free layer with respect to that of the pinned layer in the absence of current match well with the experimental results. A transition of the magnetization precession from in-plane to out-of-plane precession while increasing the current is observed for both the positive and negative bilinear coupling cases. However, the transition does not occur at higher strengths of the bilinear coupling: only an in-plane precession for the positive bilinear coupling and an out-of-plane precession for the negative bilinear coupling are then exhibited. A wide range of frequency tunability by the current is observed for both cases of bilinear coupling. While the frequency is enhanced up to 30 GHz by the negative bilinear coupling, the positive bilinear coupling enhances the frequency up to and above 300 GHz. This high frequency is exhibited by oscillations of the free-layer magnetization about the pinned-layer polarization direction, with the free layer having a high transition temperature for the saturation magnetization. The range of current in which the frequency can be tuned increases with the strength of the positive bilinear coupling corresponding to the in-plane precession. Oscillations are exhibited for the positive (negative) bilinear coupling when the current is applied in the negative (positive) direction. Also, oscillations are possible only when the current lies between I_min and I_max. When |I| < |I_min|, the magnetization settles into (−1,0,0) for J > 0 and (1,0,0) for J < 0. If the strength of the positive bilinear coupling is large, then the magnetization settles into (1,0,0) for all magnitudes of the current above |I_max|.
On the other hand, if the strength is small, it settles near the poles (S_2) when |I| > |I_max|, or into (1,0,0) when |I| ≫ |I_max|. If the bilinear coupling is negative, there are three regions above I_max, corresponding to the equilibrium states S_2, S_1 or S_2, and S_1, depending upon the values of I and J. Magnetization switching induced by the current alone is identified for both signs of the bilinear coupling. It is observed that the switching time reduces with the increase in the magnitude of the current in both cases. We have also derived expressions for the minimum currents required to achieve the oscillations for both the positive and negative bilinear couplings and shown that they match well with the numerically obtained results. Furthermore, of the two interlayer exchange couplings, namely the bilinear and biquadratic couplings, we have shown that the bilinear coupling alone is sufficient for the high-frequency oscillations. We wish to point out that this study has been carried out for the temperature T = 0 K; however, the free layer we have considered has perpendicular magnetic anisotropy, which is normally robust against thermal noise [42]. We believe that our detailed study on bilinear coupling can be helpful in applications related to microwave generation with high-frequency enhancement and to magnetic memory devices.
The Departure of Eta Carinae from Axisymmetry and the Binary Hypothesis

I argue that the large-scale departure from axisymmetry of the Eta Carinae nebula can be explained by the binary-star model of Eta Carinae. The companion diverts the wind blown by the primary star, by accreting from the wind and possibly by blowing its own collimated fast wind (CFW). The effect of these processes depends on the orbital separation, hence on the orbital phase of the eccentric orbit. The variation of the mass outflow from the binary system with the orbital phase leads to a large-scale departure from axisymmetry along the equatorial plane, as is observed in Eta Car. I further speculate that such a companion may have accreted a large fraction of the mass that was expelled in the Great Eruption of 1850 and the Lesser Eruption of 1890. The accretion process was likely to form an accretion disk, with the formation of a CFW, or jets, on the two sides of the accretion disk. The CFW may have played a crucial role in the formation of the two lobes.

INTRODUCTION

One of the open questions regarding the massive star η Carinae and its nebulosity is whether the nucleus is a single star or a binary system. The main argument used by supporters of the binary model is the P = 2020 ± 5 days = 5.5 yr periodicity of the spectroscopic event, the fading of high-excitation lines (Damineli 1996; Damineli et al. 2000, and references therein). There are other, though weaker, supporting observations in favor of a massive companion. Ishibashi et al. (1999) find that the X-ray emission, which follows the 5.5 yr periodic variation, can in principle be explained by colliding winds, where the two winds are blown by the two components of a binary system. However, not everyone supports the binary model.
Some authors attribute the 5.5 yr periodic variation to a variation in the ionizing hard-UV flux reaching the equatorial gas, and argue that presently there is no advantage to a binary model over a stellar inherent-instability model (e.g., Stothers 2000) to account for the periodicity. In particular, they argue against the suggestion raised by Damineli et al. (2000) that the ionizing flux is emitted by the companion, and that the periodic variation is caused by absorption of the companion's radiation by the primary's stellar wind during periastron passages. Others claim that it is not yet clear whether η Car is a binary system, but that if it is, the parameters of the companion and orbit are not those found by Damineli (2000), and presently remain unknown (some new parameters are suggested by Corcoran et al. 2001). In the present paper I argue that the departure of the nebula around η Car from axisymmetry can be explained by the central close-binary model. By departure from axisymmetry I refer to a large-scale departure, and do not consider small blobs, filaments, and other small-scale features. In that respect, a binary companion in an eccentric orbit offers an answer to question 14 raised by Davidson & Humphreys (1997, hereafter DH97): "Why was the eruption azimuthally asymmetric? . . ." (they refer also to the small blobs, but here I refer only to the large-scale asymmetry). In §2 I describe the departure from axisymmetry of the nebula around η Car. I then demonstrate that the parameters suggested for the binary system can in principle lead to the formation of a nebula possessing a large-scale departure from axisymmetry. In §3 I discuss some implications of the binary model for the formation of the bipolar structure of the Homunculus and the dense equatorial flow. I summarize in §4.

DEPARTURE FROM AXISYMMETRY

Although the Homunculus seems quite axisymmetric, it is not perfectly so. Morse et al. (1998) argue that "the lobes exhibit a slight banana-shaped symmetry . . .".
The "banana-shaped" structure is more prominent in the velocity maps presented by Allen & Hillier (1993), which show the structure in planes parallel to the line of sight. These maps show that the departure of the Homunculus from axisymmetry is mainly along the line of sight, and hence hard to detect in simple imaging of η Car. Many other structural features show a highly prominent departure from axisymmetry in the plane of the sky. Figure 3 of Morse et al. (1998; see their Erratum for high-quality images) shows a clear departure from axisymmetry on the outskirts, ∼10″ from the nucleus. On the south-west side of the equatorial plane there is a dense arc of gas, termed the S-Ridge, while on the north-east side there is no such arc, but rather the "jet" and the "NN bow". It is clear that this departure from axisymmetry along the equatorial plane has a large-scale structure, and cannot be attributed to instabilities in the flow or in the mass-loss process. The same sense of asymmetry is seen in the radial-velocity map presented by Weis, Duschl & Bomans (2001): on the south-west side the measured radial velocities are higher than those on the north-east side. A large-scale departure from axisymmetry is also clearly seen in recent X-ray images (Weis et al. 2001; Seward et al. 2001). Other departures from axisymmetry along the equatorial plane are seen much closer to the nucleus. The 10µm image of Morris et al. (1999) shows that the peak on the north-east side of the equatorial plane is much stronger than the emission on the south-west side. The same sense of departure from axisymmetry is seen at shorter IR bands (e.g., fig. 5 of Smith, Gehrz & Krautter 1998 and fig. 1 of Smith & Gehrz 2000). There are indications of displacement from axisymmetry in the equatorial plane along the south-east to north-west direction as well; e.g., figure 3 of shows the two sides of the equatorial plane to be bent toward the south-east.
Soker, Rappaport & Harpaz (1998; hereafter SRH) demonstrate via analytical calculations that when a companion is close enough to influence the mass-loss process from an evolved star and/or from the binary system as a whole, and the eccentricity is substantial, the nebula around the mass-losing star will acquire a large-scale departure from pure axisymmetry. An essential ingredient is that the mass-loss rate and/or geometry varies systematically with orbital phase, due to the periodic change in the orbital separation. SRH examine the displacement of the central star from the center of the nebula, e.g., as in the planetary nebula Hu 2-9 (Miranda et al. 2000), but the departure from axisymmetry can manifest itself in other ways, e.g., one side will contain a denser section (Soker & Rappaport 2001). SRH consider two effects of the companion on the mass-loss process: a tidal enhancement of the stellar wind near periastron, and a cessation of the stellar wind when the Roche lobe of a mass-losing asymptotic giant branch (AGB) star encroaches on its extended atmosphere near periastron passage. With regard to the first mechanism, in a recent paper Corcoran et al. (2001) argue, based on the X-ray light curve, that the mass-loss rate from η Car increases by a factor of 20 following periastron passage. Soker & Rappaport (2001) consider other processes by which a companion can influence the mass-loss process from the system: the direct gravitational influence on the wind (Mastrodemos & Morris 1999), and the formation of a collimated fast wind (CFW) by the companion (Morris 1987; Soker & Rappaport 2000, hereafter SR00). In the latter process the companion is assumed to accrete from the mass-losing star, to form an accretion disk, and to blow a CFW. The interaction between the CFW, if strong enough, and the AGB wind will form a bipolar planetary nebula (Morris 1987; SR00).
Another process relevant to the binary system proposed for η Car is the interaction of the stellar wind blown by the companion with the wind blown by the primary. I now show that a companion star to η Car can strongly modulate the mass-loss process from the binary system along its orbital motion, naturally leading to the formation of a nebula which possesses a large-scale departure from axisymmetry. I scale the binary parameters by values which were quoted in recent years (e.g., Ishibashi et al. 1999; Damineli et al. 2000; Corcoran et al. 2001): for the mass of the primary (mass-losing) star and the eccentricity I take M_1 = 80 M_⊙ and e = 0.8, respectively. For the present primary's wind I take a mass-loss rate of Ṁ_1 = 3 × 10^−4 M_⊙ yr^−1 and a velocity of v_1 = 500 km s^−1. For the companion mass I take M_2 = 30 M_⊙, and for the companion's wind I take a mass-loss rate of Ṁ_2 = 3 × 10^−6 M_⊙ yr^−1 and a velocity of v_2 = 2000 km s^−1. The semimajor axis is a = 15 AU. The proposed mechanism for the departure from axisymmetry, including the calculations below, is applicable to a more massive model of η Car, as suggested by DH97, who argue that the initial and present masses of η Car are 160 M_⊙ and 120 M_⊙, respectively. The accretion radius of the companion, for accretion from the primary's wind, is

R_a = 2GM_2/v_1^2 ≃ 0.2 (M_2/30 M_⊙)(v_1/500 km s^−1)^−2 AU. (1)

The distance D_2 of the stagnation point of the colliding winds from the companion, along the line between the stars, is given by equating the ram pressures ρv^2 of the two winds. For spherically symmetric winds

D_2 = r(θ) β/(1 + β) ≃ rβ,

where r is the orbital separation, θ is the angular distance along the orbit (θ = 0 at periastron), and β ≡ [(Ṁ_2 v_2)/(Ṁ_1 v_1)]^1/2. In the second equality I assumed β ≪ 1. The stagnation point should be compared with the accretion radius. For the winds' parameters used above β ≃ 0.2, and at periastron r = a(1 − e) = 3 AU, so that D_2 = 0.5 AU. The accretion radius is smaller than D_2, hence no accretion will take place.
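The periastron numbers above can be checked directly. The following is a minimal numerical sketch (the constants and variable names are mine, not the paper's) evaluating the accretion radius, the wind momentum-flux ratio β, and the stagnation distance D_2 for the adopted binary parameters:

```python
# Check of the colliding-winds geometry at periastron for the adopted
# parameters: M2 = 30 Msun, v1 = 500 km/s, Mdot1 = 3e-4, Mdot2 = 3e-6 Msun/yr.
import math

G = 6.674e-8       # cm^3 g^-1 s^-2
MSUN = 1.989e33    # g
AU = 1.496e13      # cm
KMS = 1.0e5        # cm s^-1

M2 = 30 * MSUN
v1 = 500 * KMS
v2 = 2000 * KMS
Mdot1, Mdot2 = 3e-4, 3e-6          # Msun/yr; only the ratio enters beta

# Accretion radius (eq. 1): R_a = 2 G M2 / v1^2
Ra = 2 * G * M2 / v1**2 / AU        # in AU

# Momentum-flux ratio and stagnation distance D_2 = r beta / (1 + beta)
beta = math.sqrt((Mdot2 * v2) / (Mdot1 * v1))
r_peri = 15 * (1 - 0.8)             # a(1 - e) in AU
D2 = r_peri * beta / (1 + beta)

print(f"R_a  = {Ra:.2f} AU")        # ~0.2 AU
print(f"beta = {beta:.2f}")         # ~0.2
print(f"D_2  = {D2:.2f} AU")        # ~0.5 AU > R_a: no accretion
```

The stagnation point indeed lies outside the accretion radius at periastron, reproducing the conclusion in the text.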
Also, the accretion radius is quite small compared with the orbital separation even at periastron, hence the companion will not influence the mass-loss process much. If, however, during a mass-loss episode the primary's wind velocity decreases to v_1 ≲ 300 km s^−1, as may be suggested by some condensations along the orbital plane of η Car, the accretion radius will be R_a ≳ 0.6 AU, and the companion will deflect a substantial portion of the primary's wind at periastron, since R_a ≳ 0.2r. For the parameters chosen above and v_1 = 300 km s^−1, I find that at periastron D_2 = 0.65 AU ∼ R_a, hence some accretion may occur, but only near periastron. At other orbital phases D_2 > R_a, and no accretion onto the companion will take place. If in addition to the slower equatorial flow the mass-loss rate is much higher, Ṁ_1 = 0.1 M_⊙ yr^−1 as suggested for the eruption of 1850 (the Great Eruption) that formed the lobes (DH97), then β ≃ 0.01 and even at apastron D_2 ≃ rβ = 0.4 AU < R_a, hence significant accretion will occur during the entire orbital motion. For a mass-loss rate of Ṁ_1 = 0.1 M_⊙ yr^−1 and a wind velocity of v_1 = 500 km s^−1, the accretion rate by the companion at periastron, without wind disruption, is Ṁ_acc ≃ Ṁ_1 R_a^2/(4r^2) ≃ 10^−4 M_⊙ yr^−1. At other orbital phases the accretion rate is lower, and may cease near apastron. This may be enough for the companion to blow a CFW (SR00), with a varying strength along its orbit. However, it seems (see next section) that during the Great Eruption of 1850 and the Lesser Eruption of 1890 the equatorial mass flux was higher than the average and the velocity much lower, making the accretion rate by the companion much higher, and the proposed mechanism for causing the departure from axisymmetry much more efficient.
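The accretion-rate estimate Ṁ_acc ≃ Ṁ_1 R_a^2/(4r^2) and the scaling of the accretion radius with wind speed can be verified numerically; the sketch below (variable names are mine) uses the Great-Eruption mass-loss rate quoted in the text:

```python
# Order-of-magnitude check: accretion rate at periastron for Mdot1 = 0.1
# Msun/yr and v1 = 500 km/s, plus the accretion radius for a slower wind.
import math

G, MSUN, AU, KMS = 6.674e-8, 1.989e33, 1.496e13, 1.0e5

M2 = 30 * MSUN
v1 = 500 * KMS
Ra = 2 * G * M2 / v1**2 / AU        # accretion radius in AU (~0.21)
r = 3.0                              # periastron separation, AU
Mdot1 = 0.1                          # Msun/yr, eruption value

Mdot_acc = Mdot1 * Ra**2 / (4 * r**2)
print(f"Mdot_acc ~ {Mdot_acc:.1e} Msun/yr")     # ~1e-4

# With a slower wind, v1 = 300 km/s, R_a grows as v1^-2:
Ra_slow = Ra * (500 / 300)**2
print(f"R_a(300 km/s) = {Ra_slow:.2f} AU")      # ~0.6 AU, i.e. ~0.2 r
```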
For an illustration, I assume that near periastron (θ = 0, r_p = 3 AU) the accretion rate from the equatorial flow is very high, preventing any mass loss from the system when the orbital separation is r < 4 AU. This is case 2 of the mass-loss process considered by SRH. For a = 15 AU and e = 0.8, this corresponds to no wind being blown from the binary system during orbital phases of |θ| ≲ 65°. From the left panel of figure 2 of SRH I find that for these parameters <ẏ>/(Ω_K a) = 0.1, where <ẏ> is the average speed of the outflowing matter in the equatorial plane (SRH eq. 5), and Ω_K is the Kepler frequency. Here Ω_K = 2π/5.5 yr^−1 and Ω_K a = 81 km s^−1, from which I find <ẏ> = 8 km s^−1. The offset of the nucleus from the center of the equatorial flow is (SRH eq. 7) δ = <ẏ>/v_1, which for a slow equatorial flow of v_1 = 50 km s^−1 gives δ ≃ 0.15. I argue that this explains the departure from axisymmetry of the equatorial ejecta near the nucleus of η Car. It should be noted that for a slower flow near the binary system, the accretion rate will be higher near apastron rather than near periastron passages (see next section). As can be seen in the right panel of figure 2 of SRH, this will lead to a much larger departure from axisymmetry. A CFW blown by the accreting companion may also increase the departure from axisymmetry. Finally, we note the following mechanism to cause a departure from axisymmetry, which can operate even for a circular orbit. If there is an eruption which lasts for a time much shorter than the orbital period, the wind will be blown while the mass-losing star is moving in a specific direction along its orbital motion.
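The offset estimate δ = <ẏ>/v_1 follows from simple arithmetic; a numerical sketch (the SRH value <ẏ>/(Ω_K a) = 0.1 is taken from the text, variable names are mine):

```python
# Offset of the nucleus from the center of the equatorial flow, following
# SRH, for a = 15 AU, P = 5.5 yr and a slow equatorial flow of 50 km/s.
import math

AU = 1.496e13     # cm
YR = 3.156e7      # s

a = 15 * AU
P = 5.5 * YR

Omega_K_a = 2 * math.pi * a / P / 1e5   # Kepler speed at a, in km/s
ydot = 0.1 * Omega_K_a                  # <ydot>/(Omega_K a) = 0.1 (SRH fig. 2)
delta = ydot / 50.0                     # delta = <ydot>/v1, v1 = 50 km/s

print(f"Omega_K a = {Omega_K_a:.0f} km/s")   # ~81
print(f"<ydot>    = {ydot:.1f} km/s")        # ~8
print(f"delta     = {delta:.2f}")            # ~0.16
```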
This will cause the center of the structure formed by this impulsive mass loss to be displaced from the central binary system by d = (v_o/v_1) R_n, where v_o is the orbital velocity of the mass-losing star around the center of mass at the moment of mass loss, v_1 is the expansion velocity of the ejected mass, and R_n is the distance of the ejecta from the binary system (increasing with time). For the binary system considered here, the primary orbital velocity changes from 27 km s^−1 at apastron to 242 km s^−1 at periastron. For v_1 = 500 km s^−1 we find for this pure impulsive mass-loss episode that d/R_n can be in the range of 0.05−0.5. Of course, we do not expect such a mass-loss event, though it is possible that the mass-loss rate increased substantially during a time shorter than the orbital period, say 2 years. The ejected mass will then collide with previously ejected mass and with mass blown later, to form a more complicated structure. The overall departure will be less than for a pure impulsive mass-loss episode, d/R_n < v_o/v_1, but it may still be noticeable if the episode occurred not too close to apastron passage, if it lasted for a short time, and if the increase in the mass-loss rate during the impulsive mass-loss episode was significant. The conclusion from this section is that for the typical parameters used by the binary-model proponents, the secondary can have an influence on the mass-loss process which varies with orbital phase, in particular if the primary's wind velocity is v_1 ≲ 300 km s^−1. This may naturally explain the large-scale departure from axisymmetry observed in some structural features of η Car. Although the arguments presented here suggest that a binary companion can in principle explain the departure from axisymmetry, I can't predict the exact shape and degree of the departure from axisymmetry. For this, 3D gasdynamical numerical simulations are required.
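The quoted 27 and 242 km s^−1 follow from the vis-viva relation for the adopted orbit; the sketch below (names are mine) checks these speeds and the resulting d/R_n range:

```python
# Vis-viva check of the orbital speeds for a = 15 AU, e = 0.8 and
# M1 + M2 = 110 Msun, and the displacement ratio d/R_n = v_o/v_1.
import math

G, MSUN, AU = 6.674e-8, 1.989e33, 1.496e13
Mtot = (80 + 30) * MSUN
a, e = 15 * AU, 0.8

def v_orb(r):
    """Orbital speed from vis-viva, v^2 = G M (2/r - 1/a), in km/s."""
    return math.sqrt(G * Mtot * (2 / r - 1 / a)) / 1e5

v_peri = v_orb(a * (1 - e))
v_apo = v_orb(a * (1 + e))
print(f"v(periastron) = {v_peri:.0f} km/s")   # ~242
print(f"v(apastron)   = {v_apo:.0f} km/s")    # ~27

v1 = 500.0                                    # wind expansion speed, km/s
print(f"d/R_n range: {v_apo/v1:.2f} - {v_peri/v1:.2f}")   # ~0.05 - 0.5
```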
IMPLICATIONS OF THE BINARY MODEL

In the previous section I demonstrated that a large accretion rate was likely to have occurred during the eruption of 1850, and possibly during that of 1890. What is the effect of such an event? The answer depends strongly on the accretion rate, which itself is very sensitive to the relative velocity between the accreting body and the wind. The wind could be moving very slowly at a distance of several AU in an "extended envelope", which was formed during the eruption, with the "photosphere" almost as big as the orbit of Saturn (DH97). I now speculate that the accretion rate by the companion was high, and that the companion blew a collimated fast wind (CFW) which led to the formation of the bipolar shape. Bipolar symbiotic nebulae, similar in many properties to the bipolar shape of η Car, are known to result from binary interaction (e.g., Corradi et al. 2000), as is the common view regarding the formation of bipolar planetary nebulae (SR00). First let me point to a difficulty with the energy budget in models which assume a spherical mass ejection in the 1850 eruption (e.g., Frank, Balick, & Davidson 1995). For a total ejected mass of 2.5 M_⊙ and an initial velocity of ∼650 km s^−1, which is the current expansion velocity of the lobes, the total kinetic energy of the ejected gas is E_ks ≃ 10^49 erg. This is ∼1/3 of the total energy radiated during the Great Eruption of η Car, E_r = 3 × 10^49 erg (DH97). Such a high efficiency of conversion of radiation to kinetic energy can occur in an explosion. However, the duration of the Great Eruption was much longer than the dynamical time scale (see below), hence it wasn't a regular explosion. Shaviv (2000) proposes a model to explain the super-Eddington luminosity during the Great Eruption, where some of the radiation escapes while exerting a smaller average force on the matter. This seems to reduce the efficiency of energy transfer.
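The energy-budget figures are quick to verify; a minimal check of the spherical-ejection kinetic energy against the radiated energy (constant names are mine):

```python
# Kinetic vs. radiated energy for a spherical 1850 ejection:
# 2.5 Msun at 650 km/s compared with E_r = 3e49 erg (DH97).
MSUN = 1.989e33          # g

M_ej = 2.5 * MSUN        # ejected mass
v = 650e5                # cm/s, current lobe expansion speed

E_ks = 0.5 * M_ej * v**2
E_r = 3e49               # erg radiated in the Great Eruption
print(f"E_ks = {E_ks:.1e} erg, E_ks/E_r = {E_ks/E_r:.2f}")   # ~1e49, ~1/3
```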
The sum of the absolute values of the momentum along all directions is p_s = 3 × 10^41 g cm s^−1, whereas the total momentum that can be supplied by the radiation during the eruption is p_r = ζE_r/c = 10^39 ζ g cm s^−1, where ζ is the average number of times a photon is scattered by the ejected gas. To account for the wind's momentum, on average each photon must be scattered ∼300 times, which again implies a very efficient acceleration mechanism. This can be compared with another intensive mass-loss process, in AGB stars. Progenitors of most planetary nebulae terminate the AGB phase with an intensive mass-loss phase, the "superwind", which lasts ∼1−2 × 10^3 yr. This is several hundred times the Keplerian orbital time along the AGB stellar equator; e.g., for M_* = 0.6 M_⊙ and R_* = 2 AU the Keplerian orbital time is 3.7 years. The same ratio holds for the Great Eruption of η Car, which lasted 20 years, ∼500 times the Keplerian orbital time on the surface of an 80 M_⊙ star with a radius of R_* = 0.5 AU. From observations it is found (e.g., Knapp 1986) that in most cases the momentum flux in the superwind is ≲3 times the momentum flux in the stellar radiation, i.e., ζ ≲ 3. In the minority of cases with a higher momentum flux, dynamical effects due to a binary companion probably play some role. But in all cases the total kinetic energy in the superwind is much smaller, by a factor of >100, than the total energy radiated in the same period of time. The present momentum flux in the wind of η Car is only 20% of the radiation momentum flux (White et al. 1994), and therefore the wind can be explained by radiation pressure. From this discussion it seems that it is possible to explain the kinetic energy of the Great Eruption with a single-star model (e.g., Shaviv 2000), but a very efficient acceleration mechanism is required. As I suggest below, an accreting binary companion can supply some of the kinetic energy.
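Both the required number of photon scatterings and the Keplerian time scales quoted above can be reproduced directly (a sketch; names are mine):

```python
# Momentum budget: zeta ~ p_s / (E_r/c), and Keplerian orbital times at the
# stellar equator for the AGB example (0.6 Msun, 2 AU) and eta Car (80 Msun,
# 0.5 AU).
import math

G, MSUN, AU, C = 6.674e-8, 1.989e33, 1.496e13, 3e10

p_s = 2.5 * MSUN * 650e5            # wind momentum, g cm/s
p_r1 = 3e49 / C                     # single-scattering radiation momentum
print(f"p_s = {p_s:.1e} g cm/s, zeta ~ {p_s/p_r1:.0f}")   # ~3e41, ~300

def t_kep(M, R):
    """Keplerian orbital period 2 pi sqrt(R^3/GM), in years."""
    return 2 * math.pi * math.sqrt(R**3 / (G * M)) / 3.156e7

t_agb = t_kep(0.6 * MSUN, 2 * AU)
t_eta = t_kep(80 * MSUN, 0.5 * AU)
print(f"AGB:     {t_agb:.1f} yr")                 # ~3.7
print(f"eta Car: 20 yr / {t_eta:.3f} yr ~ {20/t_eta:.0f}")   # ~500
```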
The present kinetic energy of the lobes is much lower than that required in a spherical eruption. Assume for simplicity spherical lobes, i.e., a shape r = 2r_0 sin φ, where φ is the angle from the equatorial plane, r is the distance from the nucleus, and 2r_0 ≃ 3 × 10^17 cm is the diameter of each lobe (DH97). I assume for simplicity that most of the mass is on the outer boundary of the lobes, and that each mass element has been expanding at a constant velocity since the eruption, so that the velocity as a function of the angle from the equatorial plane is v_w(φ) = 650 sin φ km s^−1. For the distribution of mass with the angle φ I take a simple form for the mass density per unit solid angle, defined from the center of η Car (not from the centers of each of the spherical lobes),

m(φ) = m_0 (1 − K sin^γ φ),

where m_0, K and γ are constants. For a constant mass per unit solid angle K = 0, whereas for a concentration of mass toward the equatorial plane 0 < K ≤ 1. We can integrate for the kinetic energy, E_kns = ∫ (m v_w^2/2) 2π cos φ dφ, and for the total mass in the lobes, M_ns = ∫ 2π m cos φ dφ. Evaluating the integrals gives the total kinetic energy in the lobes under these assumptions as

E_kns = (M_ns v_po^2/2) [1/3 − K/(γ + 3)]/[1 − K/(γ + 1)],

where v_po is the wind velocity along the polar directions. For a constant mass per unit solid angle K = 0, and the kinetic energy of the ejected mass is a third of that in a spherical ejection, E_kns = (1/3)E_ks. The case where 2/3, instead of 1/2, of the total mass is within |φ| < 30° and γ = 1 has K = 0.8. In that case the kinetic energy of the ejected mass is E_kns = 0.22 E_ks. Since more mass is actually concentrated toward the equator (DH97), the kinetic energy is even lower, and we can safely take here E_kns ≃ 0.1−0.2 E_ks ≃ 1−2 × 10^48 erg. This means that models in which the fast ejecta is mainly along the polar directions (e.g., Frank, Ryu & Davidson 1998) require an order of magnitude less energy than models with spherical ejection.
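The quoted energy fractions, 1/3 for K = 0 and 0.22 for K = 0.8 with γ = 1, can be recovered by direct numerical integration of the mass and kinetic-energy integrals over φ (a sketch with an assumed mass profile of the form m_0(1 − K sin^γ φ); function names are mine):

```python
# E_kns/E_ks for lobes with v_w = v_po sin(phi) and mass density per unit
# solid angle m(phi) = m0 (1 - K sin^gamma(phi)), by midpoint integration.
import math

def energy_fraction(K, gamma, n=20000):
    """Kinetic energy relative to a spherical ejection at v_po."""
    num = den = 0.0
    dphi = (math.pi / 2) / n
    for i in range(n):
        phi = (i + 0.5) * dphi
        m = 1.0 - K * math.sin(phi)**gamma
        num += m * math.sin(phi)**2 * math.cos(phi) * dphi   # kinetic energy
        den += m * math.cos(phi) * dphi                      # total mass
    return num / den        # in units of E_ks = (1/2) M_ns v_po^2

print(f"K=0:            {energy_fraction(0.0, 1):.3f}")   # 1/3
print(f"K=0.8, gamma=1: {energy_fraction(0.8, 1):.3f}")   # ~0.22
```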
In the binary model the CFW (or jets) along the polar directions is (are) blown by the companion (SR00). The CFW can form a hot bubble inside each lobe, and efficiently accelerate the slowly moving gas, ejected by the mass-losing star, to higher velocities (SR00). I now examine the feasibility of such a scenario. A rotating star close to the Eddington luminosity limit will form a slow equatorial flow (Maeder & Meynet 2000), with an expansion velocity of the order of the rotation velocity, which will be slower than the orbital velocity along most of the orbit of the companion. Zethson et al. (1999) have detected slowly expanding equatorial gas, which they claim originated hundreds of years before the Great Eruption of 1850, although fast equatorial ejecta exist as well (Morse et al. 1998). Substituting the Keplerian velocity in the expression for the accretion radius of the companion (eq. 1) gives, for the ratio of the accretion radius to the orbital separation r = a(1 − e^2)/(1 + e cos θ),

R_a/r = 2M_2/[(M_1 + M_2)(2 − r/a)].

For the parameters used in the previous section, M_1 = 80 M_⊙, M_2 = 30 M_⊙, and e = 0.8, I find R_a/r = 0.3 at periastron (cos θ = 1), and R_a > r at all phases where cos θ < −0.9. This large R_a/r ratio means that the companion in such a system accretes a large fraction of the slowly expanding equatorial flow. The density at the location of the companion can be much higher than that expected for a pure wind, especially near periastron, since some of the material in the extended envelope may fall back onto the primary star. This means that the total accreted mass may be much larger than that expelled during the eruption. I scale the mass blown in the CFW with M_c = 0.25 M_⊙, which is the case if the accreted mass is equal to the ejected mass of 2.5 M_⊙ and a fraction 0.1 of it is blown into the CFW (or jets), and the CFW speed is scaled by the escape velocity from the companion, v_c ≃ 2000 km s^−1.
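Assuming the relative velocity is the vis-viva orbital speed, the ratio R_a/r takes the form 2M_2/[(M_1 + M_2)(2 − r/a)]; a sketch evaluating it along the orbit (the closed form is my reconstruction, but it reproduces R_a/r ≃ 0.3 at periastron and R_a > r for cos θ ≲ −0.9 as quoted in the text):

```python
# R_a/r along the orbit when the relative speed is the Keplerian (vis-viva)
# orbital speed: R_a/r = 2 M2 / [(M1 + M2)(2 - r/a)], r/a from the ellipse.
import math

M1, M2, e = 80.0, 30.0, 0.8     # solar masses; eccentricity

def Ra_over_r(theta):
    r_over_a = (1 - e**2) / (1 + e * math.cos(theta))
    return 2 * M2 / ((M1 + M2) * (2 - r_over_a))

print(f"periastron: {Ra_over_r(0.0):.2f}")   # ~0.3
# R_a exceeds r only close to apastron:
for ct in (-0.85, -0.90, -0.95):
    print(f"cos(theta) = {ct}: R_a/r = {Ra_over_r(math.acos(ct)):.2f}")
```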
The total kinetic energy of the CFW is E_c = 10^49 (M_c/0.25 M_⊙)(v_c/2000 km s^−1)^2 erg. This is more than the energy required in the non-spherical mass ejection to form the lobes, as mentioned above. That a fast mass loss along the polar directions can form the desired morphology was demonstrated by the numerical simulations of Frank et al. (1998), although their idea was of a single star, whereas in the present scenario the companion blows the CFW simultaneously with the eruption of the primary star (SR00). This scenario, like that of Frank et al. (1998), avoids the problems of the interacting-winds model, some of which are summarized by Dwarkadas & Balick (1998). One of the problems mentioned by Dwarkadas & Balick is that the massive disk required in the interacting-winds model to confine the spherical ejection is not found. The claim by Morris et al. (1999) for the presence of a massive, ∼15 M_⊙, torus of cold gas around the nucleus of η Car is disputed by , who claim that a correct model gives an equatorial gas distribution which is "incapable of generating the pinched waist". Dwarkadas & Balick (1999) proposed instead that the spherically ejected mass (during the eruption) interacts with a very dense torus at several AU around the nucleus. The main problem in this scenario, in addition to the energy and momentum budget problems mentioned above, is the formation of a dense torus around and close to the nucleus. A massive companion can play a significant role here (Mastrodemos & Morris 1999). Finally, we note the high-velocity gas ejected in the equatorial plane. The flow in the equatorial plane is very complicated, with both slowly expanding gas, v ∼ 50 km s^−1 (e.g., Zethson et al. 1999), and fast moving features, with velocities of ∼300 km s^−1 (e.g., ). A large fraction of the equatorial gas was ejected in the Lesser Eruption of 1890, rather than during the Great Eruption of 1850.
The equatorial ejecta possesses a departure from axisymmetry, as noted in the previous section. As noted by SR00, when the momentum flux of the CFW (or jets) blown by the companion is much smaller than the momentum flux of the primary's wind, the CFW will be strongly bent, so that it will flow close to the equatorial plane. Based on that, I suggest that some of the fast moving gas in the equatorial plane was ejected by the companion at high speed, but because of the low momentum flux of the CFW relative to the primary's wind it was bent toward the equatorial plane. The ratio of the momentum fluxes of the primary's wind and the CFW depends mainly on the accretion rate and on the concentration of the primary's wind toward the equator. It is possible that during the Great Eruption of 1850 the conditions were favorable for the formation of a very strong CFW (e.g., a slowly expanding wind concentrated toward the equatorial plane), which formed the two lobes, whereas during the Lesser Eruption of 1890 only a weak CFW was formed, but still strong enough to form fast moving gas in the equatorial plane.

SUMMARY

In the present paper I argue that the large-scale departure from axisymmetry of the η Carinae nebula can be explained by the binary nucleus model. Using binary parameters as quoted by the binary supporters, I found that the companion was likely to substantially influence the mass-loss process from the binary system. The degree to which such a companion diverts the outflow depends on the orbital separation, hence on the orbital phase in the eccentric orbit. The modulation of the mass-loss process with the orbital phase may lead to a detectable departure from axisymmetry (SRH), as is observed in η Car. I speculated that if such a companion exists, it may have accreted a large fraction of the mass that was expelled in the Great Eruption of 1850 and the Lesser Eruption of 1890.
This requires that the matter in the equatorial plane was moving very slowly, at ∼50 km s^−1, during these eruptions. The accretion process was likely to form an accretion disk, with a collimated fast wind (CFW), or jets, forming on the two sides of the accretion disk. I showed that a CFW of ∼0.25 M_⊙ blown at 2000 km s^−1, which could be formed if the accreted mass was equal to the mass blown into the lobes in the Great Eruption, 2.5 M_⊙, with ∼10% of it blown into the CFW, can account for the total kinetic energy of the lobes of η Car. The CFW, therefore, was likely to be a significant factor in shaping the lobes of η Car. If the CFW blown by the companion is weak, i.e., its momentum flux is small, it will be sharply bent by the slow wind blown by the primary star. The CFW will then flow parallel to the equatorial plane, leading to fast outflowing material near the equatorial plane. I therefore speculated that during the Lesser Eruption of 1890 the CFW was indeed weak, leading to the formation of the fast equatorial outflow which was expelled then.
Identification and prediction of difficult-to-treat rheumatoid arthritis patients in structured and unstructured routine care data: results from a hackathon

Background: The new concept of difficult-to-treat rheumatoid arthritis (D2T RA) refers to RA patients who remain symptomatic after several lines of treatment, resulting in a high patient and economic burden. During a hackathon, we aimed to identify and predict D2T RA patients in structured and unstructured routine care data.

Methods: Routine care data of 1873 RA patients were extracted from the Utrecht Patient Oriented Database. Data from a previous cross-sectional study, in which 152 RA patients were clinically classified as either D2T or non-D2T, served as a validation set. Machine learning techniques, text mining, and feature importance analyses were performed to identify and predict D2T RA patients based on structured and unstructured routine care data.

Results: We identified 123 potentially new D2T RA patients by applying the D2T RA definition in structured and unstructured routine care data. Additionally, we developed a D2T RA identification model derived from a feature importance analysis of all available structured data (AUC-ROC 0.88 (95% CI 0.82–0.94)), and we demonstrated the potential of longitudinal hematological data to differentiate D2T from non-D2T RA patients using supervised dimension reduction. Lastly, using data up to the time of starting the first biological treatment, we predicted future development of D2T RA (AUC-ROC 0.73 (95% CI 0.71–0.75)).

Conclusions: During this hackathon, we have demonstrated the potential of different techniques for the identification and prediction of D2T RA patients in structured as well as unstructured routine care data. The results are promising and should be optimized and validated in future research.

Supplementary Information: The online version contains supplementary material available at 10.1186/s13075-021-02560-5.
Background

The treatment for rheumatoid arthritis (RA) has substantially improved over the past decades, enabling many patients to reach and maintain a state of low disease activity or even remission [1]. However, even when following current management recommendations, there is still a subgroup of patients that remains symptomatic after treatment with several (biological and/or targeted synthetic) disease-modifying antirheumatic drugs ((b/ts)DMARDs) [1–3]. These patients are referred to as having "difficult-to-treat (D2T)" RA. Depending on the definition used, this disease state is estimated to affect 5 to 20% of all RA patients [2–4]. D2T RA is likely the subgroup of RA patients with the highest medical need [5–7]. Identifying these patients and optimizing their treatment could thus have great clinical impact for individual patients as well as for the sustainability of the healthcare system as a whole. The importance of focusing on this subgroup of RA patients was previously acknowledged by an international survey among rheumatologists [5]. This survey indicated that several topics that are considered important for the management of D2T RA are not addressed in current RA management recommendations, reflecting an unmet clinical need. Additionally, results showed a wide variety in the existing concepts of D2T RA. Consequently, a European League Against Rheumatism (EULAR, from 2021 European Alliance of Associations for Rheumatology) Task Force recently defined D2T RA (Supplemental table 1) [8], and specific management recommendations for this patient population are under development [8–10]. In the process of developing these recommendations, it became clear that evidence regarding this patient population is scarce and that further research is urgently needed [9,10].
This is, however, complicated by the difficulty of identifying D2T RA patients both retrospectively in cohorts and prospectively in clinical practice, due to the multidimensionality of the D2T RA definition and the presumed fluctuation of the disease state over time. Additionally, D2T RA comprises a heterogeneous group of patients with potential differences in contributing factors and underlying pathology [6,8,11]. Identifying D2T RA patients in routine care data enhances research opportunities, as it allows the development of RA into D2T RA and the progression of the D2T RA state over time to be studied retrospectively. Clear identification of these patients in retrospective data could also enable the development of models that can predict the development of D2T RA early in the disease course, ultimately aiding in preventing D2T RA by a timely adjustment of therapy. We previously conducted a cross-sectional study at the department of Rheumatology & Clinical Immunology of the University Medical Center Utrecht (UMC Utrecht), the Netherlands, in which RA patients meeting the D2T RA definition [8] and a control group of RA patients not fulfilling all three criteria of the definition were enrolled [6]. This resulted in a valuable dataset with elaborate information on clinically classified D2T and non-D2T RA patients. These data served as a validation set during a hackathon (November 2020), in which data scientists and clinicians collaborated to identify and predict the development of D2T RA in structured and unstructured routine care data of all RA patients at UMC Utrecht.

Methods

Routine care data

Structured and unstructured routine care data were extracted from the Utrecht Patient Oriented Database (UPOD) and pseudonymized. The organization and content of the UPOD have been described in more detail elsewhere [12].
In brief, the UPOD is an infrastructure of relational databases comprising electronic health record data of all patients treated at UMC Utrecht and was established in 2004. UPOD data acquisition and management are in accordance with current regulations concerning privacy and ethics. For this hackathon, we first identified the RA population according to 10th revision of the International Classification of Diseases (ICD-10) codes: we included patients with classification M05.X (seropositive rheumatoid arthritis) and M06.X (other rheumatoid arthritis), and subsequently excluded patients with M06.1 (adult-onset Still disease). Subsequently, the following structured data were extracted from the UPOD:

- Age (at time of RA diagnosis) and sex.

- Medication prescriptions: We included relevant medication based on Anatomical Therapeutic Chemical (ATC) codes (Supplemental table 2). All inpatient and outpatient prescriptions, including ATC codes and start dates, were extracted. As medication stop dates are prone to administrative errors, we only used start dates in our analyses. The b/tsDMARDs were labeled according to their mechanism of action (MoA). Medication prescriptions dated back to 2007.

- Laboratory analyses: We extracted laboratory measurements deemed clinically relevant (Supplemental table 3). In addition, we included all hematological parameters, as these are available in the UPOD for all patients for whom one or more components of the complete blood count (CBC) have been requested (e.g., hemoglobin). These parameters include the entire CBC, as well as research-only values and raw scatter pattern measurements from the Abbott Celldyn Sapphire machines (Abbott hematology, Santa Clara, CA, USA). These data were available from 2003.
- Clinical measurements: Clinical measurements including 28-joint counts for swelling and for tenderness (SJC28/TJC28), length, weight, blood pressure, and general health related to RA according to the patient as scored on a visual analog scale (VAS-GH) were extracted for all patients. These data were available since 2002.

- Hospital visits: Visits to the outpatient rheumatology clinic (since 1995) as well as hospitalizations on the rheumatology ward (since 1987) were extracted for all patients.

In addition, clinical correspondence was extracted as unstructured data from the UPOD. This included all clinical letters from the rheumatology department as available since 1988.

Clinically classified patients

In a previous cross-sectional study [6], 52 D2T and 100 non-D2T RA patients were clinically classified according to the EULAR definition (Supplemental table 1) in 2019-2020 [8]. See Supplemental table 4 for an overview of the clinical characteristics of these patients. Both the structured and unstructured UPOD data as well as the study data were extracted. Study data included patient and disease characteristics as well as factors potentially contributing to D2T RA (e.g., treatment non-adherence, fibromyalgia), which were collected during a single study visit including a physical examination and laboratory analyses, and by a subsequent questionnaire set. The data from these clinically classified patients served as a validation set, used to define the ability of the identification and prediction models to correctly classify D2T RA patients.

Identification of D2T RA patients

Four different techniques were employed to identify D2T RA patients in routine care data. The first two were based on the application of the criteria of the D2T RA definition in structured and unstructured data, respectively.
Both methods focused on the first two criteria of the D2T RA definition (failing ≥ 2 b/tsDMARDs with different MoA and signs of active/progressive disease; see Supplemental table 1 for details) [8]. The third criterion (problematic management) was deemed too subjective to be extracted from the available data. The third method explored the ability of other variables available in the structured data to differentiate D2T from non-D2T RA patients using a feature importance analysis. The fourth method entailed an exploratory dimension reduction of longitudinal hematological data.

Classification in structured data

In this approach, the structured data of medication prescriptions, laboratory analyses, clinical measurements, diagnostic codes, and hospital visits were analyzed for all RA patients in the UPOD. Patients were classified as D2T or non-D2T RA using these data (Supplemental table 1) [8]. Patients with registered medication prescriptions of at least two b/tsDMARDs with different MoA were deemed eligible to meet the first criterion of the D2T RA definition [8]. To define "active disease" (second criterion), we aimed to calculate the disease activity score assessing 28 joints (DAS28) from SJC28/TJC28 and VAS-GH combined with erythrocyte sedimentation rate (ESR) or C-reactive protein (CRP) where available. However, as these were missing for many patient visits in the database, a model was developed that approximated the DAS28-ESR. This model was based on laboratory values, number of hospital visits, patient characteristics, and swiftness of cycling through b/tsDMARDs with a different MoA (see Supplemental table 5 for a brief description of the model and an overview of included parameters). The model had a mean absolute error of 0.8 (for reference: the DAS28 itself has a measurement error of 0.6) [13].
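The target of the approximation is the standard DAS28-ESR score; a minimal sketch of that formula (the approximation model itself, based on other routine care variables, is not reproduced here):

```python
# Standard DAS28-ESR: 0.56*sqrt(TJC28) + 0.28*sqrt(SJC28)
#                     + 0.70*ln(ESR) + 0.014*GH,
# with GH the VAS general health on a 0-100 scale; >= 3.2 flags at least
# moderate disease activity, the threshold used for "active disease".
import math

def das28_esr(tjc28, sjc28, esr, gh):
    return (0.56 * math.sqrt(tjc28) + 0.28 * math.sqrt(sjc28)
            + 0.70 * math.log(esr) + 0.014 * gh)

score = das28_esr(tjc28=4, sjc28=2, esr=20, gh=50)
print(f"DAS28-ESR = {score:.2f}, active disease: {score >= 3.2}")
```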
Patients who had a mean approximated DAS28-ESR ≥ 3.2 in the period from 3 to 12 months after starting a b/tsDMARD of a second MoA were deemed to have failed their treatment due to active disease, and thus fulfilled the first and second criteria of the D2T RA definition [8]. Patients who started a third b/tsDMARD with a different MoA were also deemed to have failed the b/tsDMARD of a second MoA and thus also met the first and second criteria of the D2T RA definition. This way, the RA patients in the UPOD dataset could be classified as being either D2T or non-D2T based on the available structured data.

Classification in unstructured data

In this approach, text mining techniques were applied to analyze clinical letters of RA patients in the UPOD to classify patients as D2T or non-D2T RA (Supplemental table 1) [8]. Medication prescriptions were extracted from the headings "medication" and "DMARD history". Patients who had a history of a prescription of at least 2 b/tsDMARDs with different MoA were deemed to meet the first criterion of the D2T RA definition. To meet the second criterion, relevant subheadings were screened for synonyms of active disease, such as "flare". Negations such as "no flare" were excluded. This way, the RA patients in the UPOD dataset could be classified as being either D2T or non-D2T based on the available unstructured data.

Feature importance analysis

To gain insight into the importance of structured data variables regarding their ability to differentiate D2T from non-D2T RA patients, we performed an exploratory feature importance analysis using logistic regression. We included all available structured data variables from the UPOD of the 152 clinically classified patients, including those used for the application of the EULAR definition [8]. We determined the importance of different variables with multivariable logistic regression with L1 regularization (based on 1000 bootstrapped cross-validations with a 140/12 split).
L1 regularization limits the number of coefficients by eliminating uninformative coefficients. This was preceded by standard scaling, multiple imputation using Bayesian Ridge regression, and univariate feature filtering using a false discovery rate with alpha 0.05. The repeatedly measured variables were time-aggregated using the mean, median, standard deviation, mean difference, and mean minus the median. The resulting variables were univariately filtered based on their ability to differentiate between D2T and non-D2T RA patients. An identification model was derived using XGBoost, of which we present the receiver operating characteristic (ROC) curve based on ten-fold cross-validation. XGBoost is a machine learning model which uses gradient boosting [14]. In gradient boosting, multiple decision tree models are combined into an ensemble. Each sequential model is trained to correct the errors of the previous model. An important advantage of XGBoost is that it can handle missing data without imputation, which makes it a suitable model for real-life EHR data. We also considered multivariable logistic regression and a dense neural network, but the XGBoost model had a better performance in terms of AUC.

Dimension reduction of longitudinal hematological data

To explore the possibility of differentiating D2T from non-D2T RA patients solely based on longitudinal hematological data, a non-linear dimensionality reduction was performed. In dimension reduction, all available hematological parameters are reduced to two parameters, which allows this information to be plotted on a 2-dimensional x-y graph. Dimension reduction was performed using uniform manifold approximation and projection (UMAP) [15]. UMAP is a non-linear alternative to principal component analysis, which explicitly aims to preserve the Euclidean distance between samples.
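The sequential error-correction idea behind gradient boosting, described above, can be illustrated with a minimal squared-error boosting loop over one-dimensional decision stumps (a toy sketch in plain Python; XGBoost adds regularization, second-order gradients, and native missing-value handling on top of this idea):

```python
def fit_stump(xs, residuals):
    """Find the threshold split of a 1-D feature that minimises squared
    error when each side predicts its mean residual."""
    best = None
    for thr in xs:
        left = [r for x, r in zip(xs, residuals) if x <= thr]
        right = [r for x, r in zip(xs, residuals) if x > thr]
        if not left or not right:
            continue
        lm, rm = sum(left) / len(left), sum(right) / len(right)
        sse = (sum((r - lm) ** 2 for r in left)
               + sum((r - rm) ** 2 for r in right))
        if best is None or sse < best[0]:
            best = (sse, thr, lm, rm)
    _, thr, lm, rm = best
    return lambda x: lm if x <= thr else rm

def gradient_boost(xs, ys, n_rounds=20, lr=0.3):
    """Each stump is fitted to the residuals of the ensemble so far."""
    pred = [0.0] * len(xs)
    stumps = []
    for _ in range(n_rounds):
        residuals = [y - p for y, p in zip(ys, pred)]
        stump = fit_stump(xs, residuals)
        stumps.append(stump)
        pred = [p + lr * stump(x) for p, x in zip(pred, xs)]
    return lambda x: sum(lr * s(x) for s in stumps)

# toy data: a step function the ensemble gradually learns
model = gradient_boost([1, 2, 3, 4, 5, 6], [0, 0, 0, 1, 1, 1])
```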
UMAP was applied to all hematological data of the 152 clinically classified patients for training purposes using supervised techniques. Subsequently, it was applied to the hematological data of all RA patients from the UPOD to assess its ability to differentiate D2T from non-D2T RA patients. A Y-score was calculated for each patient, indicating the likelihood of having D2T RA. This was based on the combined outcomes of the classifications in structured and unstructured data (as described above) and the clinical classification (if available). The results of these analyses are visualized for each individual patient using the median of the reduced dimensions (d1 and d2) of the hematological data over time. This was done both for the clinically classified patients and for all RA patients from the UPOD. The aim of this method is to investigate whether distinct clusters can be distinguished that separate D2T from non-D2T RA patients based on longitudinal hematological data.

Prediction model

In an effort to predict D2T RA patients early in the disease course (i.e., before they satisfy the D2T RA definition), we developed a prediction model based on machine learning techniques using XGBoost [14]. All available structured UPOD data from before the start of the first b/tsDMARD of the clinically classified D2T and non-D2T RA patients were used. The longitudinal data were regularized to a one-month time interval using forward fill-in. This implies that missing values are imputed based on the last known values. The XGBoost classifier was used as the predictive model because of its robustness regarding data preprocessing. We used 10-fold cross-validation and the area under the ROC curve (AUC) statistic to determine model performance.

Data extraction from the UPOD

Based on the ICD-10 codes, 1873 RA patients were identified in the UPOD.
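The one-month regularization with forward fill-in described above can be sketched as follows (a generic illustration; the study's actual preprocessing code is not published, and the data layout here is an assumption):

```python
def forward_fill_monthly(observations, n_months):
    """Regularize sparse longitudinal values to a one-month grid:
    each month carries the last known value forward.
    `observations` maps month index -> measured value."""
    series, last = [], None
    for month in range(n_months):
        if month in observations:
            last = observations[month]
        series.append(last)  # stays None until the first observation
    return series

# e.g., a lab value measured in months 0 and 3, expanded to a 6-month grid
filled = forward_fill_monthly({0: 18.0, 3: 25.0}, 6)
```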
Classification in structured data

Of the 1873 RA patients in the UPOD, 122 patients met the first criterion of the D2T RA definition (7%) as determined in structured UPOD data. For 100 of these patients, sufficient data were available to determine the fulfilment of the second criterion. Patients for whom insufficient data were available were classified as non-D2T. Twenty-five of 52 patients clinically classified as D2T RA were correctly classified based on the structured data (sensitivity 48%, see Table 1). Two of the 100 patients clinically classified as non-D2T RA were incorrectly classified (specificity 98%, Table 1). Using this approach, 43 additional (potential) D2T RA patients were identified.

Classification in unstructured data

In the UPOD, 16,780 clinical letters of 1873 patients were available and extracted as unstructured data. Two hundred thirty-nine of all RA patients from the UPOD (13%) met the first D2T RA criterion based on the unstructured data. This included all 52 clinically classified D2T RA patients from the cross-sectional study. One hundred sixty-one patients also met the second criterion of the definition. Thirty-six of 52 patients clinically classified as D2T RA were correctly classified using the unstructured data (sensitivity 69%, see Table 2). Eight of the 100 patients clinically classified as non-D2T RA were incorrectly classified (specificity 92%, Table 2). One hundred seventeen additional (potential) D2T RA patients were identified. When comparing these patients with the 43 additional (potential) D2T RA patients identified using the structured data approach, 123 unique additional (potential) D2T RA patients were found.

Feature importance analysis

The most important structured data variables (features) to identify D2T and non-D2T RA patients and their logistic regression coefficients are shown in Tables 3 and 4.
Among others, these included the number of different medication prescriptions, the time period since RA diagnosis, and the mean DAS28-ESR. Based on these features, an identification model was derived with an AUC-ROC of 0.88 (95% CI 0.82-0.94), Fig. 1. Figure 2A depicts the medians of the reduced dimensions of the longitudinal hematological data of the clinically classified D2T and non-D2T RA patients. Each point represents a single patient, and the axes represent the two reduced dimensions d1 and d2. Two distinct clusters are visible, which are strictly separated due to the supervised techniques. Figure 2B depicts the medians of the reduced dimensions of the hematological data of all 1873 RA patients in the UPOD. A tendency towards two separate clusters is visible based on the likelihood of having D2T RA, although these are not strictly separated.

Prediction model

The machine learning prediction model was trained on the data of the clinically classified RA patients for whom data were available before prescribing the first b/tsDMARD (28 D2T and 88 non-D2T RA patients). The most important features mainly included hematological parameters, e.g., white blood cell count, percentage of neutrophils, segmented neutrophils, and hemoglobin (see Supplemental Table 6 for further details). With this XGBoost model, we were able to correctly predict 22 of the clinically classified D2T RA patients and 44 of the clinically classified non-D2T RA patients (sensitivity 79%, specificity 50%, Table 5). The average AUC-ROC over the 10-fold cross-validation was 0.73 (95% CI 0.71-0.75), Fig. 3.

Discussion

The current study presents the results of a hackathon aimed at the identification and prediction of D2T RA patients in structured and unstructured routine care data. We were able to identify 123 potentially new D2T RA patients by applying the criteria of the D2T RA definition in structured and unstructured data.
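The performance figures reported above follow from standard definitions; as a sketch, the structured-data sensitivity/specificity can be reproduced from the raw counts, together with a rank-based AUC (the example scores in the AUC check are illustrative, not study data):

```python
def sensitivity_specificity(tp, fn, tn, fp):
    """Sensitivity = TP / (TP + FN); specificity = TN / (TN + FP)."""
    return tp / (tp + fn), tn / (tn + fp)

# Structured-data approach: 25 of 52 clinical D2T patients found,
# 2 of 100 clinical non-D2T patients incorrectly flagged
sens, spec = sensitivity_specificity(tp=25, fn=27, tn=98, fp=2)

def auc(pos_scores, neg_scores):
    """AUC-ROC as the probability that a random positive outscores
    a random negative (ties count as 0.5)."""
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p in pos_scores for n in neg_scores)
    return wins / (len(pos_scores) * len(neg_scores))
```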
Additionally, we developed an identification model based on a feature importance analysis with high diagnostic performance (AUC-ROC 0.88), and we have shown the potential of longitudinal hematological parameters to differentiate D2T from non-D2T RA patients using supervised dimension reduction.

[Table 1 legend: Patients were classified by applying the D2T RA definition [8] in structured routine care data from the UPOD. D2T difficult-to-treat, DAS28-ESR disease activity score based on 28-joint count and erythrocyte sedimentation rate, RA rheumatoid arthritis, UPOD Utrecht Patient Oriented Database. *Clinical classification of D2T and non-D2T RA patients as performed in the cross-sectional study [6]]

[Table 2 legend: Patients were classified by applying the D2T RA definition [8] in unstructured routine care data from the UPOD. D2T difficult-to-treat, RA rheumatoid arthritis, UPOD Utrecht Patient Oriented Database. *Clinical classification of D2T and non-D2T RA patients as performed in the cross-sectional study [6]]

To predict the risk of developing D2T RA, we developed a machine learning model based on structured data that correctly predicted 79% of clinically classified D2T RA patients using data available from before the time of prescribing the first b/tsDMARD (AUC-ROC 0.73). To our knowledge, there is no previous literature using these techniques in the context of (D2T) RA. Routine care data are a valuable source of information, as they comprise a vast amount of "real world" patient data that is amply available. Unfortunately, these data often remain unutilized due to technical challenges in their analysis. Yet routine care data could play a crucial role in the developing field of personalized medicine. A major strength of this study is that we have shown various data analytical techniques to utilize this valuable source of information in the identification and prediction of D2T RA.
Identifying D2T RA patients from routine care data enhances research possibilities, as it allows for retrospective analysis of the development of RA into D2T RA and the progression of the D2T RA state over time. Moreover, in clinical practice, it creates an opportunity to optimize the treatment of D2T RA patients according to current and emerging guidelines. Correct identification of patients in longitudinal routine care data may also enhance the performance of models that can predict D2T RA early in the disease course. When patients at risk can be identified at an early stage, they may be monitored more intensively for the presence or development of factors contributing to D2T RA (e.g., treatment non-adherence or depression) [6]. When these contributing factors develop and are adequately addressed, the risk of acquiring D2T RA could potentially be diminished.

[Table 3 legend: The most important features to identify D2T RA patients based on logistic regression coefficients]

Interestingly, our feature importance analysis, our machine learning prediction model, and our exploratory dimension reduction all show an important role for hematological data in the identification and prediction of D2T RA patients. This is in line with previous research that has shown the potential role of the neutrophil-lymphocyte and platelet-lymphocyte ratios as biomarkers of disease activity in RA patients, although the underlying pathophysiology is not well understood [16][17][18]. Of note, the large contribution of hematological parameters in our analyses is likely influenced by the ample availability of these structured data, as this is a key feature of the UPOD. Nevertheless, as hematological parameters are low in cost, often readily available, and require minimal effort of the treating physician, they could be valuable potential markers in the evaluation of RA disease progression.
The performance of our identification strategies based on structured and unstructured data has been estimated conservatively. Patients for whom insufficient data were available to apply the D2T RA definition were classified as "non-D2T", which may have contributed to the relatively low sensitivity that was observed. The D2T RA patients that were not identified by our models could especially include the D2T RA patients who were referred to UMC Utrecht from other hospitals for a "second opinion", as data transfers between hospitals are often incomplete and electronic health record data from different hospitals, general practitioners, and pharmacies are (unfortunately) not synchronized in the Netherlands. Improving the availability of these data could thus potentially improve the performance of our identification and, subsequently, prediction models.

[Table 5 legend: Predictions are based on data from before the start of the first b/tsDMARD. A decision threshold of 0.15 was applied. b/tsDMARD biological or targeted synthetic disease-modifying antirheumatic drug, D2T difficult-to-treat, RA rheumatoid arthritis. *Clinical classification of D2T and non-D2T RA patients as performed in the cross-sectional study [6]]

[Fig. 3 legend: ROC-curve of the D2T RA machine learning prediction model. AUC-ROC of the D2T RA prediction model based on data from before the start of the first b/tsDMARD. AUC, area under the curve; b/tsDMARD, biological or targeted synthetic disease-modifying antirheumatic drug; csDMARD, conventional synthetic disease-modifying antirheumatic drug; D2T, difficult-to-treat; RA, rheumatoid arthritis; ROC, receiver operating characteristic; std dev, standard deviation]

Although the results of this study are promising regarding the accuracy of identification of D2T RA patients as well as predicting the development of D2T RA, this preliminary study also has several limitations. For example, not all components of the D2T RA definition (Supplemental table 1) [8] were incorporated in the structured and unstructured data approaches. This was done for several reasons. First of all, criterion 3, "the management of the signs and/or symptoms is perceived as problematic by the rheumatologist and/or the patient", was deemed too subjective to extract from the available data. Additionally, whether the management of patients is perceived as problematic will most often not be routinely noted in health records. This issue will therefore remain a challenge in further research on D2T RA. Second, for criterion 2c, "inability to taper glucocorticoid treatment below 7.5 mg/day prednisone or equivalent", the stop dates of the medication that are available in the digital prescription system were deemed too unreliable. For example, additional medication prescriptions may be requested from the general practitioner instead of the rheumatologist (and noted in separate systems), resulting in missing data in the prescription system and incorrect stop dates. Inclusion of these criteria in future identification and/or prediction models could further improve their performance. Furthermore, an inherent limitation of working with routine care data is the dependency on the availability of certain data parameters. Several factors that have previously been reported in association with more severe RA disease activity, such as smoking status and radiographic progression, were not readily available in the UPOD [19,20]. Improvement of the registration of these parameters and the optimization of free-text mining techniques could allow for future inclusion of these parameters in model development, resulting in even better-performing prediction models.
In future studies, the possibility of combining the different techniques presented in this paper for the identification of D2T RA patients in structured and unstructured routine care data should be addressed. In addition, other data sources could be utilized to explore other known contributing and risk factors for D2T RA, such as a low socio-economic status based on, e.g., postal codes [6,21]. Furthermore, the performance of the presented identification and prediction models should be evaluated in external data.

Conclusions

In conclusion, during this hackathon, we have demonstrated potential techniques (including text mining, feature importance analysis, and machine learning) for the identification and prediction of D2T RA patients in structured and unstructured routine care data. The results are promising to fuel research in this emerging field and should be optimized in further research.
Comparison of Supervised versus Self-Administered Stretching on Bench Press Maximal Strength and Force Development

Purpose: While supervised training is reported to be more effective, it usually requires specialized exercise facilities and instructors. The literature reports that high-volume stretching improves pectoralis muscle strength under supervised conditions, while its practical relevance is debated. Therefore, the study objective was to compare the effects of volume-equated, supervised and self-administered home-based stretching on strength performance. Methods: Sixty-three recreational participants were equally assigned to either a supervised static stretching, home-based stretching, or control group. The effects of 15 min of pectoralis stretching, 4 days per week for 8 weeks, were assessed on dynamic and isometric bench press strength and force development. Results: While there was a large-magnitude maximal strength increase (p < 0.001–0.023, η² = 0.118–0.351), force development remained unaffected. Dynamic maximal strength in both groups demonstrated large-magnitude increases compared to the control group (p < 0.001–0.001, d = 0.905–1.227). No differences between the intervention groups for maximal strength (p = 0.518–0.821, d = 0.101–0.322) could be detected. Conclusions: The results could potentially be attributed to stretch-induced tension (mechanical overload) with subsequent anabolic adaptations; alternative explanatory approaches are discussed. Nevertheless, home-based stretching seems a practical alternative to supervised training with potentially meaningful applications in different settings.
Introduction

In several sports and rehabilitation settings, increasing or restoring strength capacity is of paramount importance [1,2], which is commonly achieved using resistance training [3,4]. Nevertheless, even though highly effective, there are a number of difficulties with common resistance training programs, such as travelling to specialized training facilities to receive professional supervision. The lack of training success might not be attributable to the effectiveness of resistance training interventions per se, but to the participants' limited motivation and commitment to travel to training locations or perform exhausting interventions [5,6]. The literature points out the high demand for time- and space-saving exercise alternatives which can be integrated into the participants' or patients' daily routines [7,8].

Even though the literature reports alternatives, such as blood flow restriction training [9] or electromyostimulation [10,11], to induce sufficient stimuli to improve strength, these still require expensive equipment or coaches which might not be available to the broad population. Using stretching as a potential alternative was suggested by Arntz et al. [12] and Panidi et al. [13], who reported that high-volume and/or high-intensity stretch training could induce improvements in strength capacity. Accordingly, six weeks of one-hour daily self-administered calf muscle stretching induced increases in maximal strength, muscle thickness, and flexibility that were not significantly different from a commonly used resistance training routine (5 × 12 repetitions on 3 days per week for 6 weeks) [14]. Nevertheless, the plantar flexors can be considered a comparatively small muscle group with comparably low impact on multi-articular, complex (athletic) movements [15,16]. While Wohlann et al. [17], Ikeda and Ryushi [18] and Chen et al. [19] reported stretch-induced strength increases in the thigh muscles, Reiner et al. [20], Warneke et al.
[21] and Wohlann et al. [17] showed transferability to the upper body. Wohlann et al. [17] pointed out that 15 min of supervised stretching has the potential to substitute high-intensity pectoralis resistance training. However, Schoenfeld et al. [22] highlighted the impracticality of stretching-induced strength gains, especially when this type of training requires a second person or special equipment. Therefore, this study explores the possibility of an alternative and more practical stretching training, such as home-based stretching, and directly compares it to supervised stretching training. It is investigated whether home-based stretching training can achieve an equivalent increase in strength capacity as supervised stretching training.

To account for highly specific testing conditions [23,24], strength was tested under isometric and dynamic conditions, as most studies focused on only one of these parameters [21,25,26]. While Arntz et al. [12] were not able to detect significant stretch-induced force development enhancements in their meta-analysis, this result might be attributable to the inclusion of short stretching protocols in their analysis. Assuming a dose-response relationship for maximal strength, it was hypothesized that longer stretching durations could be sufficient to affect force development capacities.

Materials and Methods

Participants from all groups visited the lab three times, which included an initial briefing, a pre-test, and a post-test. The briefing visit simultaneously served as a familiarization session to avoid adaptations due to learning effects and to optimize exercise execution, especially for participants who did not regularly perform maximal repetitions in the bench press. Furthermore, the familiarization session would improve the validity of the isometric maximal strength testing [24]. During both the pre-test and post-test, measurements were taken in the following sequence: isometric maximal strength, dynamic maximal strength, and force development.
Participants

The required sample size was estimated via G*Power with an estimated effect size of f = 0.25. A total sample size of 42 was estimated. To account for potential dropouts and enhance statistical power, 63 recreationally active participants were recruited from the university sports center and assigned to supervised stretching with a stretching device (SVS), self-administered home-based stretching (HBS), or a control group (CG) (Table 1). The following eligibility criteria were applied: participants were considered recreationally active when they were physically active at least twice a week, without any injuries or surgery in the chest or shoulder during the last 6 months leading to prolonged immobilization and thus training interruptions. Furthermore, as the training program might be primarily applicable to untrained and sedentary populations, flexibility-trained participants were excluded. All participants provided written informed consent at the habituation session.

Maximal Strength and Force Development Tests

Before conducting the maximal strength and force development tests, a standardized warm-up was performed using 5 min of ergometer cycling at 60 revolutions per minute, followed by 2 × 5 push-ups for the males and 2 × 5 push-ups with hands on an elevated surface for the females. Afterwards, participants were allowed to perform their individual bench press warm-up programs, if needed. The bench press movement was performed using a Smith machine (Train Hard, Hansson Sports, Steinbach, Germany). For the isometric testing condition, the bar was fixed in the Smith machine to provide an unsurpassable (immovable) resistance. An elbow angle of 90° was ensured via goniometer testing. To measure maximal isometric strength, the participants were instructed to push the barbell with maximal effort against the fixed bar. Applied forces were quantified via a Kistler force platform with four 9051 load cells, operating at a sampling frequency of 1000 Hz and connected with
an A/D converter NI6009 (National Instruments DAQ 700). The participants performed at least three trials until strength values decreased. A 120 s rest period between trials was ensured to avoid fatigue. After isometric testing, the dynamic one-repetition-maximum (1 RM) bench press test was conducted. The barbell was loaded with weight until a valid repetition could no longer be performed. A repetition was considered valid when the elbows were positioned below the upper body during the eccentric phase and the barbell was pushed upward until the elbows were extended without assistance.

For the force development tests, 50% of the 1 RM was used. The barbell was positioned on metal coil springs integrated into the Smith machine, guaranteeing that participants' elbows remained fixed at a 90° angle as they kept their hands on the barbell. Responding to an acoustic signal, the participants were instructed to perform a pressing movement with the intention to throw the barbell concentrically upward from the chest as quickly as possible to ensure maximal bar velocity. However, for safety reasons, the participants did not actually throw the bar. Impulse (p = F × ∆t) was used to interpret the force development behavior and was calculated as follows: each sampled force value (F) was multiplied by the sampling step ∆t (0.001 s), and the sum of these products over the interval was computed. The intervals from the start of contraction to 200 ms and to 500 ms were considered for interpretation. Figure 1 shows a force-time curve with force development determination. The curved dark gray line represents the force output of the participants during the bench press movement.
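The impulse computation described above (sum of each sampled force value times the 0.001 s sampling step, over the first 200 ms or 500 ms from contraction onset) can be sketched as:

```python
def impulse(forces, dt=0.001, window_s=0.200):
    """Impulse p = sum(F_i * dt) from contraction onset over a window;
    at 1000 Hz, dt = 0.001 s, so 200 ms spans the first 200 samples."""
    n = round(window_s / dt)
    return sum(f * dt for f in forces[:n])

# e.g., a constant 100 N held for 500 ms, sampled at 1000 Hz
forces = [100.0] * 500
p200 = impulse(forces, window_s=0.200)  # 100 N * 0.2 s = 20 N*s
p500 = impulse(forces, window_s=0.500)  # 100 N * 0.5 s = 50 N*s
```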
Intervention

All participants in the SVS and HBS groups performed an eight-week stretch training program, four days per week, with equalized stretching volumes. The SVS group underwent 15 min of passive static stretch training on a custom-made stretching board [17]. Each SVS stretch training session was performed with an examiner. The elbows were fixed at a 90° angle using an orthosis, while the shoulder angle was maintained at 90° to achieve maximum stretching of the pectoralis major muscle. An automatic ratchet was used to retighten continuously to counteract relaxation effects, which are assumed to decrease resistive force in constant-angle stretching. To prevent any excessive arching of the back, participants positioned their legs against a wall (Figure 2). Participants in the HBS group followed a standardized 3 × 5 min static stretching routine for the chest muscles with three stretching exercises as home-based training, using a standardized resistance band identical to that in Warneke et al. [21]. The stretching exercises for the HBS group were carried out independently by the participants, while the stretching duration and adherence were documented in a stretching diary. Stretching intensity was set to the maximum-tolerated stretching pain.
Data Analysis

Statistical analysis was carried out using IBM SPSS Statistics version 28 (IBM SPSS, version 28). A normal distribution of the main outcome data was ensured using the Kolmogorov-Smirnov test (n > 30), and the homogeneity of variance was ensured with the Levene test. The data are presented as means (M) and standard deviations (SDs). Reliability is expressed via intraclass correlation coefficients (ICCs) and coefficients of variance (CVs). A one-way analysis of variance (ANOVA) was conducted to test for pre-test group differences, while the research question was evaluated via two-way repeated-measures ANOVA (3 groups × 2 testing times) with a Scheffé post hoc analysis. Between-group differences were reported using the following effect size classifications: small effect (d < 0.5), medium effect (d = 0.5–0.8), and large effect (d > 0.8) [27]. The critical significance level was set at p = 0.05.

Results

In accordance with Koo and Li [28], ICCs ranging from 0.96 to 1 and CVs of 0.2–3.6% for isometric and dynamic maximal bench press strength and for force development values after 200 ms and 500 ms were classified as high. With p > 0.05, a normal distribution was assumed, while the one-way ANOVA ruled out pre-test differences (p > 0.05).
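The effect size classification above can be reproduced with a small helper computing Cohen's d from group means and a pooled standard deviation (a generic sketch; the sample values in the example are illustrative, not study data):

```python
from statistics import mean, stdev

def cohens_d(group_a, group_b):
    """Cohen's d: mean difference divided by the pooled standard deviation."""
    na, nb = len(group_a), len(group_b)
    pooled_sd = (((na - 1) * stdev(group_a) ** 2
                  + (nb - 1) * stdev(group_b) ** 2) / (na + nb - 2)) ** 0.5
    return (mean(group_a) - mean(group_b)) / pooled_sd

def magnitude(d):
    """Classification used here: small d < 0.5, medium 0.5-0.8, large d > 0.8."""
    d = abs(d)
    return "small" if d < 0.5 else ("medium" if d <= 0.8 else "large")
```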
Isometric and Dynamic Bench Press

With a time effect of p < 0.001 and ηp² = 0.23–0.45, both isometric and dynamic testing conditions showed a significant strength increase, with a moderate-magnitude Time×Group interaction in the isometric (p = 0.023, ηp² = 0.118) and a large-magnitude Time×Group interaction effect in the dynamic testing condition (p < 0.001, ηp² = 0.351) (Table 2). Post hoc testing revealed a significantly greater isometric force with SVS versus CG (p = 0.032, d = 0.63) but no differences between HBS and CG (p = 0.125, d = 0.53). Dynamic maximal strength showed significant increases in the SVS compared to CG (p < 0.001, d = 1.23) and in the HBS compared to CG (p = 0.001, d = 0.91). No significant differences could be detected between the SVS and HBS in isometric (p = 0.821, d = 0.101) and dynamic (p = 0.518, d = 0.322) testing conditions, respectively.

Force Development

Neither the Time effect (p = 0.117–0.159) nor the Time×Group interaction (p = 0.604–0.619) reached the level of significance, showing that force development remained unaffected by both stretching conditions (Table 3).

Discussion

The present study compared the effects of high-volume supervised stretching training with self-administered, equal-volume stretch training on strength performance. Both training conditions significantly increased strength, with no superior effectiveness of supervised over non-supervised stretch training. Irrespective of the group, the rate of force development determined after 200 ms and 500 ms remained unaffected (p = 0.60–0.62).
The study results are in accordance with a growing body of evidence showing high-volume stretch training to sufficiently enhance maximal strength [12,29]. Assuming a dose-response relationship, recent research extended the stretching duration up to 2 h per day for 6 weeks [30], showing highly consistent results in the plantar flexors. Similarly, upper-body static stretching induced pectoralis muscle hypertrophy [17] and strength increases [17,20,21].

Potential Underlying Mechanisms to Explain Stretch-Mediated Strength Increases

Strength increases are commonly explained by morphological and/or neuromuscular adaptations [31]. Although Goldspink and Harridge [32] suggested that the striated muscle cross-sectional area reflects force production potential, previous studies did not obtain a meaningful relationship between stretch-mediated hypertrophy and strength increases induced via stretching [17,33]. Consequently, a neuronal influence should be considered a potential explanation for stretch-induced strength increases. Adaptations in neuromuscular control were suggested more than 10 years ago by Nelson et al.
[26], who found a contralateral force transfer to the non-stretched control leg. However, the participants seemed to be untrained, as the authors speculated that stabilization via the non-stretched leg while performing 4 × 30 s of stretching on 3 days per week might have caused these increases. It is also possible that several reflex mechanisms induced by stretching [34] affected central nervous control, which could be reflected by increases in contralateral strength [35,36]. Nevertheless, since EMG activity while performing 10 min of static stretching was not significantly enhanced [37], the possibility of substantial neural adaptations is called into question. Furthermore, the authors speculated that an elongated muscle could induce muscle contractions against the stretch device that could initiate a training stimulus comparable to full-ROM resistance training [38].

A further explanation is related to blood flow conditions. Since blood flow restriction training seems to enhance strength capacity and muscle mass with lower-intensity contractions [39], similar adaptations might be possible with prolonged static stretching. Interestingly, McCully [40] investigated blood flow patterns when performing 10 min of stretching and showed restricted blood flow to the muscle. However, since the influence of stretch-induced blood flow restriction on muscle hypertrophy was not explored and no neuromuscular adaptations (i.e., EMG testing, blood flow, muscle hypertrophy) were measured, this rationale remains speculative.

Supervised versus Self-Administered

The strength increases of the different stretching training modes in this study are in line with other studies showing that daily self-administered stretching of the calf muscles [14] and 15 min of supervised continuous pectoralis stretching [17] induced similar strength increases. Wohlann et al.
[17] showed increases comparable to those expected from resistance training in untrained populations. A potential advantage of supervised over self-administered stretching training might be the possibility of ensuring proper exercise execution and, thus, training intensity. In the literature, stretching intensity is often controlled using a visual analog scale (VAS) without quantifying the actual tension on the muscle. Quantifying stretching intensity seems even more crucial considering that Lim and Park [41] found no correlation between measured passive tension and a subjective pain scale. Wohlann et al. [17] showed a continuous decrease in mechanical stretching tension in the intervened muscle (due to relaxation effects) when using constant-angle stretching. Thus, to ensure more constant tension and therefore higher intensities, an adjustment of mechanical tension might be beneficial. However, this might not be applicable in a self-administered stretching routine. Nevertheless, no differences were found between the two stretching groups, indicating a higher practical relevance of the self-administered stretching training due to its independence from location and from a second person.

Contraction Specificity
Most studies focused on either isometric or dynamic testing routines. Warneke et al. [24] as well as James et al. [42] underlined specific testing conditions in maximal strength testing, as maximal isometric and dynamic strength should be considered individual abilities. Therefore, assuming movement specificity of training, static stretching is more closely related to isometric testing conditions, and thus a higher increase in isometric strength could be speculated. However, Warneke et al. [33] showed isometric strength to increase by about 16%, whereas dynamic strength was enhanced by 25%. Furthermore, angle specificity in isometric testing should be considered [24]. Accordingly, Yahata et al.
[43] showed strength increases exclusively in the neutral joint angle position, while plantarflexed isometric testing revealed no pre-post change via stretching. It can be speculated whether the stretching could have led to a change in muscle fiber length and thus a change in joint configuration during movement execution. Panidi et al. [13] demonstrated that stretching interventions with high intensities could lead to a change in muscle fiber length (p = 0.006, SMD = 0.28), but may not result in a change in the pennation angle.

Assuming that isometric maximal strength measurements do not automatically predict dynamic performance due to different activation patterns of motor neurons [44,45], the present study included both isometric and dynamic testing conditions, supplemented by rate of force development values after 200 ms and 500 ms. However, there were no changes in the rate of force development after 8 weeks of stretching.

Practical Applications
This study was performed to counteract the methodological limitations described by Schoenfeld et al.
[22] and others [5,6,14], indicating that long stretching durations were impractical. While increasing strength may be particularly relevant for sport-specific tasks such as jumping and sprinting [46], or ball throwing velocity in handball [47], a recently published systematic review did not find stretch-induced performance enhancement [48], which seems in accordance with the lack of results for the rate of force development and explosive strength parameters obtained in the current study [15]. Furthermore, in rehabilitation, there is a high relevance of restoring muscle strength after prolonged phases of immobilization [49] or reduced physical activity. Especially in sedentary populations, the recent literature pointed out the possibility of using high-volume stretch training [8] and referred to studies using prolonged stretching training [30]. Resistance training is efficient, but it is location-dependent and requires special equipment, while supervision by a movement expert is highly recommended, especially for training beginners and at a recreational level. Therefore, the relevance for orthopedic patients with limited mobility, as well as for those with restricted time or a lack of motivation, should be considered. This study showed self-administered stretching to be a valid alternative for strength increases, as it could be performed while watching TV or working at the computer [8], without meaningful reductions in effectiveness.

However, whether stretching is a long-term alternative to other training routines remains speculative, as no studies could be found exceeding intervention periods of 8 weeks. Since it is well known that especially untrained and recreationally active participants respond to almost all novel stimuli with strength increases, further research is necessary to validate home-based stretching programs in particular for alternative application in sports practice (≥8-week intervention periods).
Limitations
Even though this study provided further evidence for stretch-induced maximal strength increases, no underlying mechanisms were explored in the present study. Strength increases might be explained by changes in neuromuscular activity; however, no EMG measurements were performed. When testing maximal isometric strength, angle specificity was assumed. Nevertheless, this study used just one given elbow angle, which may be of limited validity for other joint angles. Based on the results of Yahata et al. [43], it can be assumed that different joint angle positions may yield different outcomes. Therefore, the transferability of the results to other joint angle positions needs to be examined. Further research is needed to clarify the underlying mechanisms and identify moderators such as stretching intensity, training frequency, or joint angle specificity to derive a best-practice model.

In the home-based group, no control of the intensity could be carried out. Therefore, a placebo effect cannot be entirely ruled out. Since Apostolopoulos et al. [50] underlined the relevance of stretch intensity, this lack of control might have limited the results. Nevertheless, no significant difference between the interventions was observed.
Conclusions
A comparison between self-administered stretching training and supervised stretching training with the same stretching volume has not yet been conducted. Both supervised and self-administered stretching increased bench press maximal strength without a difference between the training modes. The supervised stretching required a second person, organizational coordination, and a special setup to stretch the chest muscle. In contrast, the self-administered stretching could be performed independently by participants at home, regardless of location, time of day, or the need for a second person. A self-administered stretching routine thus appears to be a valid alternative to supervised stretch training when aiming to enhance maximal strength. The results of this study contribute to the discussion on the practicality of stretching training and open perspectives for further practical applications.

Sports 2024, 12, x FOR PEER REVIEW 4 of 12

Figure 1. Force-time curve with 50% of 1 RM. Y-axis = measured force in Newtons; x-axis = time in milliseconds. Force development was determined 200 ms (impulse, 0.2 s) and 500 ms (impulse, 0.5 s) after the start of contraction. The straight light gray line represents the calibration and consists of the subject's body weight, the barbell (115 Newton), and 50% of the weight used in the 1 RM test. The curved dark gray line represents the force output of the participants during the bench press movement.

Figure 2. Stretching exercises. (A) A period of 15 min of supervised static stretching; (B-D) home-based stretching exercises, held for 5 min per session, respectively.

Table 1. Characteristics of the participants.

Table 2. Descriptive statistics and two-way ANOVA of isometric and dynamic maximal strength.

Table 3. Descriptive statistics and two-way ANOVA of force development.
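As described in Figure 1, force development was read off the force-time curve 200 ms and 500 ms after the start of contraction. A minimal sketch of that computation, assuming a uniformly sampled force signal and a simple threshold-based onset criterion (the paper does not specify its onset-detection method, so the threshold here is an assumption for illustration):

```python
def rate_of_force_development(force_n, dt_s, onset_threshold_n, windows_s=(0.2, 0.5)):
    """Return rate of force development (N/s) over each window after contraction onset.

    force_n: uniformly sampled force trace in Newtons; dt_s: sampling interval in s.
    Onset is taken as the first sample exceeding onset_threshold_n (an assumed,
    simplified criterion, not the authors' exact procedure).
    """
    onset = next(i for i, f in enumerate(force_n) if f > onset_threshold_n)
    f0 = force_n[onset]
    return [
        (force_n[onset + round(w / dt_s)] - f0) / w  # mean slope over the window
        for w in windows_s
    ]

# Synthetic example: 1 kHz sampling, force ramps at 500 N/s after a 100 ms quiet phase.
dt = 0.001
trace = [0.0] * 100 + [500.0 * i * dt for i in range(1, 700)]
rfd_200ms, rfd_500ms = rate_of_force_development(trace, dt, onset_threshold_n=1.0)
```

For a perfectly linear ramp both windows recover the same slope; on real bench press data the 200 ms value would typically exceed the 500 ms value as force production plateaus.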
Checkpoint Inhibition for Metastatic Urothelial Carcinoma After Chemotherapy—Real-World Clinical Impressions and Comparative Review of the Literature

Background: The introduction of checkpoint inhibitors is a long-awaited new option for urothelial cancer, a disease with a poor prognosis. Apart from clinical studies, data on real-world experience are scarce. Methods: Patients receiving monotherapy with either Atezolizumab, Nivolumab, or Pembrolizumab after chemotherapy were included. Adverse events and immune-related adverse events as well as survival data and imaging analyses were recorded in a prospectively designed multi-center database. Duration of response, progression-free survival (PFS), and overall survival (OS) were estimated with the Kaplan-Meier method. Results: A total of 28 patients were included. The median follow-up was 8.0 (range, 0.7-41.7) months. Median PFS was 5.8 (95% CI, 2.3-NA) months. Median OS for all patients was 10.0 (95% CI, 8.0-NA) months. The overall response rate (ORR) was 21.4% (6 out of 28 patients). Adverse events were recorded in 20 (71.4%) of patients. Higher-grade adverse events (≥Grade 3) were present in 11 (39.3%) patients. No therapy-related deaths occurred during the observation period. A total of 13 (46.4%) patients had adverse events that were considered to be immune-related. The most commonly affected organ was the thyroid gland, with 21.4% of events. Conclusion: Our real-world clinical series confirms an objective response in about every fifth patient, promising OS, and a low incidence of severe adverse events (≥Grade 3).

INTRODUCTION
In Europe, ∼151,000 new cases of urothelial carcinoma are diagnosed every year (1). Urothelial carcinoma is associated with a grim prognosis in the metastatic state (2).
Platinum-based chemotherapy is the current gold standard for metastatic disease (3), despite the fact that median overall survival (OS) ranges from 12 to 15 months (4), and from 12.8 to 14 months for patients ineligible for platinum-based therapy receiving vinflunine-carboplatin or vinflunine-gemcitabine (5). Options seemed even more limited in the second-line setting, with OS of 6.9 months for vinflunine (6). Toxicity-related adverse events, the fact that only about half of patients are eligible for first-line cisplatin (7), together with the poor outcome in the second-line setting, have emphasized the need for alternative therapeutic regimens for decades. Currently used checkpoint inhibitors for urothelial carcinoma counteract immune evasion of cancer cells by blocking the interaction between the programmed death 1 (PD-1) receptor and its ligands PD-L1 and PD-L2 (8). In Europe, Atezolizumab, Nivolumab, and Pembrolizumab have been approved for second-line treatment, while Atezolizumab and Pembrolizumab may also be used in the first-line setting, i.e., for patients ineligible for cisplatin-based chemotherapy (9)(10)(11)(12)(13). Today, the use of checkpoint inhibition in the first-line setting is tied to the expression of the transmembrane protein PD-L1 in cancer tissue and the presence of immune cells (14). In this study we take a first look at real-world data and first impressions of all three available substances for the treatment of advanced urothelial carcinoma. Our main goal was to evaluate clinical data on checkpoint inhibition for urothelial cancer patients in a real-world setting.

PATIENTS AND METHODS
All patients included in this study had confirmed histopathology of urothelial carcinoma. All patients received intravenous monotherapy with either Atezolizumab, Nivolumab, or Pembrolizumab at the approved dosages of 1200 mg q3weeks, 3 mg/kg q2weeks, and 200 mg q3weeks, respectively.
Durvalumab and Avelumab were not approved in Europe outside of clinical trials and were not used. Only patients progressing after or during chemotherapy were included. Multiple regimens (≥1) of chemotherapy prior to checkpoint inhibition were allowed. Patients with both lower and upper tract urothelial carcinoma were included. Patients with adenocarcinoma or sarcomatoid differentiation were excluded. Routine laboratory values prior to checkpoint inhibitor administration as well as performance status according to the Eastern Cooperative Oncology Group (ECOG) were recorded (15). The Bellmunt criteria (ECOG performance status > 0, hemoglobin concentration of less than 10 g per deciliter, and presence of liver metastases) were applied for stratification of patients into risk groups (16). All patients were followed with staging imaging. Metastatic lesions were assessed according to the Response Evaluation Criteria in Solid Tumors (RECIST, version 1.1 (17)). Adverse events in general and immune-related adverse events were defined and recorded according to the National Cancer Institute Common Terminology Criteria for Adverse Events (version 4.03). Immune-related events were counted only once per organ and per patient. Prospective and ongoing data collection was performed in a prospectively designed, multi-center relational database. This retrospective study was carried out in accordance with the current standard of care according to the recommendations of the European Association of Urology (EAU) guidelines on treatment of metastatic urothelial carcinoma. The protocol and the retrospective analysis of anonymous data were approved by the Ethics Committee of Hanover Medical School, Hanover, Germany. All subjects gave written informed consent in accordance with the Declaration of Helsinki. The data cutoff for the current analysis was December 12th, 2018. For descriptive data presentation, categorical data were shown with absolute numbers and percentages.
Continuous variables were presented with either the mean and the standard deviation or the median with range. Progression-free survival (PFS) and OS were calculated with the Kaplan-Meier estimation method. R statistical software was used for statistical analysis, figures, and tables (18).

RESULTS
A total of 28 patients from 3 separate institutions were included. Data were collected between 01/2016 and 02/2020. Patient characteristics are summarized in Table 1. All 28 patients were given checkpoint inhibition after prior chemotherapy. The numbers of patients receiving Atezolizumab, Pembrolizumab, or Nivolumab were 10 (35.7%), 16 (57.1%), and 2 (7.1%), respectively. Data on PD-L1 status were scarce due to the fact that the patients presented here were not part of any clinical trial. Duration of follow-up was defined as the time from the first administration of the checkpoint inhibitor to the date of the last clinical visit. The median follow-up was 8.0 (range, 0.7-41.7) months. Median duration of therapy for all patients was 6.05 (range, 0.7-41.8) months. Median PFS was 5.8 (95% CI, 2.3-NA; Figure 1) months. Median OS for all patients was 10.0 (95% CI, 8.0-NA; Figure 2) months. OS did not differ between different scores for the Bellmunt (16) risk criteria (risk score: 0, 1, ≥2), with estimated OS times of 8.3, 10.0, and 8.9 months (p = 0.9; Figure 3). From clinical experience we tend to see good oncological control in patients who develop immune-related adverse events. We could demonstrate this difference when comparing patients with and without immune-related adverse events (no event vs. grade ≥2: 8.3 months vs. not reached, p = 0.1067); however, this difference was not statistically significant (Figure 4). At the end of data collection, a total of 8 (28.6%) patients were still under active checkpoint-inhibitory therapy. The overall response rate was 21.4% (6 out of 28 patients; 95% CI, 6.2%-36.6%). The median time to response was 13.1 weeks.
The median duration of response was 16.4 weeks. At data cutoff, 5 (83.3%) out of 6 initially responding patients had an ongoing response. Changes in target lesion size and RECIST data are illustrated in Figures 5, 6.

DISCUSSION
This series of patients does not represent a randomized controlled trial with a defined competitor. Our main point of discussion focuses on the question of whether or not real-life treatment of patients, outside of trial-associated selection and restrictions, can reproduce the published data on treatment response and tolerability. Regarding treatment response, our PFS almost reached 6 months. In comparison, PFS in the intention-to-treat populations of randomized clinical phase II and III trials of checkpoint inhibition was no longer than 2.1 months in all trials (10,12,13,19). This discrepancy is most likely due to the fact that our study population is still rather small. Also, in this series of real-life data, imaging did not follow the strict 3-monthly intervals scheduled in the above-mentioned trials, also a very reasonable explanation for the observed PFS. Therefore, progression may have been picked up late, at least in a subgroup of our patients. A systematic comparison of response rates and survival data from the current literature is shown in Table 4. We were able to achieve a response rate of over 21% overall.

Summary for patients with total adverse events and with immune-related adverse events. Numbers are shown as the total number of afflicted patients per grading interval (Grade ≤2 or Grade ≥3) and as a percentage with regard to the total patient number of n = 28 patients (CTCAE = Common Terminology Criteria for Adverse Events).

Numbers are shown as the total number of patients with immune-related adverse events per organ and as a percentage of the study population of n = 28. In total, 23 immune-related events in 13 patients were recorded.
Evaluating responses with regard to each of the three substances individually was not feasible from a statistical standpoint considering the low and uneven patient count in each group. Also, the expected variance in response rates in cohorts of 200 to 400 patients (as were evaluated in the above-mentioned trials) is rather high: response rates from the literature show that only about every 5th patient responds to checkpoint inhibition monotherapy. Our data are consistent with this finding. However, the assumed response rates follow a binomial distribution with rather wide confidence intervals. When assuming an actual response rate of 20%, we calculated that 95% of response results would fall between 15.5% and 24.5% in a cohort of 300 patients. This explains the wide confidence intervals on response rates reported for Atezolizumab, Nivolumab, and Pembrolizumab (10,12,13,19). A more representative estimation of response, but only for Atezolizumab, can be extracted from the SAUL trial, which comprised n = 1004 patients. Unfavorable conditions, such as an ECOG performance status of 2, cerebral metastases, or autoimmune disease, among others, were allowed. OS in the intention-to-treat population was 8.7 months (95% CI, 7.8-9.9 months), which is comparable with our results. When exclusively looking at patients (n = 643) from the SAUL trial who had inclusion criteria similar to those of the IMvigor211 trial, median OS improved to 10.0 (95% CI, 8.8-11.9) months. The ORR was 13% (11-16%) with a disease control rate of 40% (37-43%) (20).

All numbers refer to the intention-to-treat population. PD-L1 (programmed cell death ligand 1), PD-1 (programmed cell death protein 1).

With regard to OS, our real-world analysis reproduced the promising results from prior trials. As seen in the swimmer plot (Figure 4), a few patients had a short duration of treatment and died early.
This may be related to the fact that most patients receiving Atezolizumab were included in the expanded access program. Some of these patients had an extensive metastatic load and multiple prior regimens of chemotherapy and were given checkpoint inhibition very late in the course of the disease. Taking this into consideration, OS might improve as patients become more and more able to receive checkpoint inhibition earlier on. Gathering real-life data on checkpoint inhibition is therefore important. Regarding the safety of treatment, checkpoint inhibition exhibited a more favorable safety profile than chemotherapy, as could be expected from trials with chemotherapy as a competitor (12,19). OS differed in favor of patients with immune-related events. Despite the fact that this difference was not statistically significant, our data support the concept that the presence of immune-related adverse events may correlate to some extent with an increased likelihood of treatment efficacy. The thyroid gland was the most prevalently afflicted organ. Colitis, in contrast to prior trials, was not a major issue in this series. However, we did see events of immune-mediated colitis in our cohort of patients with checkpoint inhibition in the first-line setting (data not shown). As a limitation, data quality may not be comparable to data derived from randomized controlled trials: in particular, RECIST evaluation was performed by multiple radiologists from 3 different institutions, and imaging did not follow a strict time schedule as is the case in clinical trials. Last, a variety of inclusion and exclusion criteria do not apply in this real-world setting; hence, the data are less homogeneous.
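The width of the response-rate confidence intervals discussed above follows directly from the binomial distribution. A quick sketch of the normal-approximation calculation for the numbers used in the discussion (an assumed true response rate of 20% in a cohort of 300 patients):

```python
import math

def binomial_ci_95(p, n):
    """95% normal-approximation interval for the observed response proportion
    when the true response rate is p and the cohort size is n."""
    se = math.sqrt(p * (1 - p) / n)  # standard error of the sample proportion
    return p - 1.96 * se, p + 1.96 * se

low, high = binomial_ci_95(0.20, 300)
print(f"{low:.1%} to {high:.1%}")  # roughly 15.5% to 24.5%, as quoted above
```

With the smaller per-drug cohorts of the individual registration trials, the same formula yields even wider intervals, which is the point made about the reported response rates.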
Melatonin promoted chemotaxins expression in lung epithelial cell stimulated with TNF-α

Background: Patients with asthma demonstrate circadian variations in airway inflammation and lung function. Pinealectomy reduces the total inflammatory cell number in the asthmatic rat lung. We hypothesize that melatonin, a circadian rhythm regulator, may modulate the circadian inflammatory variations in asthma by stimulating chemotaxin expression in lung epithelial cells. Methods: Lung epithelial cells (A549) were stimulated with melatonin in the presence or absence of TNF-α (100 ng/ml). RANTES (Regulated on Activation, Normal T cell Expressed and Secreted) and eotaxin expression were measured using ELISA and real-time RT-PCR, and the eosinophil chemotactic activity (ECA) released by A549 was measured by eosinophil chemotaxis assay. Results: TNF-α increased the expression of RANTES (307.84 ± 33.56 versus 207.64 ± 31.27 pg/ml of control, p = 0.025) and eotaxin (108.97 ± 10.87 versus 54.00 ± 5.29 pg/ml of control, p = 0.041). Melatonin (10⁻¹⁰ to 10⁻⁶ M) alone did not change the expression of RANTES (204.97 ± 32.56 pg/ml) or eotaxin (55.28 ± 6.71 pg/ml). However, in the presence of TNF-α (100 ng/ml), melatonin promoted RANTES (410.88 ± 52.03, 483.60 ± 55.37, 559.92 ± 75.70, 688.42 ± 95.32, 766.39 ± 101.53 pg/ml, treated with 10⁻¹⁰, 10⁻⁹, 10⁻⁸, 10⁻⁷, 10⁻⁶ M melatonin, respectively) and eotaxin (151.95 ± 13.88, 238.79 ± 16.81, 361.62 ± 36.91, 393.66 ± 44.89, 494.34 ± 100.95 pg/ml, treated with 10⁻¹⁰, 10⁻⁹, 10⁻⁸, 10⁻⁷, 10⁻⁶ M melatonin, respectively) expression in a dose-dependent manner in A549 cells (compared with TNF-α alone, p < 0.05). The increased release of RANTES and eotaxin in A549 cells by the above treatment was further confirmed by both real-time RT-PCR and the ECA assay.
Conclusion: Taken together, our results suggest that melatonin might synergize with pro-inflammatory cytokines to modulate asthma airway inflammation by promoting the expression of chemotaxins in lung epithelial cells.

Background
Eosinophils are known to be important effector cells in asthmatic airway inflammation [1]. Previous studies have demonstrated that eosinophils accumulate in the peripheral blood, the bronchoalveolar lavage fluid, and the airways of asthmatic patients or allergen-sensitized animals [2]. Eosinophil trafficking is regulated by a wide variety of chemotactic factors [3]. Eotaxin and RANTES (Regulated on Activation, Normal T cell Expressed and Secreted) are C-C chemotaxins that can recruit eosinophils to the airway in asthma [4]. A variety of tissues and cell types, including lung epithelial cells, produce eotaxin and RANTES, which play an important role in airway inflammation [5]. Pro-inflammatory cytokines such as tumor necrosis factor (TNF) and interleukin (IL)-1 are released in the early stage of allergic inflammation. In endothelial and epithelial cells, TNF-α induces an influx of eosinophils into tissues through the increased expression of adhesion molecules [6,7]. Although eotaxin and RANTES tend to be expressed constitutively in several cell types, their expression may also be regulated in response to TNF-α in other cell lines [8]. Melatonin (N-acetyl-5-methoxytryptamine) is a key regulator of circadian rhythm homeostasis in humans [9,10]. It also appears to have an important immunomodulatory effect in allergic diseases [11,12]. Melatonin promotes cytokine production in peripheral blood mononuclear cells.
Studies in pinealectomized rats sensitized to ovalbumin demonstrated that pinealectomy significantly reduces the inflammatory cell counts in the bronchoalveolar lavage fluid after ovalbumin challenge, and that melatonin administration to pinealectomized rats restores the ability of inflammatory cells to migrate to the bronchoalveolar fluid. Those results suggest that melatonin may modulate the expression of chemotaxins in airway epithelial or endothelial cells [13]. The circadian variations of lung function in nocturnal asthma are associated with increased airway inflammation during the night. As a key regulator of human circadian rhythm homeostasis as well as an immunomodulator in allergic diseases, melatonin may regulate the circadian airway inflammation in asthma through modulating the expression of chemotaxins in the airway epithelial cells. In order to test this hypothesis, we conducted the present study to answer two questions. First, whether melatonin is able to up-regulate RANTES and eotaxin expression in the lung epithelial cell line A549. Second, what the combined effect of melatonin and TNF-α on RANTES and eotaxin expression is, and whether this effect increases the eosinophil chemotactic activity (ECA) released by A549. The answers to these questions might provide new insights into the pathophysiology of asthma.

Methods
This study was approved by the medical ethics committee of the West China Hospital of Sichuan University. Informed consents were obtained from all subjects in the study.

Cell Culture
A549 cells, human type II-like lung epithelial cells, were obtained from ATCC (Manassas, VA, USA). The cells were cultured in tissue flasks incubated at 100% humidity and 5% CO2 at 37°C in DMEM medium (GIBCO BRL, Grand Island, NY) supplemented with 10% heat-inactivated fetal bovine serum (GIBCO BRL) and penicillin-streptomycin (50 µg/ml, GIBCO BRL), at 1 × 10⁶ cells/ml.
A549 cells were then plated onto 6-well, flat-bottom tissue culture plates (Becton Dickinson and Co., NJ, USA) at a density of 1 × 10⁶ cells/well in DMEM medium. The medium was changed every 2 d until the cells became confluent, and then the cells were used for the experiments.

Cytokine Assays
As IL-1β and TNF-α have similar effects on the expression of many chemotaxins [14,15], we chose TNF-α as the representative pro-inflammatory cytokine in the asthmatic lung in this study. After the cells became confluent, the medium was changed to serum-free DMEM medium for 12 h. A549 cells were then exposed to increasing concentrations of melatonin (10⁻¹⁰, 10⁻⁹, 10⁻⁸, 10⁻⁷, 10⁻⁶ M; physiological concentrations range from 10⁻⁹ to 10⁻⁷ M between day and night [16]) (Sigma, St. Louis, MO, USA) and to TNF-α (100 ng/ml) (Sigma), for 12 h. The cells were also stimulated with a combination of melatonin (10⁻¹⁰, 10⁻⁹, 10⁻⁸, 10⁻⁷, 10⁻⁶ M) and TNF-α (100 ng/ml). The epithelial cell layers were then washed three times with Hanks' balanced salt solution (GIBCO BRL) and incubated for 48 h. Cell-free culture supernatants were collected. RANTES and eotaxin were assayed using enzyme-linked immunosorbent assay (ELISA) kits according to the instructions of the manufacturers. Assay kits for RANTES and eotaxin were purchased from R&D Systems (Minneapolis, MN, USA), and the minimum detectable concentration of RANTES and eotaxin was 5 pg/ml. Experiments were performed at least three times with similar results.

Eosinophil Chemotaxis Assay
The eosinophil chemotaxis assay was performed as described previously [19]. Briefly, eosinophils were isolated from the peripheral blood of three healthy donors by negative selection with immunomagnetic beads. Erythrocytes in venous peripheral blood were removed by hypotonic lysis. Neutrophils and mononuclear cells were depleted with anti-CD16 and anti-CD3 immunomagnetic beads (Miltenyi Biotec GmbH, Bergisch Gladbach, Germany).
Eosinophils were stained with Randolph's stain and counted in a hemocytometer. Cytospins of each preparation were stained with Diff-Quik (International Reagent Corp., Green Cross, Osaka, Japan). The mean eosinophil purity was 98.0 ± 0.3%. The viability measured by trypan blue exclusion was consistently greater than 95.0%. Eosinophil chemotaxis was measured by the Boyden blind-well chamber technique using a 48-well multiwell chamber (NeuroProbe Inc., Bethesda, MD). The bottom wells of the chamber were filled with 26.5 µl of the A549 cell supernatant stimulated by the various treatments, as described previously, in triplicate. A polycarbonate filter with a pore size of 5 µm (Nucleopore, Pleasanton, CA) was placed over the bottom wells, and isolated eosinophils were placed into each of the top wells. The chambers were then incubated at 37°C, 5% CO2 for 90 min. After incubation, eosinophils in the top wells were removed by scraping. The filter was then stained with Diff-Quik. Eosinophil chemotactic activity (ECA) is shown as the total number of migrated eosinophils counted in 10 high-power fields under a light microscope (Olympus, Lake Success, NY) at ×400 magnification.

Data analysis
Data were expressed as means ± SD. Differences between groups were assessed by one-way ANOVA followed by the LSD (least significant difference) test. A value of p < 0.05 was considered statistically significant.

Results
Effect of TNF-α and melatonin on RANTES and eotaxin released from A549 cells
RANTES released from A549 cells increased significantly when the cells were incubated with TNF-α (100 ng/ml). Melatonin alone did not have this effect on A549 at doses from 10⁻¹⁰ to 10⁻⁶ M. However, TNF-α-induced RANTES release in A549 increased significantly upon incubation with melatonin (from 10⁻¹⁰ to 10⁻⁶ M).
Similarly, eotaxin released from A549 cells also increased significantly when the cells were incubated with TNF-α; melatonin alone had no effect on eotaxin released from A549 at doses from 10⁻¹⁰ to 10⁻⁶ M. However, eotaxin released from A549 increased significantly when the cells were incubated with melatonin and TNF-α (Figure 1).

Effect of TNF-α and melatonin on the expression of RANTES and eotaxin in A549 cells
To determine whether the production of RANTES and eotaxin is accompanied by transcription of the corresponding genes, we used real-time RT-PCR to examine RANTES and eotaxin mRNA expression in A549 cells. A549 cells were stimulated with melatonin (10⁻¹⁰, 10⁻⁹, 10⁻⁸, 10⁻⁷, 10⁻⁶ M) and TNF-α (100 ng/ml). Melatonin alone did not change RANTES and eotaxin mRNA expression in A549. TNF-α promoted RANTES and eotaxin expression in A549 cells. When stimulated with TNF-α, melatonin synergistically increased RANTES and eotaxin expression in a dose-dependent manner (Fig 2).

Discussion
In this study, we examined the RANTES and eotaxin protein levels and gene expression in A549 in response to TNF-α and melatonin stimulation using ELISA and real-time RT-PCR. We also measured the ECA released by A549 in response to TNF-α and melatonin stimulation. Unexpectedly, we found that the eotaxin and RANTES protein levels and gene expression in A549 cells were unchanged when treated with melatonin alone, and the ECA released by A549 remained unchanged too. However, when A549 cells were co-stimulated with melatonin and TNF-α, eotaxin and RANTES released from the cells increased in a melatonin dose-dependent manner. The gene expression of eotaxin and RANTES and the ECA also increased at the same time. These results support our hypothesis that melatonin plays an important role in airway inflammation through up-regulation of eotaxin and RANTES expression in lung epithelial cells when the cells are stimulated with pro-inflammatory cytokines.
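The group comparisons behind these results were assessed with one-way ANOVA followed by an LSD post-hoc test (see Data analysis). A minimal pure-Python sketch of the F-statistic at the core of that procedure; the concentration values below are made-up illustrative numbers, not the study's measurements:

```python
def one_way_anova(*groups):
    """F statistic for a one-way ANOVA: between-group vs. within-group variance."""
    k = len(groups)                               # number of groups
    n = sum(len(g) for g in groups)               # total sample size
    grand_mean = sum(sum(g) for g in groups) / n
    ss_between = sum(len(g) * (sum(g) / len(g) - grand_mean) ** 2 for g in groups)
    ss_within = sum(sum((x - sum(g) / len(g)) ** 2 for x in g) for g in groups)
    return (ss_between / (k - 1)) / (ss_within / (n - k))

# Hypothetical RANTES concentrations (pg/ml), n = 3 per condition (illustrative only).
control   = [205.0, 210.1, 207.8]
tnf_alone = [305.2, 310.6, 307.7]
tnf_mel   = [760.3, 770.9, 765.2]

f_stat = one_way_anova(control, tnf_alone, tnf_mel)
# Compare against the tabulated critical value F(2, 6) ≈ 5.14 at alpha = 0.05;
# an F statistic above it indicates a significant overall group effect, after
# which pairwise post-hoc comparisons (e.g., Fisher's LSD) would be run.
significant = f_stat > 5.14
```

In practice a statistics library would also return the p-value; the sketch stops at the F statistic and its critical-value comparison to keep the logic visible.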
The pro-inflammatory characteristics of TNF-α have been documented extensively. Numerous studies have demonstrated that these attributes contribute to the inflammatory conditions present in the airways of asthmatic subjects. TNF-α has been shown to activate inflammatory cells, up-regulate adhesion molecules on the endothelium and circulating leukocytes, increase the production of chemokines [20], and increase bronchial responsiveness. TNF-α is expressed primarily by alveolar cells and tissue macrophages, mast cells, and bronchial epithelial cells. Additionally, in most other airway cell systems studied, conditions simulating an inflammatory state result in expression of TNF-α. Thus, it is not surprising that the TNF-α concentration is higher in bronchoalveolar lavage fluid from symptomatic asthmatics compared with normal control subjects [21]. In this study, we found that TNF-α could promote RANTES and eotaxin production in A549 cells and that melatonin further exaggerated this effect of TNF-α.

Figure 1. RANTES and eotaxin released from A549 cells. Melatonin (10⁻⁶ M) alone did not change RANTES and eotaxin released from A549 cells. However, melatonin (10⁻¹⁰ to 10⁻⁶ M) promoted RANTES and eotaxin release from A549 cells in a dose-dependent manner when co-stimulated with TNF-α (100 ng/ml). * and **, p < 0.05 and 0.01, compared with control and melatonin alone (pg/ml, n = 3). $ and #, p < 0.05 and 0.01, compared with TNF-α alone (pg/ml, n = 3).

Lung function in a healthy individual varies in a circadian rhythm, with peak lung function occurring near 4:00 PM (1600 hours) and minimal lung function occurring near 4:00 AM (0400 hours). An episode of nocturnal asthma is characterized by an exaggeration of this normal day-to-night variation in lung function, with diurnal changes in pulmonary function generally of > 15%. A recent study showed that the circadian variability in pulmonary function in asthma was related to changes in airway eosinophil recruitment and activation [22].

Figure 2. RANTES and eotaxin mRNA expression in A549 cells. Melatonin (10⁻⁶ M) alone did not change RANTES and eotaxin mRNA expression in A549 cells. TNF-α (100 ng/ml) promoted RANTES and eotaxin expression in A549 cells. Melatonin (10⁻¹⁰ to 10⁻⁶ M) increased RANTES expression in A549 cells in a dose-dependent manner when co-stimulated with TNF-α (100 ng/ml). **, p < 0.01, compared with control and melatonin alone (n = 3). #, p < 0.01, compared with TNF-α alone (n = 3).

Although the molecular mechanism responsible for the selective infiltration of eosinophils into the inflamed tissue in asthma has not been elucidated, chemokines may play an important role in this process. Eotaxin is a chemokine that binds with high affinity and specificity to the chemokine receptor CCR3 and plays an important role in the pathogenesis of allergic disease. RANTES, a C-C chemokine, was initially shown to be a chemoattractant for T cells and monocytes but has subsequently been shown to be a potent eosinophil chemoattractant [23,24]. In other studies, an up-regulation of RANTES message was observed in the airways of asthmatic patients [25], and increased levels of RANTES have been detected in the nasal aspirates of children with viral exacerbations of asthma [26], suggesting an important role for RANTES in this process. From the results of our study, together with the studies above, we can infer that melatonin, the most important circadian rhythm regulator, may also regulate asthmatic airway inflammation by up-regulating the expression of eotaxin and RANTES in the airway epithelium in the inflammatory state of asthma. RANTES and eotaxin expression are regulated by two important transcription factors: activator protein-1 (AP-1) and nuclear factor kappa B (NF-κB).
Benis et al [27] found that melatonin could suppress the activation of NF-κB and AP-1. Although NF-κB and AP-1 can up-regulate the expression of many pro-inflammatory cytokines and chemokines, other transcription factors could also be involved in the regulation of RANTES and eotaxin. Further studies are needed to elucidate the mechanism by which melatonin regulates the transcription of these chemokines. The role of melatonin as an immunomodulator is poorly understood and, in some cases, contradictory results have been reported. For example, Shafer's study showed that melatonin has no effect on the activity of stimulated macrophages [28]. However, pinealectomy of rats significantly reduces airway inflammation after ovalbumin inhalational challenge, and melatonin administration to the pinealectomized rats appears to restore the airway inflammation, which further supports the pro-inflammatory effect of melatonin. In addition, up-regulation of the gene expression of transforming growth factor-β (TGF-β), macrophage colony-stimulating factor (M-CSF), TNF-α, and stem cell factor (SCF) in peritoneal exudate cells, and up-regulation of the gene expression of IL-1β, M-CSF, TNF-α, interferon-γ (IFN-γ), and SCF in splenocytes, were observed in male C57 mice that received 10 consecutive daily intraperitoneal injections of melatonin [12]. Further research should be directed at evaluating the mechanism by which melatonin regulates the transcription of these cytokines.

Conclusion

Melatonin alone did not change eotaxin and RANTES protein levels and gene expression in A549 cells, and had no effect on the ECA released by A549 cells. However, when A549 cells were stimulated with melatonin together with TNF-α, the mRNA expression and protein release of eotaxin and RANTES increased significantly.
This result suggests that, in combination with pro-inflammatory cytokines, melatonin may play a role in airway inflammation through up-regulation of eotaxin and RANTES expression in lung epithelial cells.
A Novel Skin Closure Technique for the Management of Lacerations in Thin-Skinned Individuals

Suturing thin, fragile skin, particularly in elderly patients, is often problematic and presents a challenge to many clinicians. We describe a novel technique that reinforces the edges of such thin, fragile skin with the use of the topical skin adhesive 2-octyl cyanoacrylate (Dermabond™; Ethicon, Somerville, NJ). This allows secure suture placement and application of tension to facilitate wound closure.

Introduction

Epidermal and dermal atrophy, as well as decreased collagen content, are often a result of ageing and sun damage, which results in skin fragility. Such skin is prone to tearing, and subsequent wound closure is often complicated by the suture cutting through the tissue [1]. Steri-Strips™ (3M, St. Paul, MN) alone are often not strong enough and, with increased wound tension, can place traction on the skin surface, resulting in blistering of the skin as the epidermis is sheared off [2]. Novel techniques have involved combining sutures and Steri-Strips™ to prevent 'cheese-wiring' of the skin [3]. Alternatives to Steri-Strips™, such as adhesive strips or polyethylene films, have also been described [4,5]. One drawback of such techniques is the need to remove the sutures and adhesive strips, which may further damage the fragile skin. We describe a novel approach for the suturing of thin, fragile skin. Suturing such skin, particularly in elderly patients, is often problematic and presents a challenge to many clinicians. Sutures tend to "cheese-wire" when even a minimal amount of tension is applied across the wound. We suggest a technique that reinforces the edges of such thin, fragile skin, allowing secure suture placement and application of tension to facilitate wound closure.

Case Presentation

We present the case of an 83-year-old lady who presented to our trauma clinic with an 18 × 3 cm laceration to the dorsum of her right forearm (Figure 1).
FIGURE 1: Laceration on dorsum of right forearm requiring suturing, with surrounding ecchymosis

This occurred following a mechanical fall at home. The patient was transferred to the minor operating theatre for wound closure under local anaesthetic. The wound was irrigated prior to definitive closure. The degloving injury involved skin and subcutaneous fat; the underlying fascia was intact. Figure 2 demonstrates the thin, fragile nature of the patient's skin due to age-related atrophy of the epidermis and dermis.

FIGURE 2: Degloving injury with dermal and epidermal atrophy

The topical skin adhesive 2-octyl cyanoacrylate (Dermabond™; Ethicon, Somerville, NJ) is applied around the perimeter of the wound to increase strength and reinforce the wound edge (Figure 3).

FIGURE 3: Yellow hatched line demonstrating area of skin adhesive application

Care must be taken to avoid allowing adhesive into the wound bed itself. The adhesive is allowed to dry completely (approximately two minutes). Sutures (simple or mattress) can then be placed through the adhesive/skin layer in one bite (Figure 4).

FIGURE 4: Wound sutured post-application of topical adhesive

The skin adhesive reinforces the skin edge and prevents the suture from cutting through the skin. A significant amount of tension can therefore be applied to the suture to appose the wound edges and facilitate wound closure.

Discussion

This technique allows full visualization of the skin edges, enabling the user to check for skin edge apposition along the length of the wound. The topical skin adhesive is absorbable (approximately five to seven days) and does not require removal. To date, we have not experienced any traction blistering, and this technique does not interfere with wound healing. This simple and versatile novel technique is applicable to all parts of the body with thin or poor-quality skin, helping to reduce complications and further morbidity.
Conclusions

Suturing of thin, fragile skin is challenging, with sutures often pulling through the tissues. Use of the topical skin adhesive 2-octyl cyanoacrylate (Dermabond™) reinforces the skin edges, thus allowing suture placement and wound closure.

Additional Information

Disclosures

Human subjects: Consent was obtained from all participants in this study. Mater Misericordiae Hospital Ethics Committee issued approval n/a. Approval was granted by the ethics committee. Conflicts of interest: In compliance with the ICMJE uniform disclosure form, all authors declare the following: Payment/services info: All authors have declared that no financial support was received from any organization for the submitted work. Financial relationships: All authors have declared that they have no financial relationships at present or within the previous three years with any organizations that might have an interest in the submitted work. Other relationships: All authors have declared that there are no other relationships or activities that could appear to have influenced the submitted work.
Antimicrobial Studies and Characterization of Copper Surfactants Derived from Various Oils Treated at High Temperatures by P.D.A. Technique

Biologically potent compounds are one of the most important classes of materials for the coming generations. The increasing number of microbial infectious diseases and resistant pathogens creates a demand and an urgency to develop novel, potent, safe, and improved antimicrobial agents. This sets a task for current chemistry: to synthesize compounds that show promising activity as therapeutic agents with lower toxicity. Therefore, substantial research is needed for their discovery and improvement. The chemistry of the present era aims to build a pollution-free environment, and to that end it targets the creation of alternatives which are eco-friendly. The present research work is a step towards achieving such alternatives.

INTRODUCTION

Surface-active agents are very useful in biological systems and play an important role in many industrial processes [1]. Exact information about the micellar features of copper(II) surfactants plays a vital role in their selection for applications such as foaming, wetting, detergents, emulsifiers, herbicides, pesticides, paints, varnishes, wood preservatives, lubricants, etc. [2]. Anionic soaps containing copper ions play a vital role in fields such as the rubber industry, paints, varnishes, lubrication, protection of crops, stabilization of nylon threads, preservation of wood, etc. [3]. In spite of all these applications, copper surfactants derived from various edible oils have not been thoroughly investigated. Many copper complexes are found to have significant anti-tubercular, fungicidal, and antitumor activities [4]. Several workers have described the uses of copper soaps as stabilizers for nylon threads, synthetic polyamides, and polyesters [5,6]. The protection of fabrics, nets, cordage, etc. from fungi and decay by impregnating them in an ammoniacal solution of copper soaps was described by
several workers [7,8]. The effectiveness of copper soaps as fungicides, bactericides, insecticides, and herbicides has also been studied. Recent developments in metallic-soap preservation show that zinc and copper naphthenates, exhibiting no specific fungal weakness, can be used to prevent attack by wood-boring insects. The use of copper soaps as driers for the preparation of paints, varnishes, and other protective coatings has also been reported, and it was observed that the addition of copper soaps to fuel oil reduces the smoke and fumes of the burning oil [9]. The use of copper linoleate as a heavy-duty wood preservative and many other biological activities of copper-containing surfactants have also been studied [10]. These facts led us to synthesize copper soaps of sesame and soyabean oils (fresh and treated at high temperature for different times) and to plan a study of their fungicidal activities in order to explore their applications.

Synthesis

Soyabean and sesame oils are easily available in India and were chosen for the investigation. Their compositions are recorded in Table 1. Three samples of each oil were prepared: fresh (untreated), oil treated at high temperature for 15 minutes, and oil treated for 60 minutes. Copper soaps were prepared by the direct metathesis process as reported earlier [11], and characterization was carried out using elemental analysis, UV, and IR methods.

Determination of Molecular Weight of Copper Soaps

Molecular weights of the Cu(II) soaps were determined from the saponification equivalent (S.E.) [12]. The values of the saponification value and the molecular weights are recorded in Table 2.
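The standard relation between saponification value and molecular weight can be sketched as follows (SV in mg KOH per gram of fat; 56.1 g/mol is the molar mass of KOH). The SV used below is an illustrative number, not one of the paper's measured values.

```python
KOH_MG_PER_MOL = 56.1 * 1000  # mg of KOH per mole

def saponification_equivalent(sv):
    """Equivalent weight (g per equivalent of saponifiable acid) from SV in mg KOH/g."""
    return KOH_MG_PER_MOL / sv

def oil_molecular_weight(sv):
    """Approximate MW of a triglyceride oil: three ester groups per molecule."""
    return 3 * saponification_equivalent(sv)

# Illustrative saponification value of 190 mg KOH/g:
print(saponification_equivalent(190))  # about 295 g/equivalent
print(oil_molecular_weight(190))       # about 886 g/mol
```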
Autoxidation

The literature independently suggested that the first reaction is between molecular oxygen and an ethylenic bond, with formation of a peroxide which, like hydrogen peroxide, is capable of oxidizing other compounds. It was further suggested that the initially formed peroxide changes by intramolecular rearrangement to a tautomeric enediol-ketohydroxide system. The moloxide was believed to be the primary product of the reaction, rearranging to the peroxide, and it was confirmed that the primary products of autoxidation of non-conjugated unsaturated acids or esters are hydroperoxides in which the double bond remains intact [13].

Thermal Polymerization

When the esters of di- and tri-ethenoid acids are heated above 200 °C, they undergo certain changes. Studies of the thermal polymerization of non-conjugated and conjugated octadecadienoates concluded that the first step in the polymerization of the non-conjugated dienes is isomerization to the conjugated esters, and that after configurational change to the trans-trans diene, this enters into a Diels-Alder condensation with a conjugated or non-conjugated diene, preferably the latter since it is present in greater proportion [14]. As a result, the average molecular weight of the oil changes after heating. This is supported by several workers' finding that deterioration during frying is greater in oils containing more polyunsaturated fatty acids.

Electronic Absorption Spectra

In order to confirm the formation of the copper soaps derived from these oils, the electronic absorption spectra were recorded on a Perkin-Elmer Lambda-28 spectrophotometer.

Infrared Spectral Analysis

To study the structure of the copper soaps derived from the oils, the infrared spectra of the compounds in the present study were recorded in KBr discs using a Perkin-Elmer infrared spectrometer. The IR absorption peaks are given in Table 3.
Fungicidal Activities

The fungicidal analysis followed the steps suggested by Booth and Hawksworth, as follows.

Sterilization of Glassware

For the biological activity tests, the glassware was thoroughly washed and cleaned with chromic acid, then washed with distilled water and kept in a hot-air oven at 160 °C for 24 h. All operations concerning inoculation were carried out in a completely sterilized chamber.

Inoculation

The artificial introduction of a micro-organism into a medium is called inoculation. It is the most fundamental technique for studying the growth characteristics of micro-organisms and for the transfer and maintenance of cultures under aseptic conditions.

Preparation of Slants

Agar slants were prepared to inoculate the microbial cultures. To prepare an agar slant, the required number of culture tubes was taken and about 12 to 15 ml of liquefied agar medium was poured into each of them. The tubes were then cotton-plugged and sterilized in an autoclave. After sterilization, the tubes were taken out and placed in a slanting (sloping) position for some time; the tubes cooled and the medium in them solidified, resulting in a sloped surface.

Culture Media Used

In preparing a culture medium for any micro-organism, the primary goal is to provide a balanced mixture of nutrients that will permit good growth. Additionally, culturing micro-organisms requires careful control of various environmental factors, which are normally maintained within narrow limits by the culture media.

Preparation of PDA

Potato Dextrose Agar (PDA) and Potato Dextrose Broth (PDB) are common microbiological media made from potato infusion and dextrose (corn sugar); PDA was prepared by the method reported earlier [15].

Preparation of Sample Solutions

Calculated amounts of the copper surfactants derived from sesame and soyabean oils were weighed into standard flasks, and solutions of 10³ and 10⁴ ppm concentration were prepared by the serial dilution method.
Test Organisms

The test organisms were Alternaria alternata and Aspergillus niger, which were cultured and isolated from their natural habitat and identified morphologically.

Fungicidal Testing

The fungicidal testing procedure was exactly the same as reported by Sharma et al. [16]. The data were statistically analyzed according to the following formula [17]:

% Inhibition = ((C − T) / C) × 100 ... (2)

where C is the total area of the fungal colony in the plate without copper surfactant after 2 days, and T is the total area of the fungal colony in the plate with copper surfactant after 2 days.

Electronic Absorption Spectra

The spectra give information concerning copper-ligand binding. The electronic absorption spectrum of the copper sesame soap shows one broad band at about 670-680 nm (14925-14706 cm⁻¹) and a sharp band at about 280 nm (35714 cm⁻¹). The broad band may be attributed to ²Eg → ²T2g transitions; together with the MLCT absorption bands, this confirms the formation of the copper sesame soap and suggests a distorted octahedral stereochemistry around the metal ion. Absorption peaks at λ < 300 nm belong to π→π* or n→π* orbital transitions of the ligand [18,19].

IR Spectra

The detailed infrared absorption spectral studies reveal a marked difference between the spectra of the oils and those of the corresponding copper soaps. In the IR spectrum of sesame oil, three distinct bands appear at 3008, 2925, and 2854 cm⁻¹ due to =C-H stretching, -C-H symmetric, and -C-H antisymmetric stretching vibrations, respectively. Apart from these, the oils show the characteristic absorption bands of esters (oils being esters of long-chain fatty acids) [20]. In the IR spectrum of the oil, two bands are observed at 1745 and 1164 cm⁻¹.
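The inhibition formula above is a one-liner in code; the colony areas below are hypothetical values for illustration only.

```python
def percent_inhibition(control_area, treated_area):
    """Growth inhibition from colony areas: ((C - T) / C) * 100."""
    return (control_area - treated_area) * 100.0 / control_area

# Hypothetical colony areas (mm^2) after 2 days of incubation.
print(percent_inhibition(100.0, 40.0))  # 60.0 % inhibition
```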
These bands may be assigned to the C=O stretching and C-O stretching vibrations of the ester group. In the spectra of the copper soaps, strong bands in the region 2970-2840 cm⁻¹ are due to C-H symmetric and antisymmetric stretching vibrations of the methyl and methylene groups. There is complete disappearance of the characteristic ester bands in the spectra of the soap molecules and appearance of two new absorption bands, in the region 1580-1610 cm⁻¹ (antisymmetric stretching vibration of the carboxylate ion) and 1380-1400 cm⁻¹ (symmetric stretching vibration of the carboxylate ion). The absence of the C=O band in the IR spectra of the soaps shows that there is resonance between the two C-O bonds of the carboxylate group [21,22]. A number of progressive bands are observed for both oils and soaps in the region 1300-1120 cm⁻¹. Such progressive bands of medium or weak intensity are assigned to the wagging and twisting vibrations of the chain of successive methylene groups of the soap molecule. Weak bands in the region 725-710 cm⁻¹ are probably due to methylene rocking vibrations of the straight carbon chain -(CH₂)-. The bands in the region 750-450 cm⁻¹ in the infrared spectra of these soaps are due to metal-oxygen bond stretching vibrations; these are characteristic absorptions of the metal constituent of each soap molecule [23]. In the IR spectrum of CSe 60, bands are also present at about 3500 cm⁻¹ (very weak), 1750 cm⁻¹ (strong), 1625 cm⁻¹ (weak), and 1100 cm⁻¹ (weak). The appearance of these bands may be due to the formation of various autoxidized products such as enediols, ketohydroxides, or carbonyl degradation products. In the IR spectra of CSo 15 and CSo 60, bands are present at 3450 cm⁻¹ and 1745 cm⁻¹, which indicates the formation of keto-hydroxide during the autoxidation reactions.
Fungicidal Activities

The copper soaps derived from the untreated and treated oils were screened for antifungal activity against Alternaria alternata and Aspergillus niger at 1000 ppm and 10000 ppm by the agar-plate technique [24]. The copper soaps showed moderate activity against both fungi. A perusal of Figure 1 reveals that all the copper soaps have significant fungitoxicity at 10000 ppm, but their toxicity decreases markedly on dilution (at 1000 ppm). It is apparent that their efficiency increases with concentration; thus, it is evident that concentration plays a vital role in increasing the degree of inhibition [25,26]. The fungicidal screening data revealed that at the lower concentration the inhibition of growth is less than at the higher concentration. From a comparison of the results for both fungi, it is found that all the copper soaps are more potent (more toxic) against Aspergillus niger than against Alternaria alternata, i.e., the inhibition of growth is higher for Aspergillus niger than for Alternaria alternata (Figures 2, 3). The results reveal that CSe is the least fungitoxic (lowest % inhibition) whereas CSo is the most toxic against both fungi. The activity (toxicity) of the copper soaps derived from the untreated oils increases in the order: CSo > CSe. For the copper soaps derived from the oils treated for 15 and 60 minutes, the results are the same as for the soaps derived from the untreated oils: CSe 15 and CSe 60 are the least active, and CSo 15 and CSo 60 are the most active, against both fungi. From a comparison of the copper soaps derived from the untreated and treated samples of each oil, it is found that fungitoxicity increases with the time of heating of the oil. All tests were performed in triplicate; the standard deviation was measured by the conventional measure of repeatability, and the average was taken as the final reading. The results of the ANOVA for the antifungal activities of all the soap complexes are shown in Table 4 [27,28]. The predicted R² values are in reasonable agreement and close to 1.0 [29,30], which confirms that the experimental data are satisfactory. The descriptive statistics for the Cu(II) soaps, shown in Tables 5 and 6, confirm satisfactory results in triplicate. The results are statistically significant, by the standards of the study, since the p-values fall below the level of significance.
CONCLUSION

From the comparison between the IR spectra of the Cu(II) soaps of the untreated oils and those of the treated oils, it is found that there is no band in the 3200-3600 cm⁻¹ region for the Cu(II) soaps derived from the untreated oils. In the IR spectra of the copper soaps derived from the treated oils, however, there are bands at about 3400-3500 cm⁻¹, 1750 cm⁻¹, 1625 cm⁻¹, and 1100 cm⁻¹. These bands may be due to the formation of various autoxidized products such as enediols, keto-hydroxides, or carbonyl degradation products. The antifungal activities of the copper soaps derived from the edible oils were evaluated by testing them against Alternaria alternata and Aspergillus niger at different concentrations by the agar-plate technique. The copper soaps derived from the oils treated for the longer period show the maximum activity (inhibition of growth) against both fungi; soaps derived from oils treated for the shorter period show lesser activity, and soaps derived from the untreated oils show the minimum activity against both fungi. The activity (inhibition of growth) also increases with increasing soap concentration. These studies suggest that used oils available in the Indian market can serve as fungicidal, pesticidal, or herbicidal agents, as they show positive results.

Table 4. ANOVA results for the antifungal activities of the Cu(II) soaps. MS = mean square, df = degrees of freedom, p < F (level of significance).
Development of 5-E Learning Cycle-Based Simple Chemistry Practicum Guideline Module for Eleventh Grade

Article Info: Received: September 22, 2021; Revised: January 5, 2022; Accepted: January 10, 2022; Published: January 31, 2022.

Abstract: Chemistry practicum activity in schools has become impossible due to the COVID-19 pandemic, whereas practicum is very important for training soft skills and understanding of chemistry materials. This research aimed to develop a 5-E learning cycle-based simple chemistry practicum guideline module for students of class XI. The development model used was the 4-D development model. The quality of the product was assessed by one material expert, one media expert, and five reviewers (high-school chemistry teachers), and responded to by ten high-school students. The instruments used in the research were product quality assessment sheets using a Likert scale and student response sheets, in the form of a questionnaire, using the Guttman scale. The developed practicum module covers hydrocarbons, thermochemistry, reaction rates, and chemical equilibrium, uses simple tools and chemicals, and is combined with the 5-E learning cycle model. The quality assessments of the practicum module by the material expert, the media expert, and the reviewers yielded 88.89%, 95.83%, and 94.00%, respectively; all assessments earned a Very Good rating. From these results, it can be concluded that the developed product is feasible for use as a practicum guideline.

Introduction

The development of soft skills is the master key for dealing with the changes of the fourth industrial revolution era (Samad, 2020). In this era, information technology has become the main basis of humans' daily lives (Yuliati & Saputra, 2019). The progress of science and technology automatically makes the competition over human competencies tighter (Gotama, 2018).
Based on the results of studies by the Stanford Research Institute and the Carnegie Mellon Foundation, it was shown that 75% of long-term job success depends on soft skills and 25% is determined by hard skills (Rashidi, et al., 2013). Here, the soft skills include the ability to think critically, creativity, communication, and collaboration (Redhana, 2019). However, the reality on the ground shows that education in Indonesia is more oriented towards the cultivation of hard skills and has not been much oriented to the formation of soft skills (Wisetya & Ismara, 2018). As a result, students have not been able to develop their soft skills optimally. The development of students' soft skills can be optimized through learning with a scientific approach (Redhana, 2019). The scientific approach has several characteristics: student-centered learning, involving science process skills in constructing concepts, stimulating the development of intelligence (thinking skills), and being able to develop students' character (Mulyati, 2020). Learning with a scientific approach can be applied to science subjects, one of which is chemistry (Widyasti, et al., 2020). Chemistry has two inseparable main aspects: chemistry as a product (chemical knowledge in the form of facts, concepts, principles, laws, and theories), i.e. scientific findings, and chemistry as a process (scientific work) (Faizan, 2020). This does not rule out the possibility of difficulties for students in participating in chemistry learning. According to the results of interviews with high-school students in Yogyakarta, the majority of chemistry materials contain abstract concepts, which leaves students experiencing difficulties in understanding them. One of the learning methods that can be applied to support students' understanding of chemistry is the practicum (Suryaningsih, 2017). A practicum is an experiment-based teaching and learning activity (Rahmawati & Khamidinal, 2019).
Experiment-based learning directs students to experiential learning (learning based on concrete experience). Thus, students have the opportunity to find out and prove for themselves the theories that have been studied (Suryaningsih, 2017). In addition, students can also develop soft skills, which include observation, data analysis, problem solving, teamwork, and communication skills (Amarlita, 2019). In accordance with the research by Sari and Mauliza (2020), not all teachers implement practical activities in the learning process: the instruments and chemicals are quite expensive, and the unavailability of laboratory assistants is an obstacle to implementing practicum in schools. In addition, during the current COVID-19 pandemic, it is also not possible to carry out practicum (Sugiharti & Sugandi, 2020). Meanwhile, to deliver the majority of chemistry materials, especially in class XI, practical activities are required. A simple chemistry practicum can be used as an alternative to overcome the obstacles to carrying out practicum in the laboratory, especially during the COVID-19 pandemic (Hendriyani & Novi, 2020). This is because the instruments and materials used to perform the simple practicum come from the surroundings, so that practicum activity can still be implemented even though it is not performed in the laboratory. Learning activity using a simple practicum is considered able to provide good motivation and to foster students' interest in learning. Eventually, students can gain thorough understanding when studying chemistry (Baunsele, et al., 2020). Furthermore, practicum guidelines are required so that the practicum activity can run smoothly and effectively (Yuniar, et al., 2019), and so that students' practicum activities can achieve the intended competencies (Amarlita, 2019).
However, based on the results of interviews with high school chemistry teachers in Yogyakarta, the available practicum guideline module does not use simple tools and materials and has not been able to develop students' soft skills, in particular problem-solving and creative-thinking skills. One effort that can be made to solve these problems is to integrate the practicum guideline module with a learning model that can develop students' soft skills (Khairunnufus, et al., 2018). One learning model that can be applied is the 5-E learning cycle model (Adriyani & Purwanti, 2018). The 5-E learning cycle model allows active learning and develops students' abilities to communicate, to relate various science topics, and to apply complex concepts (Pambudi et al., 2016). The 5-E learning cycle model can also improve students' science process skills (Adriyani & Purwanti, 2018).

Nevertheless, based on the literature study that has been carried out, there has not been much research on the development of 5-E learning cycle-based practicum guideline modules, especially for class XI chemistry. Previous research conducted by Utami (2019) addressed a similar topic but was limited to alkane-derivative carbon compounds for class XII, and research conducted by Setyowati (2016) was limited to class X materials. Therefore, a 5-E learning cycle-based simple chemistry practicum guideline module for the eleventh grade needs to be developed. This research aims to develop a simple chemistry practicum guideline module based on the 5-E learning cycle for class XI and to determine the quality of the product and the response of students toward it. Hopefully, the practicum guideline module can be used as a guide when performing practicums and can help develop students' soft skills.

Method

This research is development research (R&D) using a 4-D development model (define, design, develop, disseminate).
However, this research was limited to the development stage. The developed product was assessed by one media expert, one material expert, and five reviewers (high school chemistry teachers), and was also responded to by ten eleventh-grade students majoring in mathematics and natural science. The instruments used in this research included product validation sheets, product quality assessment sheets, and student response sheets. The assessment of product quality was conducted using a Likert-scale questionnaire, while student responses were obtained through a Guttman-scale questionnaire.

For the product quality assessment, the qualitative assessments were converted into quantitative scores using the Likert scale. The obtained scores were then averaged, both overall and for each aspect of the assessment, and the average was converted into a qualitative value according to the ideal assessment categories shown in Table 1. Student responses were analyzed by converting the qualitative data into scores using the Guttman scale; these scores were likewise averaged overall and per assessment aspect.

Define

The first stage is the define stage, which consists of needs analysis and curriculum analysis. According to the needs analysis, the majority of chemistry materials contain abstract concepts, so practicum activity is necessary to support students' understanding. However, not all teachers carried out practicum activities in the learning process due to various obstacles, including equipment and chemicals that are quite expensive, the absence of an available laboratory assistant, and busy schedules that led teachers to put more emphasis on delivering materials.
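The score conversion described above (raw Likert scores averaged and expressed as an "ideal percentage" of the maximum score) can be sketched in a few lines of Python. Since Table 1 is not reproduced here, the category cutoffs below are illustrative assumptions, not the study's actual thresholds; a 5-point Likert scale is also assumed.

```python
# Sketch of the Likert-based product quality scoring described above.
# Assumptions: a 5-point scale; illustrative category cutoffs
# (the actual cutoffs are in the study's Table 1, not shown here).

def ideal_percentage(scores, max_score=5):
    """Average the raw Likert item scores and express the mean as a
    percentage of the maximum attainable score."""
    mean = sum(scores) / len(scores)
    return mean / max_score * 100

def category(pct):
    """Hypothetical ideal-assessment categories (cutoffs assumed)."""
    if pct > 80:
        return "Very Good"
    if pct > 60:
        return "Good"
    if pct > 40:
        return "Fair"
    return "Poor"

# Example: nine items scored by an expert (illustrative data).
scores = [5, 4, 5, 4, 4, 5, 4, 5, 4]
pct = ideal_percentage(scores)
print(round(pct, 2), category(pct))  # 88.89 Very Good
```

Note that the illustrative scores above happen to reproduce an 88.89% ideal percentage; the Guttman-scale responses ("Yes"/"No") would be handled the same way with `max_score=1` after reverse-scoring the negative statements.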
In addition, the available practicum guideline module has not been able to develop students' soft skills, which include problem-solving and creative-thinking abilities. Meanwhile, based on the curriculum analysis, the material used in the practicum guideline module covers hydrocarbon compounds, thermochemistry, reaction rates, and chemical equilibrium. Practicum activities on these materials can be performed simply, using tools and materials that are easily found in everyday life.

Design

The opening section contained an introduction, a concept map, the module description, instructions for using the module, lab rules, an introduction to some lab tools, labels for hazard symbols, core competencies, and basic competencies. The cover of the simple chemistry practicum guideline module based on the 5-E learning cycle can be seen in Figure 1.

Figure 1. Cover of Practicum Guideline Module

The introduction to several lab tools was presented in the form of a table containing pictures of the lab tools along with their names and functions. This presentation aimed to increase students' knowledge of various tools and their functions, even though students may never have seen or used them in the laboratory. Meanwhile, labels of hazard symbols were also presented in a table containing descriptions of chemical properties and examples of their compounds. The presentation of hazard symbols aimed to increase students' knowledge and awareness when performing practicum activities in the laboratory and working with chemicals labeled with hazard symbols.

Figure 2. Symbols of Hazardous Chemicals

The content section included seven practicum activities based on the sub-chapters of hydrocarbon compounds, thermochemistry, reaction rates, and chemical equilibrium.
The practicum activities presented contained five learning stages (engagement, exploration, explanation, elaboration, evaluation) arranged to facilitate students in discovering the material concepts. The first stage, engagement (arousing students' interest and curiosity), contained questions and examples of interesting cases/phenomena presented in communicative language so that students were intrigued and eventually performed the practicum activities. An example of the engagement stage in the developed module can be seen in Figure 3.

The second stage, exploration (inquiry), contained the topic-based practicum activity being learned, including objectives, basic theory, tools, chemicals, work methods, observational data, and questions to be discussed. Here, the practicum activity used simple work steps as well as tools and chemicals that are easily found in everyday life. In addition, images of the assembled apparatus were included so that students could easily understand the purpose of the work steps presented. A column containing an invitation to apply affective attitudes was also inserted for the practicum activity; the affective attitudes emphasized included habituation to discipline and working together in groups. The column of affective attitude habituation can be seen in Figure 4.

The third stage is explanation. This stage encouraged students to explicate their work in the practicum activity using their own language. The fourth stage is elaboration (application). It contained interesting cases/phenomena presented in new situations to encourage students to apply the concepts they had acquired. In addition, supplementary references in the form of barcode images were included. The barcode format was chosen to make the references easier for students to access: students only needed to scan the barcode with their smartphones.
These additional references could hopefully increase students' insight in answering the questions at the elaboration stage. The last stage, evaluation (assessment), consisted of exercises and self-reflection. The exercises contained questions set to test students' understanding of the concepts obtained through the practicum activity. The self-reflection comprised an overall evaluation of the practicum activity from beginning to end; an example of the reflection can be seen in Figure 5. In addition, the questions from the engagement stage were brought up again in the evaluation stage. Finally, the closing section contained a bibliography and a note about the author.

Develop

The 5-E learning cycle-based simple chemistry practicum guideline module for class XI was assessed by a material expert. The assessment aspects included content feasibility, language, and the 5-E learning cycle. The results of the assessment by the material expert can be seen in Table 2. Based on Table 2, the quality of the simple chemistry practicum guideline module based on the 5-E learning cycle for class XI is Very Good (VG), with an ideal percentage of 88.89%, and it is feasible according to the material expert. Furthermore, the assessment by the media expert was conducted on two aspects: presentation and graphics. The results of the assessment by the media expert can be seen in Table 3. Based on Table 3, the quality of the module is Very Good (VG), with an ideal percentage of 95.83%, and it is feasible according to the media expert.

The assessment of product quality by the reviewers (high school chemistry teachers) was carried out by filling out a checklist on the assessment questionnaire. The assessment used a Likert scale divided into five assessment aspects.
The five assessment aspects were content feasibility, language, the 5-E learning cycle, presentation, and graphics. The results of the product quality assessments by the reviewers can be seen in Table 4. Based on Table 4, the quality of the 5-E learning cycle-based simple chemistry practicum guideline module for class XI is Very Good (VG), with an ideal percentage of 94.0%, and it is feasible according to the reviewers.

The response of students was acquired by filling out a checklist on a questionnaire consisting of five aspects, namely content, language, presentation, graphics, and the 5-E learning cycle. The student response questionnaire used the Guttman scale with the statements "Yes" or "No". There were 10 indicators, comprising five positive statements and five negative statements. The results of the student responses can be seen in Table 5. Based on Table 5, the ideal percentage obtained is 90%, which falls in the Very Good category. This means that the 5-E learning cycle-based simple chemistry practicum guideline module for class XI obtained a very good response from students. Therefore, it can be concluded that the practicum guideline module that has been developed is feasible to be used as learning media.

Conclusion

According to the assessment by the material expert, the quality of the 5-E learning cycle-based simple chemistry practicum guideline module for class XI attains an ideal percentage of 88.89% in the Very Good category. Based on the assessment by the media expert, it obtains an ideal percentage of 95.83% in the Very Good category, and based on the assessment by the reviewers, it achieves an ideal percentage of 94.00% in the Very Good category. It also received a positive response from students of class XI, with an ideal percentage of 90%.
Therefore, it can be concluded that the 5-E learning cycle-based simple chemistry practicum guideline module for class XI is feasible to be used as learning media for performing chemistry practicum activities in high school. However, the practicum guideline module that has been developed still needs to be tested in class XI chemistry learning, especially in practicum activities. In addition, it is necessary to carry out similar research with different subject matter or other chemistry materials.
Health and commercial relevance of Garcinia species: Key scientometric analyses from three decades of research

Garcinia species (G. indica, G. cambogia, G. kola and G. mangostana) represent some of the most sought-after herbs globally due to their impressive medicinal qualities, hence the ever-growing interest of researchers in these plants. In this study, an extensive bibliometric analysis of the available research outputs on the widely-known Garcinia species was conducted to appraise the progress made and to highlight the future focus of research on these plants. The published articles (original and conference articles) on the selected species from 1991 to 2021 were retrieved from the Scopus® database, scrutinized and further analyzed using the VOSviewer software. Over 2000 research outputs were published, an annual publication rate of about 75 articles, which have altogether garnered almost 37,000 citations within the period under review. Of the 85 country affiliations on the publications, 5 countries, namely India, Thailand, Nigeria, Indonesia, and the United States, have cumulatively contributed two-thirds of the total outputs. The institutions with the most publications, the University of Ibadan (97), Prince of Songkla University (52) and Mahidol University (50), reveal their research focus on these herbs. However, in terms of individual influence, Prof E.O. Farombi of the University of Ibadan led the pack with an impressive 42 publications (1585 citations) on Garcinia kola, followed by Prof Y.W. Chin of Seoul National University, South Korea, with 23 publications (452 citations) on Garcinia mangostana. The versatility in the health applications of these species, especially as sources of new therapeutics, nutraceuticals or functional food ingredients, has been the main driver of the research over the past three decades.
Recent research undertakings have demonstrated the potential industrial uses of the herbs in the clothing and petroleum industries, and these may dominate the research emphases in the immediate future.

Introduction

The genus Garcinia, which belongs to the Clusiaceae family, includes about 400 species that are native to Asia, Africa, America, Australia, Brazil, Polynesia and New Caledonia (1,2). Recently, Garcinia species have received considerable attention worldwide from the scientific as well as industrial sectors, and several potential utilities and novel compounds with diverse bioactivities have been reported. These compounds offer numerous opportunities for pharmaceutical companies in the development of new drug leads. They also represent an excellent source of molecules for producing food additives, functional foods, nutritional products, and nutraceuticals for the growing number of natural products companies. The plants of the genus also have applications in the petroleum industry and other diverse industrial fields (3).

Fruit-yielding Garcinia trees such as mangosteen (G. mangostana L.), brindle berry (G. gummi-gutta (L.) N. Robson, syn. G. cambogia (Gaertn.) Desr.) and kokum (G. indica Choisy) are currently gaining commercial, industrial and medicinal importance (4). Edible fruits and vegetables from these Garcinia species play an important role in providing dietary diversity, food security, nutrition and income generation for local communities and the global economy (5). The presence of hydroxycitric acid (HCA) in some Garcinia species is linked to increased fat oxidation, an anorexigenic effect, and regulation of endogenous lipid biosynthesis (9). Furthermore, gamboge (or camboge) is the exudate from the bark of several Garcinia species and is used as a pigment in Indian murals and European water paintings. Gamboge is also used for colouring wood, leather and metal and for dyeing clothes (10). In South India, G. gummi-gutta and G.
indica are cultivated for the commercial extraction of a variety of products such as bioactive acids, nutraceuticals, fats and condiments. The latter species was recently reported to augment synthetic lubricants for the reduction of friction and the coating of engine parts and surfaces to protect them from wear (11). Its antioxidant and antimicrobial properties have also been reported (12), and, applied in combination with G. cambogia in a novel icing medium, it enhanced the shelf life of mackerel fish (12). G. indica also inhibited the corrosion of mild steel, purportedly due to the presence of cyanidin anthocyanins (13). Furthermore, G. indica oil is in great demand for the preparation of ointments, face creams and lipsticks in the cosmetic industry, while its butter has been utilised as a substitute for cocoa butter (14)(15).

G. mangostana, also known as mangosteen, is commonly utilized as a functional food, and mangosteen-based beverages had a turnover of more than $200 million in 2008 in the USA alone (16). The impressive commercial relevance of the species stems from its wide applicability, ranging from technological and biomedical applications to biomaterial production (17). Xanthones derived from mangosteen have been reported for their wide spectrum of pharmacological and biological properties (18). These properties include, but are not limited to, antibacterial, antiprotozoal, anti-cancer, anti-diabetes, antioxidant and anti-inflammatory activities (19)(20)(21). Some clinical trials have also demonstrated the bioavailability of xanthone-rich mangosteen-based supplements and their potent anti-inflammatory and antioxidant effects (22).

G. kola (bitter kola) is one of the most studied Garcinia species. It is highly valued in Africa and used for hospitality purposes during cultural and social ceremonies, where the seeds are usually eaten in their crude form as a snack. G.
kola is used in African ethnomedicine for prophylactic and therapeutic purposes, especially for inflammatory-related diseases (22). Due to its health benefits, the efficacy and safety of a detox tea containing a mixture of G. kola and other plants (Andrographis paniculata and Psidium guajava) were investigated as adjuvants to conventional therapy for COVID-19 in a pilot randomized trial (24). G. kola contains bioactives such as flavonoids, biflavanones, benzophenone derivatives (kolaflavones, Garcinia-flavones 1 and 2), and chromanols (garcinal and garcinoic acid). Of these, the biflavonoid kolaviron is the most studied and has great potential for clinical use as an antidiabetic agent because it targets multiple abnormalities in the diabetic milieu, specifically by targeting ROS production, bolstering antioxidants and limiting inflammation (25). Pharmaceutical companies from Nigeria and Cameroon have recently focused on the small-scale production of bitter kola syrups and herbal pastes as herbal remedies and food supplements (3).

In this article, a bibliometric study of the dynamics of scientific research on commercially-relevant Garcinia species was conducted. The selection of these species is based on their general popularity and commercial importance (as indicated by their footprint on the World Wide Web) and their scientific importance (as indicated by the number of research publications). This bibliometric study is instrumental in identifying topical hotspots, research strengths and weaknesses, information gaps and top researchers for collaboration. The information will inform research priorities, identify new research areas and promote further commercialization of these herbs.

Materials and Methods

The scientific data (original articles and conference articles/proceedings) on the research on the selected Garcinia species published within the past 30 years was obtained from the Scopus® database on the 1st of February 2022.
Scopus was selected because it is the largest scientific journal indexing and citation database, administered by the academic publisher Elsevier (Amsterdam, Netherlands) (26)(27). The data search in the Scopus database was limited to research publications from 1st January 1991 to 31st December 2021; this time frame was chosen because the use of natural therapeutics and plant-derived products grew prominently during this period. The search command deployed in this study was as follows: TITLE-ABS-KEY ("Garcinia kola" OR "Garcinia indica" OR "Garcinia mangostana" OR "Garcinia cambogia" OR "Garcinia gummi-gutta") AND PUBYEAR > 1990 AND PUBYEAR < 2022 AND (LIMIT-TO (DOCTYPE, "ar") OR LIMIT-TO (DOCTYPE, "cp")). The extracted data were carefully checked for correctness and deduplicated using a Microsoft Excel® spreadsheet (version 2013, Washington, USA). Thereafter, the VOSviewer software (version 1.6.16, Leiden, Netherlands) was employed to establish co-authorship, co-occurrence and co-citation relationships between authors, institutions or countries, as well as the participation of journals in Garcinia research.

Garcinia research publication growth (1991-2021)

Over more than three decades, a total of 2260 original articles and conference publications on the selected Garcinia species (Fig. 2) were indexed in the Scopus database and have garnered a cumulative 36,880 citations. There has been a meteoric rise in research outputs, from just 7 articles published in 1991 to 184 in 2021, an average of about 75 publications per year over the three decades under review. The research activities on the species in the second decade in particular were pivotal, as the global research popularity of medicinal plants as potential preventative medicines for many chronic diseases continued to increase. For example, the top 4 most-cited research articles (28)(29)(30)(31) were published during this period.
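The deduplication and year-by-year tallying described above (performed in Excel in the study) can be sketched in plain Python. The record fields and values below are illustrative stand-ins for a Scopus CSV export, not the actual export schema.

```python
from collections import Counter

# Illustrative records standing in for a Scopus export;
# the field names and values are assumptions, not real data.
records = [
    {"title": "Xanthones from G. mangostana", "doi": "10.1/a", "year": 1991},
    {"title": "Kolaviron and liver injury",   "doi": "10.1/b", "year": 2000},
    {"title": "Kolaviron and liver injury",   "doi": "10.1/b", "year": 2000},  # duplicate
    {"title": "HCA and weight management",    "doi": "10.1/c", "year": 2021},
]

# Deduplicate on DOI (the study performed this check in Excel).
seen, unique = set(), []
for r in records:
    if r["doi"] not in seen:
        seen.add(r["doi"])
        unique.append(r)

# Count publications per year and the average annual rate over
# the 31 publication years of the review window (1991-2021).
per_year = Counter(r["year"] for r in unique)
avg_per_year = len(unique) / (2021 - 1991 + 1)

print(len(unique), per_year[2000])
```

On the real corpus the same tallying yields the growth curve reported above (2260 unique outputs, roughly 75 per year).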
These studies explored the health importance of Garcinia species, especially in relation to their antioxidant, antiglycation, biochemical and enzyme inhibition potentials for the management of many diseases. This upward research trend continued into the last decade, with the year 2020 being the most productive year to date with 205 published articles. The exponential research growth on these Garcinia species over the years indicates continued advances and interest from the international research community as the quest to find novel natural therapies for lifestyle diseases goes on.

Country contribution to Garcinia research (1991-2021)

Altogether, 85 countries participated in research on Garcinia species, as shown by the clearly-defined affiliations on the publications indexed in the Scopus database. Only 10 of these countries published 50 or more research outputs on the subject within this period, with India having the highest number (396), as depicted in Table 1. Thailand (329), Nigeria (301), Indonesia (273) and the United States (212) complete the top 5 countries with the greatest number of publications on Garcinia research, cumulatively representing about 66.9% of the total publications. However, in terms of citations, the publications affiliated with the United States were cited the most, garnering a total of 8119 citations, followed closely by Thailand (7295), India (6020), Nigeria (5093) and Japan (4878). Thus, the United States, Taiwan and Japan may have the highest country-level influence on Garcinia research, stemming from their superior citation-to-publication ratios of 38.3, 37.0 and 33.0 respectively. The research contributions by these countries over the years demonstrate their focus and investment in discovering effective phytotherapeutic strategies for the lifelong diseases that have plagued the human race.
Asian countries are well-known for their age-old belief in herbal medicines; for example, the Kampo (Japan), traditional Chinese (32) and Ayurvedic (India) (33) medical systems have become widely accepted worldwide. Similarly, the folkloric use of herbal medicines for disease management by the indigenous North American population was documented even before the advent of conventional 'orthodox' medicines. In recent times, there has been an upsurge in the popularity of herbal medicines in the United States, which can be associated with the amount of scientific research being carried out in this niche (34). Nigeria's strong influence on Garcinia research in Africa (Table 1) is not surprising, as the country hosts the naturally occurring G. kola tree, whose seed has been consumed as a recreational snack in cultural settings since time immemorial. The plant was also dubbed "a miracle tree" due to its role as a major component of traditional medicinal concoctions used to manage many ailments such as diarrhea, bronchitis and bacterial infection (35).

In this study, VOSviewer was employed to assess the extent of collaboration among the countries actively involved in Garcinia research over the last three decades. The size of the circles (nodes) is a function of the number of publication outputs, with inward and outward links uniting or departing from the node, while the assembly of individual nodes forms a cluster linked together by lines to indicate networks of collaboration and their strengths (44)(45)(46). The thickness of the links shows the strength of the connection between any two nodes in the network (47).

Institutional participation in the research on Garcinia species (1991-2021)

It is important to assess the influence of universities and other research centers on the published outputs on Garcinia species over the last three decades.
The institutions with the highest numbers of publications, citations and most-cited articles within the period under review are presented in Table 2. The University of Ibadan, Nigeria had the most published outputs (n = 97; 22% of the top 10) on Garcinia species, predominantly on the potential pharmacological activities of G. kola. These publications have amassed a total of 2437 citations, averaging at least 25 citations per article. The most cited article (226 citations) from the institution described the possible use of a bioflavonoid compound (kolaviron) isolated from G. kola seed to ameliorate the liver injury caused by 2-acetylaminofluorene, a chemical carcinogen (48). The University of Ibadan was distantly followed by Prince of Songkla University and Mahidol University, both in Thailand, with 52 and 50 publications cited 822 and 2143 times respectively. The latter university, however, had the highest citation-to-publication ratio, posting an impressive minimum of 42 citations per article among the most influential institutions in Garcinia research. Altogether, the top 10 institutions (Table 2) have published almost 20% of the total publications on Garcinia research within the period under review. The domination by the Asian institutions also reflects the emphasis placed by the continent on research on herbs for health, food and other applications.

Author participation and citation

The contributions of the leading authors to the selected Garcinia publications in the last three decades are presented in Table 3. Farombi E.O. had the highest number of publications (42), while Bagchi D was the author with the least number of publications (14) among the top contributors.

Co-authorship analysis

The VOSviewer bibliometric map of the co-authorship network of researchers with at least 5 publications on the selected Garcinia plant research is shown in Fig. 4. Out of 7704 authors, 195 met the threshold of individuals who had co-authored at least 10 publications. Each of these authors was grouped into one of 9 clusters.
Authors in the same cluster may have been grouped together based on similarities in their research interests and collaborations. It is important to point out that, of these 195 authors, only 60 are significantly linked and connected together to form this network visualization. Cluster 1 (red) contains 12 authors, cluster 2 (green) has 11, clusters 3 (blue) and 4 (yellow) have 7 each, clusters 5 (purple) and 6 (teal) contain 6 each, cluster 7 (orange) has 5, and clusters 8 (brown) and 9 (pink) contain 3 authors each.

Table 3. Author participation — top 10 authors with the highest number of publications on selected Garcinia species research ("Effects of a natural extract of (-)-hydroxycitric acid (HCA-SX) and a combination of HCA-SX plus niacin-bound chromium and Gymnema sylvestre extract on weight loss" (61), 140)

Co-citation

In total, the co-citation network of cited authors with a minimum of 150 citations has 83 authors meeting the threshold for co-citing authors. The total link strength of all authors is 211372. The 83 items have been classified into 4 clusters, as presented in Fig. 5. In cluster 1 (red), with 41 items grouped together, Iinuma has the greatest co-citation network, with a total link strength of 17056 and 615 co-citations. In cluster 2 (green), with 22 items grouped together, Wang Y. has the highest total link strength, with 6374 and 374 co-citations. In cluster 3 (blue), with 13 items grouped together, Bagchi D has the highest co-citation strength, with a total link strength of 9539 and 402 co-citations. In cluster 4 (yellow), with 7 grouped items, Farombi has the highest total link strength (TLS) of 7733, with 776 co-citations.

Journal participation in selected Garcinia species research (1991-2021)

A total of 575 journals (sources) have published original research and conference papers on the selected Garcinia species in the last thirty years.
Table 4 presents their contributions in terms of total production (TP), total citations (TC), average citations per publication and impact factor. Of the top 10 journals, Acta Horticulturae is the most prolific outlet used by researchers, with 83 published articles, although it also has the lowest citation score (TC:TP). Among the top 10 journals in Garcinia species research by number of publications, the Journal of Agricultural and Food Chemistry has the highest citation score. The three most cited journals are the Journal of Agricultural and Food Chemistry, the Journal of Ethnopharmacology and Food and Chemical Toxicology, with TC:TP ratios of 89.75, 68 and 66.63 respectively; these journals also have impact factors greater than 4.

Bibliographic coupling of sources provides thematic clusters based on publications that share the same references (67). For bibliographic coupling, the relatedness of documents is relative to the number of references they share (68). Also, the size of the nodes (journals) is proportional to the number of documents published in the journal, while the relative proximity of the nodes and the thickness of the links symbolize their degree of similarity based on the number of references they have in common (69). Fig. 6 shows the bibliographic coupling of journals with at least 10 documents. Of the 1024 different journal sources that have published on Garcinia, only 30 met the threshold and were classified into 3 clusters based on the relatedness of their subjects of interest: cluster 1 (red) with 12 journals, cluster 2 (green) with 12 journals and cluster 3 (blue) with 6 journals, as shown in Fig. 6.

Co-occurrence of author keywords in selected Garcinia species research (1991-2021)

Co-occurrence of keyword analysis is a powerful tool used to identify, describe and visually present the interactions between keywords in a scientific field (Table 6) (70,71).
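The pairwise counting behind such a keyword co-occurrence map can be sketched in a few lines; the article keyword lists and the threshold here are illustrative toy data, not the study's actual corpus.

```python
from collections import Counter
from itertools import combinations

# Author-keyword lists for a few illustrative articles (toy data).
articles = [
    ["garcinia mangostana", "xanthone", "antioxidant"],
    ["garcinia kola", "kolaviron", "antioxidant"],
    ["garcinia mangostana", "antioxidant", "anti-inflammatory"],
]

pairs = Counter()
for kws in articles:
    # Count each unordered keyword pair once per article; sorting
    # gives a canonical key so (a, b) and (b, a) are the same pair.
    for a, b in combinations(sorted(set(kws)), 2):
        pairs[(a, b)] += 1

# A pair's count is the number of articles in which both keywords
# appear together; tools like VOSviewer then keep only pairs/keywords
# above a chosen occurrence threshold before drawing the map.
print(pairs[("antioxidant", "garcinia mangostana")])  # 2
```

In the study itself this counting is done by VOSviewer over 18,744 keywords, with a threshold that leaves 154 keywords on the map.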
This tool analyzes the frequency of co-occurrence of two keywords, i.e., it quantifies the number of articles in which the two words appear together. The network visualization of Garcinia species research was carried out, and of the 18,744 keywords relating to this subject, 154 met the threshold of 50 occurrences. The most commonly used keywords also give direction on where research on this subject is currently gaining ground. For instance, the top 10 keywords in Table 6 show the most frequently used keywords in Garcinia studies, indicating that the genus is mostly studied for its ethnopharmacological importance and as a possible natural remedy for illnesses.

Conclusion

Garcinia species have shown potential to improve a variety of illnesses, including hyperlipidemia, COVID-19, diabetes mellitus and neurodegenerative disorders, and have been reported to be safe for consumption. The species under study (G. indica, G. cambogia, G. kola and G. mangostana) all have a significant impact in the health industry because of their potential to manage a wide variety of diseases. Research undertakings have also demonstrated the potential industrial uses of the herbs in the health, pharmaceutical, clothing and petroleum industries, and these may dominate the research emphasis in the immediate future. However, in terms of the isolation of beneficial compounds, more studies will be needed to maximize the benefits of these species.
TO STUDY MECHANICAL PROPERTIES OF SPRING-LIKE SPECIMEN BEFORE AND AFTER DIFFERENT HEAT TREATMENT OPERATIONS

In this work we have analyzed the effect of heat treatment on the properties of spring-shaped steel specimens under various heat treatment processes. The specimen was subjected to heat treatment in an electric muffle furnace. Heat treatment temperature, soaking time and cooling rate were selected as per the phase diagram of the specimen material. The specimen was tested for mechanical properties before and after heat treatment. Two processes, annealing and normalizing, were compared with respect to their effect on the properties of spring-shaped specimens, in reference to standard data for the steel used.

Introduction

Heat treatment is a process widely used to alter the physical and chemical properties of a material as per requirement. [1][2][3][4][5][6][7][8][9][10][11][12][13][14][15][16][17][18][19] Heat treatment is used to change mechanical properties such as hardness, elasticity, toughness, ductility, plasticity, strength and malleability. Heat treatment typically involves heating a material and then cooling it; common heat treatment processes include annealing, quenching, tempering, case hardening and carburizing. [20][21][22][23][24][25] The present work was planned to study heat treatment processes with reference to their effect on the properties of metal components used in various engineering applications. The main objectives of this work are:
• To study various heat treatment processes.
• To study the properties of a spring-shaped specimen.
• To analyze the effect of heat treatment on the mechanical properties of a spring-shaped specimen.
• To compare mechanical behavior before and after heat treatment.

Materials and Methods

The composition and properties of the materials used and the various methods applied in the present work are as follows. Mild steel rod: a mild steel rod (purchased from the local market) was used to prepare the specimen for the present investigation.
The rod was converted into a spring specimen using the conventional workshop method of forming metal rod rings (turning around a mandrel to the desired dimensions). [26][27][28][29][30]

[2] Annealing: annealing differs from normalizing in terms of cooling rate, temperature range, and holding or cooling mechanism. Here the first step is to heat the sample at a slow rate to the critical temperature range, and then cooling is performed inside the furnace at a specific rate. Heat treatment does not always bring an increase in hardness; in the case of annealing there is a decrease in hardness. This can be explained as follows: annealing is applied to materials that have undergone cold working, casting or quenching during fabrication, and the metal/alloy becomes softer after annealing, which can be attributed to a phase change during the slow heating and cooling that allows enough time for the formation of a phase with reduced hardness, which is favorable for machining of the material.

Tensile test: Here the results obtained for the extension of the specimen under the two different heat treatment processes can be related to the hardness values of the respective samples. An increase in hardness results in a decrease in ductility and vice versa, and as ductility decreases, extension also decreases. Hence, when the normalizing operation is performed, hardness increases after heat treatment, which decreases ductility and finally the extension. A similar argument applies to the specimen that undergoes the annealing operation. Our results are in agreement with theory. [30-37]

Toughness/impact strength: Toughness requires a reasonable amount of ductility in the material, so that the material delays fracture, or in other words deforms before fracturing. As the material loses hardness, it retains some amount of toughness. In the case of the annealing operation there is a decrease in hardness, which on the one hand indicates that the amount of energy absorbed before fracture will increase; on the other hand, toughness also requires strength to withstand the applied load and resist fracture.
A similar argument applies to the normalizing operation. The Charpy test technique was used in the present work. [31,32]

Conclusions

From all the characterizations and the study of the various parameters involved in heat treatment, we conclude that annealing and normalizing have significant and different effects on the properties of alloys. The following conclusions have been drawn:
1) Heat treatment of a spring-shaped mild steel specimen results in a significant variation in mechanical properties.
2) Annealing reduces hardness through the destruction of cementite/pearlite networks during the phase transformation caused by heat treatment. Normalizing results in the formation of martensite and cementite and hence improves hardness.
3) Annealing increases the extension characteristic of the spring structure, while normalizing results in a decrease in extension.
4) Annealing increases toughness in the spring structure, whereas normalizing results in a decrease in toughness.
Annealing and normalizing differ in terms of heating rate, soaking time and cooling rate, which affect the overall phase transformation and hence the properties of the material after heat treatment. One cannot state which heat treatment operation brings an overall improvement in properties, as both processes and their experimental data have their own significance: annealing provides better machinability, while normalizing favors strength-oriented applications of the material in production.

[4] Rapid Stress Relief and Tempering: Process description by Mario GRENIER and Roger GINGRAS.
'Walking in the light' and the missio Dei: Perspectives from the Anglican Church of Kenya

This article elucidates the concept of walking in the light in the East African Revival Movement (EARM), in the Anglican Church of Kenya and in the missio Dei. Though the EARM initially infiltrated the Anglican Church in East Africa in the early 20th century, it nevertheless later attracted other mission churches like the Methodist and Presbyterian churches. The movement brought together a significant number of adherents from all these churches. This study investigates the EARM in the Anglican Church, viewing the relationship between the two as that of a daughter to a mother. Thus, the term 'EARM' is not a synonym for either the Anglican Church or any other but represents a fellowship that cuts across the Protestant mission churches in the East African region. This article focuses mainly on Kenya, which is one of the East African countries1 adversely influenced by the EARM. In Kenya, the movement is commonly referred to as 'Brethren Fellowship'.
Introduction

This article elucidates the concept of walking in the light in the East African Revival Movement (EARM), in the Anglican Church of Kenya and in the missio Dei. Though the EARM initially infiltrated the Anglican Church in East Africa in the early 20th century, it nevertheless later attracted other mission churches like the Methodist and Presbyterian churches. The movement brought together a significant number of adherents from all these churches. This study investigates the EARM in the Anglican Church, viewing the relationship between the two as that of a daughter to a mother. Thus, the term 'EARM' is not a synonym for either the Anglican Church or any other but represents a fellowship that cuts across the Protestant mission churches in the East African region. This article focuses mainly on Kenya, which is one of the East African countries 1 adversely influenced by the EARM. In Kenya, the movement is commonly referred to as 'Brethren Fellowship'.

To place walking in the light in its proper perspective, a contextual definition and description is necessary before situating it in the missio Dei.

Contextual definition of 'walking in the light'

The phrase 'walking in the light' as used by the members of the EARM refers to a daily sanctification that is attained by a life of daily walking with the Lord and a regular examination of one's heart and repentance (Gitari, n.d.:2). Ward and Wild-Wood (2012:215) seem to concur with Gitari in their statement that walking in the light means being transparent and open with one another. In spite of its somewhat erroneous theological background, walking in the light has had a profound influence on the EARM's socio-ethical belief and practice. To grasp the strength of this force, something more elaborate and descriptive than a definition is required.
Contextual description of 'walking in the light'

Bruner (2012) gives one of the most profound descriptions of the phrase 'walking in the light' from the perspective of the EARM: [T]he Balokole 2 [Brethren] believed that spiritual darkness shrouded sinful secrets, and they worked to bring these secrets to the light. They believed that sins must be exposed through public confession, and

1. The countries that comprise East Africa are Sudan, South Sudan, Eritrea, Ethiopia, Seychelles, Kenya, Uganda, Tanzania, Rwanda and Burundi. Among these, Kenya, Uganda, Tanzania, Rwanda and Burundi are the main setting of the EARM because of their historical contact with the Keswick movement.
2. This term refers to the 'saved ones' in Luganda and is synonymous with 'Brethren', a term widely used in Kenya to allude to members of the EARM.

The East African Revival Movement's (EARM) socio-ethical belief and practice of walking in the light pervades mainstream Protestant churches in Eastern Africa with its emphasis on public confession of sin, which breeds severe relational consequences. Indeed, walking in the light of the EARM has long plagued the Anglican Church of Kenya's participation in the missio Dei, which brings to the fore two categories of Christians, the saved and the unsaved. While walking in the light has been buttressed in the Anglican Church of Kenya, it is critical to recognise that the mission of God ought to be the heartbeat of the EARM's very existence.
Accordingly, this article demonstrates that it is not the church that has a mission, but the Triune God, which challenges the place of walking in the light in the Trinitarian God. This study, therefore, champions practical holiness by positioning walking in the light in the mission of God. As a result, it redefines the EARM's religious identity, illustrated by a proper exposition of scripture, Trinitarian worship, discreet confession of sin and moral legalism that provides for informed evangelism and social responsibility. Gitari (n.d.:2) asserts that Brethren will know that one is living a sanctified life when one attends fellowship meetings regularly, testifies about his trials and temptations, and walks in the light. The weekly fellowship meeting gives each person an opportunity to share with the Brethren the kind of life he has lived since the last fellowship meeting. Because they consider the devil to be always at war with believers, the testimony must include a statement of the temptations one has gone through and the way he has turned to Jesus for victory. This, in a nutshell, is what it means to walk in the light from the Brethren's point of view. If one's lifestyle is to the contrary, one may be declared as no longer saved. Furthermore, Brethren who have nothing to say during a fellowship meeting may cause concern and might even be declared lukewarm or spiritually cold.
As a result, public testimony and confession have become the most enduring phenomenon of the EARM achieved by walking in the light, albeit they are sometimes contentious and confrontational. This phenomenon is controversial because of a tendency to confess past misdeeds that might breed serious relational consequences with the aggrieved member of the community, who hitherto had no information of betrayal until then. It could also prove confrontational. For instance, the Principal of Crowther Hall, Birmingham, utterly unaware of the revival practice, was accosted by an English lady back from Uganda. She asked whether she could be in the light with him, that is, whether he would allow her to point out one or two of his shortcomings, after which he was entirely reinstated to fellowship with her (Barrington-Ward 2012:54). In spite of the fact that this practice of public confession is dying, as noted by Karanja (2012:146), it is still one of the most cherished ways to explicate the principle of a daily walk in the light.
Indeed, 'walking in the light' has become a catchphrase among the members of the EARM. For example, in Kenya, they are often referred to in Kiswahili as watu wa nuru [people of the light]. In fact, a slot has always been given during weekly revival fellowships for members to shed light, that is, to confess to one another the sins of the previous week. The phrase has also been used within the EARM as a way of enlightening each other about coming events. However, the formal statement is what mostly describes the Brethren concerning the practice of a daily walk with the Lord, that is, a life of daily sanctification. Such an experience, in the eyes of the Brethren, invariably describes a saved person. Hooper (2007:87) states that the EARM expects a saved person to daily yield to the Holy Spirit and Christ by faith. This habit of yielding or brokenness in the daily walk with God, in the power of Jesus' cleansing blood and with the mediation of the Holy Spirit, influences an abiding attitude of prayer and crying to God (Hooper 2007:88), 'Abba, Father', in what looks like real communion with God (Rm 8:15f.).

Further, Senyonyi (2013:8) states that the revivalists have a dire need to unmask anything that could prejudice their freedom to share their walk with God. They believe that if they walk in the light as God is in the light, Jesus Christ's blood will cleanse them from all sin (1 Jn 1:7). Indeed, Kariuki (1985:52, 53), one of the early Anglican bishops in Kenya, recalls his interaction with Nsibambi that led to his understanding of Christ as his light and saviour. Indeed, personal light arose out of the belief that revival worked in an individual before it could work in the Brethren fellowship. In consultation with the church, these fellowships sometimes become particularly significant in planning and executing the Brethren's mission among other agendas. The extent to which walking in the light has been buttressed in the mission of God might require unpacking.
Walking in the light in the missio Dei

The concept of missio Dei underpins all the socio-ethical teachings of the Bible as far as the EARM's mission of walking in the light is concerned. Wright (2006:357) argues that the ethical challenge to God's people is twofold. On the one hand, it is to recognise the mission of God as the heartbeat of their very existence and, on the other, to respond in ways that express and facilitate it rather than deny and hinder it. Wright (2006:358) further notes that the Bible's grand narrative is about the mission of God and that it demands appropriate dimensions of ethical response from humanity. Abraham (Gn 22:16-18) serves as a model for the continuing education of his descendants, who must walk in the way of the Lord in righteousness and justice so that God can accomplish the missional purpose of Abraham's election (Wright 2006:358). This is well articulated in Genesis 18:18-19, which expresses a moral agenda for the nations on earth. Wright (2006:359) singles out Sodom as a model of the fallen world and demonstrates God's response to (judgment on) evildoers, those who negate the way of the Lord. Abraham is posited as a model of God's mission, albeit in the context of the wickedness of Sodom (Wright 2006:360). Wright (2006:363) examines the ethical content of the phrases 'the way of the Lord' and 'doing righteousness and justice'. These two protracted phrases anchor ethical expressions in this section as per the teachings and expectations of the Israelites about Yahweh. They also provide insight into understanding the principal theme, walking in the light in the EARM.
On the one hand, Wright (2006:363) argues that the expression 'keeping the way of the Lord' or 'walking in the way of the Lord' was a metaphor used in the Old Testament to contrast with the ways of other gods or the ways of sinners; in this particular case, the way of Yahweh and the way of Sodom. Wright (2006:364, 365) notes that the expression 'walking in the way of the Lord' is mostly used to construe obeying God's command so as to reflect God in human life, that is, doing for your neighbour what God has done for you.

On the other hand, the expression 'righteousness and justice' speaks of conformity to what is right or expected, rightly expressed as social justice, actual things that you do (Wright 2006:365, 367). This missional ethics concept could further be explicated in two ways. Firstly, mission can be seen as an instrument to dispense release to the oppressed. This understanding arises from the belief that the way of the Lord is to provide righteousness and justice to the downtrodden, against the oppressor. While expressing the importance of ethics in God's mission to bless the nations, Wright (2006:368) contends that ethics sandwiches election and mission. This portrays the missional logic of Genesis 18:19 as effected through Abraham's election, which was anticipated to bring out a community dedicated to the ethical reflection of God's character. Secondly, Wright (2006:369) asserts that God's aim to dispense blessings to the nations is tied to God's ethical demands on the people he has created to be the agent of that blessing. This moral imperative has practical dimensions explicated by missional ethics of practical holiness because 'being holy meant living lives of integrity, justice, and compassion in every area of life' (Wright 2006:373).
Walking in the light within a mission of God's framework

Because the people of God are called to be a light to the nations, they ought to walk in the light in the transformed lives of a people. Thus, the problem that is attended to in this context is that walking in the light has led to categorising one group of Christians as saved while the other is not. This has hampered the mission of the church. The Brethren appear to focus more on the outward conformity exemplified by socio-ethical beliefs and practices than on inward compliance achieved through the power of the Holy Spirit. However, a concept of walking in the light is needed that operates within a comprehensive mission framework, which helps the church participate fully in the missio Dei. Indeed, Daugherty (2007:165) argues that if the church's mission is to extend the missio Dei, then it can be nothing short of continuing that embodiment of God in Christ among the people of the world.

However, the missio Dei concept is not primarily an activity of the church but an attribute of the missionary God. Therefore, it is not the church that has the mission but the Triune God. Thus the concept of walking in the light raises a critical question about its place in the Trinitarian God. This is because the Brethren seem to emphasise the centrality of the cross (Christ) while the other members of the Trinity are relegated to the peripheries. Aagaard (1973:13) observes that mission ought to be seen as a movement from God to the world, and the church should be viewed as an instrument for that mission. Indeed, there was no better way for God to exemplify his love for humanity except through the glorious incarnation of his son and our saviour, Jesus Christ. Hence, as partakers of mission in God, Christians are bound to walk in his holiness (in his light).
Because the Bible is about mission (Wright 2006:29), walking with God is in itself walking in the mission of God. Thus, the biblical concept of walking in the light is without a doubt synonymous with walking with God. This idea is postulated in both Testaments. Whereas Genesis 5:24 and 6:9 indicate Enoch and Noah had a righteous walk with God, 2 Peter 3:9 shows that believers are to walk in the light of the Lord's return, given the judgment that is coming to the world. So, the Genesis texts are indicative of the status of the walk, which is in the Spirit and perfect. This suggests that God's mission is a way of life of the people of God. Also, Peter's Epistle text brings to the fore the imperative aspect of God's expectations towards humanity with regard to his mission. The declarations seem to indicate a calling of the people of God to a particular vocation whose characteristics demand a righteous disposition towards God and his mission. This confirms that God has not left his great commission at the mercy of humanity, as he swore to build his missional church (Mt 16:18) (Piper 2001:75). Mission is therefore, as Bosch (1991:390) observes, a movement from God to the world, and the church is a vessel for that mission.
In addition, Wright (2006:22-23) argues that if mission is biblically informed and authenticated, then it should underpin the church's committed participation as God's people, at God's invitation and command, in God's mission within the history of God's world for the redemption of God's creation. Therefore, God is the owner of the mission while the church is a participant at the invitation and command of God. Moreover, the fact that Wright (2006:23) mentions the purpose of God's mission as the redemption of his creation fits well with John Piper's (2001:206) conception of the missionary text in John 10:16, which affirms God's missionary purpose of gathering his sheep, or building his church (Mt 16:18), from all the nations. This resonates with Bosch's (1991:390) argument that the church is an instrument of God's love in the world because he is a fountain of love. If this is the case, the church (through which the EARM operates) ought to champion practical holiness by positioning walking in the light into its right perspective in the mission of God.

Evaluation of the East African Revival Movement's walking in the light against the missio Dei

Thus, specific features that define the EARM's religious identity within a framework of walking in the light help to evaluate the EARM against the missio Dei. These features include, but are not exclusive to, the exposition of scripture, the centrality of Christ, public testimony and legalism.
Exposition of the scripture

The Brethren are ardent readers of the Bible, albeit through thematic devotions. They make little attempt at exegesis and thus lack a theological dimension. This seems to have been a historical problem whereby some founders of Keswick theology, 3 like Robert and Hannah Smith, had little or no theological education and training. Thus, the Smiths' mishandling of Romans 6:6 (Naselli 2010:102) consequently amplified higher life messages of a second blessing, leading to a religious hypocrisy (Pollock 1964:36). Though the EARM did not embrace the teaching of a second blessing, it nevertheless inherited their literal approach to scriptural interpretation, oblivious to the context. The reading of Ephesians 5:14 has been blamed for the split in the EARM (Nthamburi 1991:117), resulting in some members aligning to the Arise and others to the Stand factions.

According to Bosch (1991:426), the historical world is a constitutive element in the understanding of mission and not just a peripheral state for the church's mission. Moreover, Wright (2006:279) argues against spiritualising interpretations, particularly the typological method of relating the Old Testament to the New Testament as if the Old Testament merely foreshadows the New Testament, thus losing its historical significance. The Bible shows that when it is read correctly, it challenges readers to recognise their participatory role in God's mission and to avoid the Pharisaic hypocrisy of religious justification.
Historical-critical scholarship could be a formidable mission tool to help members of the EARM in biblical application to participate fully in God's mission. It is by so doing that we shall agree with Paul's sentiments: 'I, therefore, the prisoner of the Lord, beseech you that you walk worthy of the vocation to which you have been called' (Eph 4:1). This realisation is essential because the success of the church's mission is the Lord's work, done the Lord's way. Indeed, a successful mission of the church must be found at the cross of Christ.

Centrality of the cross

The Brethren's emphasis on the cross of Christ as the basis of their salvation no doubt puts evangelical Christian orthodoxy into its right perspective of proclaiming the gospel and calling the world to repentance and faith. Wright (2006:314) argues that the cross was the inevitable cost of God's whole mission and the unavoidable centre of our mission because all Christian mission flows from the cross. Thus, the centrality of Christ in the salvation of the world provides a critical link for the missio Dei in the Old and New Testaments. Osborn (2000:87) observes that the overriding themes of the revival meetings and Keswick conventions were the messages of sin, repentance and forgiveness by the blood of Christ. Osborn further states that Joe Church and his associates were said to preach only the crucified Christ (2000:87). Senyonyi (2013:4), a Ugandan Brethren scholar, has been specific about the Revivalists' statements about the centrality of Jesus in their preaching and teachings, based on the belief that Jesus paid

3. According to Naselli (2008:29), this phrase denotes 5 days of progressive teaching, commonly referred to as a 'spiritual clinic'. Naselli (2008:29) further contends that this teaching characterised early Keswick conventions, which had a stereotyped sequence.
the price for their sins. Thus, the name Jesus and the cross have been viewed synonymously. The Brethren pray to the Holy Spirit to show them only Jesus because to them real revival is walking with Jesus, victoriously, moment by moment, day by day. However, when this spirituality is viewed from the perspective of missio Dei, it seems to lack balance. Wright (2006:315) claims that the cross must permeate both social and evangelistic engagements.

Although the EARM appears to understand this to the fullest, their application of it tends to lean inwardly towards self rather than outwardly towards those outside their camp. Thus, it tends to fall short of a holistic mission informed by a comprehensive mission of the cross. Bosch (1991:390) also observes that following the Willingen Conference of 1952, mission came to be understood as flowing from the very nature of God, thus Trinitarian. There is no doubt that God affirmed his supremacy in missions by confirming the supremacy of his son, Jesus Christ, as the conscious centre of the church (Piper 2001:133). Thus, the EARM's Christocentric emphasis could be understood in that perspective. However, it becomes a problem when the Trinitarian thrust of mission appears blurred within the revival fellowships that mostly focus on one member of the Trinity. Bosch (1991:390) observes the doctrine of the missio Dei as God the Father sending the Son, the Father and the Son sending the Spirit, and the Father, the Son and the Holy Spirit sending the church into the world. Thus, a movement towards Trinitarian worship and holistic mission needs to be encouraged as a new model in the EARM's theology of mission. This realisation should not only pervade the Brethren's worship pattern but should also be the basis of their public testimony.
Public confession of sins

We have seen that public confession of sins is rooted in both scripture and African cultural practices. In both cases, it has earned its place in the light of setting norms and boundaries against which the law is breached and cleansing and confession are required. It has been a common practice in the EARM to give public testimony or confession. The Brethren believe that by expunging their misdeeds openly, they will clear their conscience not only before God but before humanity as well. Winter (2010:183) observes that it is paramount for people to witness the glory of God in the lives of believers as a reason to turn away from evil to God. Perhaps that could be one of the reasons the Brethren seek to confess their sins openly so that others can see and glorify God.

One of the primary scriptural passages that appear to approve public confession is James 5:16. Whereas it is right to seek the assembly of the saints for confession, the amount of publicity given depends upon how public the sin was. If the sin is publicly known, then to specify it during public confession is a matter of the responsible ethics of a good neighbour. However, as Price (2017:Online) argues, wisdom should be used in declaring the sin, not so much because it might seem disgraceful to tell exactly what the sin was but to spare the sinner unnecessary hardship over a sin he has repudiated. It is a good rule of scripture to say that sin should be explicitly confessed to the extent that knowledge of the sin exists. This could be a real mission emphasis because it handles the complexities that could arise in the church where a public sin goes unacknowledged. Indeed, because human mission has no life of its own, except in the hands of the sending God who is the initiator of missionary enterprises (Bosch 1991:390), acknowledging public sin is a welcome mission factor. If the Brethren could be discreet in handling various sins, informed legalism could help participation in the missio Dei.
Legalism

The members of the EARM display passion for God in their conventions and fellowship meetings as they achieve experiential sanctification in their lives. This practical holiness, blended with Keswick theology, has not only been contextualised in the EARM but has also acquired socio-ethical dimensions. There is nothing wrong with being ethical. However, the moral problem has been viewed from the perspective of creating two categories of Christians in the EARM based on the beliefs and practices of walking in the light, hedged with dos and don'ts. Langley and Kiggins (1974:202) observe that conformity to an accepted pattern of behaviour becomes the gauge for one's religious commitment, and this displaces the gospel of God's love and grace. Thus legalistic tendencies ensue: 4 do not drink; do not smoke; do not wear short skirts; do not take bank loans; do not receive or give a dowry. From this viewpoint, walking in the light is not within the precepts of the mission of God. Bosch (1991:390) claims that human mission has no life of its own, except in the hands of the sending God, who is the initiator of missionary enterprises.

Moral transformation by all definitions is not a problem in itself but in the way it has been applied or misapplied within a framework of community rules of living vis-à-vis the biblical framework of an ethical community. Wright (2006:364) images ethical obedience from the perspective of walking in the ways of the Lord, so as to reflect God in the relationship between a just human life and an ethical community. Therefore, the concept of walking with God is a practice that all godly, loving people ought not only to envy but also strive to achieve. Unfortunately, it seems that the ensuing ethical obedience has created a wedge in the EARM. This ethical and divisive attitude appears to put more emphasis on outward moral conformity as expressed in walking in the light at the expense of the Gospel and mission of Christ.
As earlier stated, the Brethren's moral formation provides for evangelism and social responsibility but falls short of replicating the same to those outside their camp. Unlike the past, when evangelism and Christian social action went together (Langley & Kiggins 1974:201), it is mostly not the case currently, as exemplified by Mfuko ya Bwana 5 - the Lord's Bag (Mambo 1973:115). The Lord's Bag has been exclusively for the Brethren's activities, oblivious of the general need in the church. David Bosch (2011:418) argues for justice to evangelism and social responsibility in dispensing the promises and gifts of the kingdom of God. Thus, there is a need for a paradigm shift from the prevailing socio-ethical informed morality to a gospel-focused mission because, as Wright (2006:390) argues, the mission of the church includes both verbal proclamation and ethical living. We must not conform to the world in any way (Rm 12:1-2); we must be careful: just because the majority of the EARM accept certain ethical behaviours 6 within their camp does not mean that it is right in the light of missio Dei. Indeed, missio Dei requires behaviours that are inclusive - for example, worship styles that allow the use of public musical systems and dancing in the Spirit; flexible ways of giving testimony; a dress code and hairstyles that depict freedom with responsibility; embracing social welfare activities like bank loans and brotherly-sisterly coexistence.

4. See Keswick teachings against indulgence or amusements like beer, theatre, dance, tobacco and questionable employment. Anything done to please the self apart from Christ as master and lord and neighbour in all things lawful was discouraged (Pierson 1907:91, 93, 94).
Further, if a person is not a regular member of Brethren fellowships, he or she may be labelled as not saved because salvation is understood and expressed through walking in the light. From this viewpoint, walking in the light is not within the precepts of the mission of God. Bosch (1991:390) claims that human mission has no life of its own, except in the hands of the sending God, who is the initiator of missionary enterprises.

Conclusion

In this article the EARM has been evaluated against the missio Dei in order to define the movement's religious identity. The article has endeavoured to elucidate the concept of walking in the light and has placed it in the context of missio Dei. It has evaluated the EARM against missio Dei and has established the outstanding features that buttress the EARM.

Further, the article noted that the EARM has not only contextualised much of its inheritance from Keswick theology but also seems to have gone a notch higher in its expression of practical holiness. The socio-ethical beliefs and practices of walking in the light appear to underpin the boundaries within which the saved ones trace their daily walk with God.

Following the above proposed outcomes, the following insights can be learned from this narrative:

1. It will be an exercise in futility if those who achieve practical holiness through the dictums of walking in the light are not faithful participants in missio Dei.

2. It is critical for us to work towards godly driven missionary tasks because the mission of God does not follow us, but we should follow or participate in the mission of God.

3. If we can shelve our self-interest to express salvation within the covenant of grace, then our faithful participation in the missio Dei is conceivable.

4. We should respond to God's call to worship in the truth and in the Spirit in fruitfulness and abundance within the confines of the great commission.

5. A holier-than-thou predisposition should be replaced with a self-effacing or modest conceptualisation of conversion and worship style, which informs the beliefs and practices of our walk in the light.

Footnotes: 5. A fund contributed by the Brethren according to their ability to assist with the organising of conventions and other social needs among the Brethren. 6. Some ethical behaviours include not drinking alcohol, not smoking, not parting your hair, not wearing short skirts, not keeping beards, not borrowing money from the bank, not keeping dogs and not accepting dowry.
A Graph Reduction Step Preserving Element-Connectivity and Applications

Given an undirected graph G = (V, E) and a subset of terminals T ⊆ V, the element-connectivity of two terminals u, v ∈ T is the maximum number of u-v paths that are pairwise disjoint in both edges and non-terminals V \ T (the paths need not be disjoint in terminals). Element-connectivity is more general than edge-connectivity and less general than vertex-connectivity. Hind and Oellermann gave a graph reduction step that preserves the global element-connectivity of the graph. We show that this step also preserves local connectivity, that is, all the pairwise element-connectivities of the terminals. We give two applications of this reduction step to connectivity and network design problems:

1. Given a graph G and disjoint terminal sets T_1, T_2, ..., T_m, we seek a maximum number of element-disjoint Steiner forests where each forest connects each T_i. We prove that if each T_i is k-element-connected then there exist Ω(k/(log h log m)) element-disjoint Steiner forests, where h = |∪_i T_i|. If G is planar (or more generally, has fixed genus), we show that there exist Ω(k) Steiner forests. Our proofs are constructive, giving poly-time algorithms to find these forests; these are the first non-trivial algorithms for packing element-disjoint Steiner Forests.

2. We give a very short and intuitive proof of a spider-decomposition theorem of Chuzhoy and Khanna in the context of the single-sink k-vertex-connectivity problem; this yields a simple and alternative analysis of an O(k log n) approximation.

Our results highlight the effectiveness of the element-connectivity reduction step; we believe it will find more applications in the future.

Introduction

In this paper we consider several connectivity and network design problems. Given an undirected graph G and two nodes u, v we let λ_G(u, v) and κ_G(u, v) denote the edge and vertex connectivities between u and v in G.
It is well-known that edge-connectivity problems are "easier" than their vertex-connectivity counterparts. Vertex-connectivity exhibits less structure than edge-connectivity and this often translates into significant differences in the algorithmic and computational difficulty of the corresponding problems. As an example, consider the well-known survivable network design problem (SNDP): the input consists of an undirected edge-weighted graph G and connectivity requirements r : V × V → Z_+ between each pair of vertices. The goal is to find a min-cost subgraph H of G such that each pair u, v has r(u, v) disjoint paths between them in H. If the paths are required to be edge-disjoint (λ_H(u, v) ≥ r(u, v)) then the problem is referred to as EC-SNDP, and if the paths are required to be vertex-disjoint the problem is referred to as VC-SNDP. Jain [23] gave a 2-approximation for EC-SNDP based on the powerful iterated rounding technique. On the other hand, VC-SNDP is known to be hard to approximate within polynomial factors [28,4]. To address this gap, Jain et al. [25] introduced a connectivity measure intermediate to edge and vertex connectivities known as element-connectivity. The vertices are partitioned into terminals T ⊆ V and non-terminals V \ T. The element-connectivity between two terminals u, v, denoted by κ′_G(u, v), is defined to be the maximum number of paths between u and v that are pairwise disjoint in edges and non-terminals (the paths can share terminals). In some respects, element-connectivity resembles edge-connectivity: for example, κ′(u, w) ≥ min(κ′(u, v), κ′(v, w)) for any three terminals u, v, w; this triangle inequality holds for edge-connectivity but does not for vertex-connectivity. In element-connectivity SNDP (ELC-SNDP) the requirements are only between terminals and the goal is to find a min-cost subgraph H such that κ′_H(u, v) ≥ r(u, v) for each u, v ∈ T.
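The element-connectivity of a terminal pair can be computed by a single max-flow after the standard vertex-splitting construction: each non-terminal is split into an in-copy and an out-copy joined by a unit-capacity arc, while terminals and edges are left uncapacitated. A minimal sketch using networkx; the function name elem_conn is ours, and it assumes (as the paper does after subdivision) that there are no edges between terminals:

```python
import networkx as nx

def elem_conn(G, terminals, s, t):
    """Element-connectivity of terminals s, t: the maximum number of s-t
    paths pairwise disjoint in non-terminals.  Assumes no terminal-terminal
    edges, so edge-disjointness is implied by the unit node capacities."""
    D = nx.DiGraph()
    big = G.number_of_edges() + 1  # effectively infinite capacity
    for v in G.nodes:
        # terminals may be shared by paths; non-terminals may be used once
        D.add_edge((v, 'in'), (v, 'out'),
                   capacity=big if v in terminals else 1)
    for u, v in G.edges:
        D.add_edge((u, 'out'), (v, 'in'), capacity=big)
        D.add_edge((v, 'out'), (u, 'in'), capacity=big)
    return nx.maximum_flow_value(D, (s, 'out'), (t, 'in'))
```

For instance, two terminals joined through two white vertices (with the white vertices also adjacent to each other) have element-connectivity 2: the white-white edge cannot help, since each white vertex may appear on only one path.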
Fleischer, Jain and Williamson [16] (see also [11]) generalized the iterated rounding technique of Jain for EC-SNDP to give a 2-approximation for ELC-SNDP. In other respects, element-connectivity is related to vertex-connectivity. One class of problems motivating this paper is on generalizing the classical theorem of Menger on s-t vertex-connectivity; we discuss this below. In studying element-connectivity, we often assume without loss of generality that there are no edges between terminals (by subdividing each such edge) and hence κ′(u, v) is the maximum number of non-terminal-disjoint u-v paths. Menger's theorem shows that the maximum number of internally vertex-disjoint s-t paths is equal to κ(s, t). Hind and Oellermann [21] considered a natural generalization to multiple terminals: given a terminal set T ⊆ V, what is the maximum number of trees that each contain T and are disjoint in V \ T? The natural upper bound here is the element-connectivity of T in G, in other words, k = min_{u,v ∈ T} κ′(u, v). In [21] a graph reduction step was introduced to answer this question. Cheriyan and Salavatipour [9] called this the problem of packing element-disjoint Steiner trees; crucially using the graph reduction step, they showed that there always exist Ω(k/log |T|) element-disjoint Steiner trees and, moreover, this bound is tight (up to constant factors) in the worst case. In contrast, if we seek edge-disjoint Steiner trees then Lau [32] has shown that if T is 26k-edge-connected in G, there are k edge-disjoint trees each of which spans T. Finally, we remark that in some recent work Chuzhoy and Khanna [12] gave an O(k log |T|) approximation for the special case of VC-SNDP in which a terminal set T needs to be k-vertex-connected (this is equivalent to the single-sink problem). Their algorithm and analysis are based on a structural characterization of feasible solutions - they use element-connectivity (they call it weak connectivity) as a key stepping stone.
Subsequent to this paper, Chuzhoy and Khanna [13] gave a simple and elegant reduction from the general VC-SNDP problem to ELC-SNDP, obtaining an O(k^3 log n)-approximation and reinforcing the connection between element- and vertex-connectivity. The discussion above suggests that it is fruitful to study element-connectivity as a way to generalize edge-connectivity and attack problems on vertex-connectivity. In this paper we consider the graph reduction step for element-connectivity introduced by Hind and Oellermann [21] (and rediscovered by Cheriyan and Salavatipour [9]). We generalize the applicability of the step and demonstrate applications to several problems.

Theorem 1.2 (Mader [35]). Let G = (V ∪ {s}, E) be an undirected multi-graph, where deg(s) ≠ 3 and s is not incident to a cut edge of G. Then s has two neighbours u and v such that the graph G′ obtained from G by replacing su and sv by uv satisfies λ_{G′}(x, y) = λ_G(x, y) for all x, y ∈ V \ {s}.

Generalizations to directed graphs are also known [35,17,26]. The splitting-off theorems have numerous applications in graph theory and combinatorial optimization. See [34,18,31,24,6,32,33,27] for various pointers and applications. Although splitting-off techniques can sometimes be used in the study of vertex-connectivity, their use is limited and no generally applicable theorem akin to Theorem 1.2 is known. On the other hand, Hind and Oellermann [21] proved an elegant theorem on preserving global element-connectivity. In the sequel we use κ′_G(S) to denote min_{u,v ∈ S} κ′_G(u, v) and G/pq to denote the graph obtained from G by contracting vertices p, q.

Theorem 1.3 (Hind & Oellermann [21]). Let G = (V, E) be an undirected graph and T ⊆ V be a terminal-set. Let (p, q) be any edge where p, q ∈ V \ T, and let G_1 = G − pq and G_2 = G/pq. Then κ′_{G_1}(T) = κ′_G(T) or κ′_{G_2}(T) = κ′_G(T).

This theorem has been used in two applications on element-connectivity [9,27]. We generalize it to handle local connectivity, increasing its applicability.

Reduction Lemma. Let G = (V, E) be an undirected graph and T ⊆ V be a terminal-set.
Let (p, q) be any edge where p, q ∈ V \ T, and let G_1 = G − pq and G_2 = G/pq. Then one of the following holds: (i) κ′_{G_1}(u, v) = κ′_G(u, v) for all u, v ∈ T, or (ii) κ′_{G_2}(u, v) = κ′_G(u, v) for all u, v ∈ T.

Remark 1.4. The Reduction Lemma, applied repeatedly, transforms a graph into another graph in which the non-terminals form a stable set. Moreover, the reduced graph is a minor of the original graph.

We give applications of the Reduction Lemma (using additional ideas) to two problems that we had briefly alluded to already. We discuss these below.

Packing Element-Disjoint Steiner Trees and Forests: There has been much interest in the recent past on algorithms for (integer) packing of disjoint Steiner trees in both the edge- and element-connectivity settings [31,24,32,33,8,9,6]. (A Steiner tree is simply a tree containing the entire terminal set T.) See [20] for applications of Steiner tree packing to VLSI design. An outstanding open problem is Kriesell's conjecture, which states that if the terminal set T is 2k-edge-connected then there are k edge-disjoint Steiner trees each of which spans T; this would generalize a classical theorem of Nash-Williams and Tutte on edge-disjoint spanning trees. Lau made substantial progress [32] and proved that 26k-connectivity suffices for k edge-disjoint Steiner trees; he extended his result for packing Steiner forests [33]. We remark that Mader's splitting-off theorem plays an important role in Lau's work. The element-disjoint Steiner tree packing problem was first considered by Hind and Oellermann. As we mentioned, Cheriyan and Salavatipour [9] gave a nearly tight bound for this problem. Their result relies crucially on Theorem 1.3 followed by a simple randomized coloring algorithm whose analysis extends a similar algorithm for computing the domatic number of a graph [15]. In [3] the random coloring idea was shown to apply more generally in the context of packing bases of an arbitrary monotone submodular function; in addition, a derandomization was provided in [3] via the use of min-wise independent permutations.
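The Reduction Lemma immediately suggests a polynomial-time reduction procedure: pick any edge between two non-terminals, try deleting it, and if deletion lowers some pairwise element-connectivity, contract it instead (the lemma guarantees that the contraction is then safe). A sketch under the assumption that there are no terminal-terminal edges; the function and helper names are ours:

```python
import networkx as nx

def elem_conn(G, terminals, s, t):
    # Max-flow with unit capacity on each non-terminal (vertex splitting);
    # valid when there are no edges between terminals.
    D, big = nx.DiGraph(), G.number_of_edges() + 1
    for v in G.nodes:
        D.add_edge((v, 'in'), (v, 'out'),
                   capacity=big if v in terminals else 1)
    for u, v in G.edges:
        D.add_edge((u, 'out'), (v, 'in'), capacity=big)
        D.add_edge((v, 'out'), (u, 'in'), capacity=big)
    return nx.maximum_flow_value(D, (s, 'out'), (t, 'in'))

def reduce_instance(G, terminals):
    """Delete or contract edges between non-terminals until the
    non-terminals form a stable set, preserving all pairwise
    element-connectivities of the terminals."""
    G, T = G.copy(), sorted(terminals)
    pairs = [(u, v) for i, u in enumerate(T) for v in T[i + 1:]]
    while True:
        e = next(((p, q) for p, q in G.edges
                  if p not in terminals and q not in terminals), None)
        if e is None:
            return G  # non-terminals are now a stable set
        before = {uv: elem_conn(G, terminals, *uv) for uv in pairs}
        H = G.copy()
        H.remove_edge(*e)
        if all(elem_conn(H, terminals, *uv) == c for uv, c in before.items()):
            G = H                                    # deletion is safe
        else:                                        # lemma: contraction is safe
            G = nx.contracted_nodes(G, *e, self_loops=False)
```

Each iteration removes an edge or merges two non-terminals, so the procedure terminates after at most |E| steps, each costing O(|T|^2) max-flow computations in this naive form.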
It is also known that the problem of packing element-disjoint Steiner trees is hard to approximate to within an Ω(log n) factor [8]. Here, we consider the more general problem of packing Steiner forests that was posed by [9]. The input consists of a graph G = (V, E) and disjoint terminal sets T_1, T_2, . . . , T_m. What is the maximum number of element-disjoint forests such that in each forest T_i is connected for 1 ≤ i ≤ m? Our local connectivity reduction step is primarily motivated by this question. For general graphs we prove that there exist Ω(k/(log |T| log m)) element-disjoint forests, where T = ∪_i T_i. This can also be viewed as an O(log |T| log m) approximation for the problem. We apply the Reduction Lemma to obtain a graph in which the non-terminals are a stable set. We cannot, however, apply the random coloring approach directly - in fact we can show that it does not work. Instead we decompose the graph into highly connected subgraphs and then apply the random coloring approach in each subgraph separately. We also study the packing problem in planar graphs and graphs of fixed genus, and prove substantially stronger results. Here too, the first step is to use the Reduction Lemma (recall that the reduced graph is a minor of the original graph and hence is also planar). After the reduction step, we employ a very different approach from the one for general graphs. Our main insight is that planarity restricts the ability of non-terminals to provide high element-connectivity to the terminals. We formalize this intuition by showing that there are some two terminals u, v that have Ω(k) parallel edges between them, which allows us to contract them and recurse. Using these ideas, for planar graphs we prove that there exist ⌈k/5⌉ − 1 disjoint forests.
Our method also extends to give an Ω(k) bound for graphs of a fixed genus, and we conjecture that one can find Ω(k) disjoint forests in graphs excluding a fixed minor; we give evidence for this by proving it for packing Steiner trees in graphs of fixed treewidth. Note that these bounds also imply corresponding approximation algorithms for maximizing the number of disjoint forests. These are the first non-trivial bounds for packing element-disjoint Steiner forests in general graphs or planar graphs. Since element-connectivity generalizes edge-connectivity, our bounds in planar graphs are considerably stronger than those given by Lau [32,33] for edge-connectivity. Our proof is simple; however, we remark that the simplicity of the proof comes from thinking about element-connectivity (using the Reduction Lemma) instead of edge-connectivity! Our proof also gives the strong property that the non-terminals in the forests all have degree 2.

Single-Sink k-vertex-connectivity: Polynomial-factor inapproximability results for VC-SNDP [28,4] have focused attention on restricted, yet useful, special cases of the problem. In recent work Chakraborty, Chuzhoy and Khanna [4] considered the single-sink k-vertex-connectivity problem for small k; the goal is to k-vertex-connect a set of terminals T to a given root r. This problem is approximation-equivalent to the subset k-connectivity problem in which T needs to be k-connected [4]. If k = 1, this is the NP-hard Steiner tree problem and a 2-approximation is well-known. For k = 2, a 2-approximation follows from [16], whose algorithm can handle the more general VC-SNDP with requirements in {0, 1, 2}. For k > 2 the first non-trivial approximation algorithm was given in [4]; the approximation ratio was k^{O(k^2)} log^4 n. Improvements were given in [12,5], with Chuzhoy and Khanna [12] achieving the currently best known approximation ratio of O(k log |T|).
The algorithms are essentially the same in [4,12,5] and build upon the insights from [4]; the analysis in [12] relied on a beautiful decomposition result for k-connectivity which is independently interesting from a graph theoretic view point. The proof of this theorem in [12] is long and complicated although it is based on only elementary operations. Using the Reduction Lemma, we give an alternate proof of the main technical result which is only half a page long! We mention that the decomposition theorem has applications to more general network design problems such as the rent-or-buy and buy-at-bulk network design problems as shown in [5]. Due to space constraints we omit these applications in this paper. Related Work: We have already mentioned most of the closely related papers. Our work on packing Steiner forests in planar graphs was inspired by a question by Joseph Cheriyan [7]. Independent of our work, Aazami, Cheriyan and Jampani [1] proved that if a terminal set T is k-element-connected in a planar graph then there exist k/2 − 1 element-disjoint Steiner trees, and moreover this is tight. They also prove that it is NP-hard to obtain a (1/2 + ε) approximation for this problem. Our bound for packing Steiner Trees in planar graphs is slightly weaker than theirs; however, our algorithms and proofs are simple and intuitive, and generalize to packing Steiner forests. Their algorithm uses Theorem 1.3, followed by a reduction to a theorem of Frank et al. [19] that uses Edmonds' matroid partition theorem. One could attempt to pack Steiner forests using their approach (with the stronger Reduction Lemma in place of Theorem 1.3), but the theorem of [19] does not have a natural generalization for Steiner forests. The techniques of both [1] and this paper extend to graphs of small genus or treewidth; we discuss this further in Section 3.2. 
We refer the reader to [4,12,5] for more discussion of recent work on single-sink vertex connectivity, including hardness results [4] and extensions to related problems such as the node-weighted case [12] and buy-at-bulk network design [5]. Nutov [36] has recently given alternate algorithms, based on the primal-dual method, for single-sink vertex-connectivity network design with approximation ratios comparable to those from [12]. These algorithms do not have the advantage of the structural decomposition of [12]. We mention that if T = V, that is, we wish to find a min-cost subgraph of G that is k-connected, then an O(log^2 k) approximation is known [14,30,10]. We also refer the reader to a survey on network design by Kortsarz and Nutov [29].

The Reduction Lemma

Let G(V, E) be a graph, with a given set T ⊆ V(G) of terminals. For ease of notation, we subsequently refer to terminals as black vertices, and non-terminals (also called Steiner vertices) as white. The elements of G are white vertices and edges; two paths are element-disjoint if they have no white vertices or edges in common. Recall that the element-connectivity of two black vertices u and v, denoted by κ′_G(u, v), is the maximum number of element-disjoint (that is, disjoint in edges and white vertices) paths between u and v in G. We omit the subscript G when it is clear from the context. For this section, to simplify the proof, we will assume that G has no edges between black vertices; any such edge can be subdivided, with a white vertex inserted between the two black vertices. It is easy to see that two paths are element-disjoint in the original graph iff they are element-disjoint in the modified graph. Thus, we can say that paths are element-disjoint if they share no white vertices, or that u and v are k-element-connected if the smallest set of white vertices whose deletion separates u from v has size k.
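The subdivision step described above is mechanical; a minimal networkx sketch (the function name and the ('sub', i) labels for the fresh white vertices are ours):

```python
import networkx as nx

def subdivide_black_edges(G, black):
    """Replace every edge between two black (terminal) vertices by a path
    of length 2 through a fresh white vertex, so that element-disjointness
    becomes plain white-vertex-disjointness."""
    H = G.copy()
    for i, (u, v) in enumerate(e for e in G.edges
                               if e[0] in black and e[1] in black):
        w = ('sub', i)  # fresh white vertex, assumed not to clash with G's labels
        H.remove_edge(u, v)
        H.add_edge(u, w)
        H.add_edge(w, v)
    return H
```

After this step the resulting graph has no black-black edges, and any family of element-disjoint paths in the original graph maps to a family of white-vertex-disjoint paths in the modified one and vice versa.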
Similarly, since κ′_{G_2}(x, y) = k_2 − 1, there is a vertex tri-partition (X, N′, Y) in G_2 with |N′| = k_2 − 1 and x ∈ X and y ∈ Y. We claim that N′ contains the contracted vertex pq, for otherwise N′ would be a cutset of size k_2 − 1 in G. Therefore, it follows that (X, N, Y), where N = N′ ∪ {p, q} − {pq}, is a vertex tri-partition in G that separates x from y. Note that |N| = k_2 and N includes both p and q. For the latter reason we note that (X, N, Y) is a vertex tri-partition also in G_1. Subsequently, we work with the two vertex tri-partitions (S, M, T) and (X, N, Y) in G_1 (we stress that we work in G_1 and not in G or G_2). Recall that s, p ∈ S, and t, q ∈ T, and that M has size k_1 − 1; also, N separates x from y, and p, q ∈ N. Fig. 1 (a) below shows these vertex tri-partitions. Since M and N contain only white vertices, all terminals are in S or T, and in X or Y. We say that S ∩ X is diagonally opposite from T ∩ Y, and S ∩ Y is diagonally opposite from T ∩ X. Let A, B, C, D denote S ∩ N, X ∩ M, T ∩ N and Y ∩ M respectively, with I denoting N ∩ M; note that A, B, C, D, I partition M ∪ N. In parts (b) and (c), we consider possible locations of the terminals s, t, x, y. We assume w.l.o.g. that x ∈ S. If we also have y ∈ S, then x ∈ S ∩ X and y ∈ S ∩ Y; therefore, one of x, y is diagonally opposite from t; suppose this is x. Fig. 1 (b) illustrates this case. Observe that A ∪ I ∪ B separates x from y; since x and y are k_2-connected and |N| = |A ∪ I ∪ C| = k_2, it follows that |B| ≥ |C|. Similarly, C ∪ I ∪ D separates t from s, and since C contains q, Fact 1 implies that |C ∪ I ∪ D| ≥ k_1 > k_1 − 1 = |M| = |B ∪ I ∪ D|. Therefore, |C| > |B|, and we have a contradiction. Hence, it must be that y ∉ S; so y ∈ T ∩ Y. The argument above shows that x and t cannot be diagonally opposite, so t must be in T ∩ X. Similarly, s and y cannot be diagonally opposite, so s ∈ S ∩ Y. Fig. 1 (c) shows the required positions of the vertices.
Now, N separates s from t and contains p, q; therefore, from Fact 1, |N| ≥ k_1 > |M|. But M separates x from y, and Fact 2 implies that x, y are k_2-connected in G_1; therefore, |M| ≥ k_2 = |N|, and we have a contradiction.

Packing Element-Disjoint Steiner Trees and Forests

Consider a graph G(V, E), with its vertex set V partitioned into T_1, T_2, . . . , T_m, W. We refer to each T_i as a group of terminals, and W as the set of Steiner or white vertices; we use T = ∪_i T_i to denote the set of all terminals. A Steiner Forest for this graph is a forest that is a subgraph of G, such that each T_i is entirely contained in a single tree of this forest. (Note that T_i and T_j can be in the same tree.) For any group T_i of terminals, we define κ′(T_i), the element-connectivity of T_i, as the largest k such that for every u, v ∈ T_i, the element-connectivity of u and v in the graph G is at least k. We say two Steiner Forests for G are element-disjoint if they share no edges or Steiner vertices. (Every Steiner Forest must contain all the terminals.) The Steiner Forest packing problem is to find as many element-disjoint Steiner Forests for G as possible. By inserting a Steiner vertex between any pair of adjacent terminals, we can assume that there are no edges between terminals, and then the problem of finding element-disjoint Steiner forests is simply that of finding Steiner forests that do not share any Steiner vertices. A special case is when m = 1, in which case we seek a maximum number of element-disjoint Steiner trees. Cheriyan and Salavatipour [9] proved that if there is a single group T of terminals, with κ′(T) = k, then there always exist Ω(k/log |T|) Steiner trees. Their algorithm proceeds by using Theorem 1.3, the global element-connectivity reduction of [21], to delete and contract edges between Steiner vertices, while preserving κ′(T) = k.
Then, once we obtain a bipartite graph G′ with terminals on one side and Steiner vertices on the other side, they randomly color the Steiner vertices using k/(6 log |T|) colors; they show that w.h.p. each color class connects the terminal set T, giving k/(6 log |T|) trees. The bipartite case can be cast as a special case of packing bases of a polymatroid, and a variant of the random coloring idea is applicable in this more general setting [3]; a derandomization is also provided in [3], thus yielding a deterministic polynomial-time algorithm to find Ω(k/log |T|) element-disjoint Steiner trees. In this section, we give algorithms for packing element-disjoint Steiner Forests, where we are given m groups of terminals T_1, T_2, . . . , T_m. The approach of [9] encounters two difficulties. First, we cannot reduce to a bipartite instance using only the global-connectivity version of the Reduction Lemma. In fact, our strengthening of the Reduction Lemma to preserve local connectivity was motivated by this; using it allows us once again to assume that we have a bipartite graph G′(T ∪ W, E). Second, we cannot apply the random coloring algorithm on the bipartite graph G′ directly; we give an example in Appendix A to show that this approach does not work. One reason for this is that, unlike the Steiner tree case, it is no longer a problem of packing bases of a submodular function. To overcome this second difficulty we use a decomposition technique followed by the random coloring algorithm to prove that there always exist Ω(k/(log |T| log m)) element-disjoint forests. We believe that the bound can be improved to Ω(k/log |T|). We also consider the packing problem in restricted classes of graphs, in particular planar graphs. We obtain a much stronger bound, showing the existence of ⌈k/5⌉ − 1 Steiner forests. The (simple) technique extends to graphs of fixed genus to prove the existence of Ω(k) Steiner forests, where the constant depends mildly on the genus.
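The random-coloring step on a bipartite reduced instance is easy to sketch: colour each Steiner vertex uniformly at random with one of roughly k/(6 log |T|) colours and keep the colour classes whose vertices, together with the terminals, connect all of T. A minimal sketch (function and variable names are ours; the constant 6 follows the bound quoted above, and each kept class yields one element-disjoint Steiner tree via any spanning tree of its subgraph):

```python
import math
import random
import networkx as nx

def colour_classes(B, terminals, k, seed=0):
    """Randomly colour the Steiner vertices of bipartite instance B with
    about k/(6 log|T|) colours; return the classes (lists of Steiner
    vertices) whose induced subgraph connects all terminals."""
    rng = random.Random(seed)
    num = max(1, k // max(1, 6 * math.ceil(math.log2(max(2, len(terminals))))))
    steiner = [v for v in B.nodes if v not in terminals]
    colour = {w: rng.randrange(num) for w in steiner}
    kept, t0 = [], next(iter(terminals))
    for i in range(num):
        cls = [w for w in steiner if colour[w] == i]
        H = B.subgraph(list(terminals) + cls)
        # keep the class only if every terminal reaches the root terminal t0
        if all(nx.has_path(H, t0, t) for t in terminals):
            kept.append(cls)
    return kept
```

The classes are disjoint by construction, so the returned subgraphs share no Steiner vertices; the analysis of [9] shows that w.h.p. almost all classes survive the connectivity test when the instance is k-element-connected.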
We believe that there exist Ω(k) Steiner forests in any H-minor-free graph where H is fixed; it is shown in [1] that there exist Ω(k) Steiner trees in H-minor-free graphs. Our technique for planar graphs does not extend directly, but generalizing this technique allows us to make partial progress; by using our general graph result and some related ideas, in Section 3.3, we prove that in graphs of any fixed treewidth, there exist Ω(k) element-disjoint Steiner Trees if the terminal set is k-element-connected.

An O(log |T| log m)-approximation for Packing in General Graphs

In order to pack element-disjoint Steiner forests we borrow the basic idea from [6] in the edge-connectivity setting for Eulerian graphs; this idea was later used by Lau [33] in the much more difficult non-Eulerian case. The idea at a high level is as follows: If all the terminals are k-connected then we can treat the terminals as forming one group and reduce the problem to that of packing Steiner trees. Otherwise, we can find a cut (S, V \ S) that separates some groups from others. If the cut is chosen appropriately we may be able to treat one side, say S, as containing a single group of terminals and pack Steiner trees in them without using the edges crossing the cut. Then we can shrink S and find Steiner forests in the reduced graph; unshrinking of S is possible since we have many trees on S. In [6,33] this scheme works to give Ω(k) edge-disjoint Steiner forests. However, the approach relies strongly on properties of edge-connectivity as well as the properties of the packing algorithm for Steiner trees. These do not generalize easily for element-connectivity. Nevertheless, we show that the basic idea can be applied in a slightly weaker way (resulting in the loss of an O(log m) factor over the Steiner tree packing factor). We remark that the reduction to a bipartite instance using the Reduction Lemma plays a critical role. A key definition is the notion of a good separator given below.
Definition 3.2. Given a graph G(V, E) with terminal sets T_1, T_2, . . . , T_m, such that for all i, κ′(T_i) ≥ k, we say that a set S of white vertices is a good separator if (i) |S| ≤ k/2 and (ii) there is a component of G − S in which all terminals are k/(2 log m)-element-connected. Note that the empty set is a good separator if all terminals are k/(2 log m)-element-connected.

Lemma 3.3. Every such instance has a good separator.

Proof: Let G(V, E) be an instance of the Steiner Forest packing problem, with terminal sets T_1, T_2, . . . , T_m such that each T_i is k-element-connected. If T is k/(2 log m)-element-connected, the empty set S is a good separator. Otherwise, there is some set of white vertices of size less than k/(2 log m) that separates some of the terminals from others. Let S_1 be a minimal such set, and consider the two or more components of G − S_1. Note that each T_i is entirely contained in a single component, since T_i is at least k-element-connected, and |S_1| < k. Among the components of G − S_1 that contain terminals, consider a component G_1 with the fewest sets of terminals; G_1 must have at most m/2 sets from T_1, . . . , T_m. If the set of all terminals in G_1 is k/(2 log m)-connected, we stop; otherwise, find in G_1 a set of white vertices S_2 with size less than k/(2 log m) that separates terminals of G_1. Again, find a component G_2 of G_1 − S_2 with fewest sets of terminals, and repeat this procedure until we obtain some subgraph G_ℓ in which all the terminals are k/(2 log m)-connected. We can always find such a subgraph, since the number of sets of terminals is decreasing by a factor of 2 or more at each stage, so we find at most log m separating sets S_j. Now, we observe that the set S = ∪_{j=1}^{ℓ} S_j is a good separator. It separates the terminals in G_ℓ from the rest of T, and its size is at most log m × k/(2 log m) = k/2; it follows that each set of terminals T_i is entirely within G_ℓ, or entirely outside it. By construction, all terminals in G_ℓ are k/(2 log m)-connected.
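Definition 3.2 is straightforward to check directly (it is the existence argument above that requires care). A sketch of the check, using a max-flow subroutine for element-connectivity and the triangle inequality κ′(u, w) ≥ min(κ′(u, v), κ′(v, w)) to test all pairs against a single root terminal; the function names are ours, and the instance is assumed to have no terminal-terminal edges:

```python
import math
import networkx as nx

def elem_conn(G, terminals, s, t):
    # Max-flow with unit capacity on each non-terminal (vertex splitting).
    D, big = nx.DiGraph(), G.number_of_edges() + 1
    for v in G.nodes:
        D.add_edge((v, 'in'), (v, 'out'),
                   capacity=big if v in terminals else 1)
    for u, v in G.edges:
        D.add_edge((u, 'out'), (v, 'in'), capacity=big)
        D.add_edge((v, 'out'), (u, 'in'), capacity=big)
    return nx.maximum_flow_value(D, (s, 'out'), (t, 'in'))

def is_good_separator(G, groups, k, S):
    """Check Definition 3.2: S is white, |S| <= k/2, and some component of
    G - S has all of its terminals k/(2 log m)-element-connected."""
    T, m, S = set().union(*groups), len(groups), set(S)
    if len(S) > k / 2 or S & T:
        return False
    thresh = k / (2 * max(1.0, math.log2(max(2, m))))
    H = G.subgraph(set(G.nodes) - S)
    for comp in nx.connected_components(H):
        terms = sorted(T & comp)
        # By the triangle inequality, checking all pairs with one root suffices.
        if len(terms) >= 2 and all(
                elem_conn(H.subgraph(comp), terms, terms[0], t) >= thresh
                for t in terms[1:]):
            return True
    return False
```

For example, two well-connected groups joined only through a single white cut vertex admit that vertex as a good separator, while the empty set does not qualify because the combined terminal set is only 1-element-connected across the bridge.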
We can now prove our main result: we can always find a packing of Ω(k/(log |T| log m)) element-disjoint Steiner forests.

Proof: The proof is by induction on m. The base case of m = 1 follows from [9,3]; G contains at least k/(6 log |T|) element-disjoint Steiner trees, and we are done. We may assume G is bipartite by using the Reduction Lemma. Find a good separator S, and a component G_ℓ of G − S in which all terminals are k/(2 log m)-connected. Now, since the terminals in G_ℓ are k/(2 log m)-connected, use the algorithm of [9] to find k/(12 log m log |T|) element-disjoint Steiner trees containing all the terminals in G_ℓ; none of these trees uses vertices of S. Number these trees from 1 to k/(12 log m log |T|); let T_j denote the jth tree. The set S separates G_ℓ from the terminals in G − G_ℓ. If S is not a minimal such set, discard vertices until it is. If we delete G_ℓ from G, and add a clique between the white vertices in S to form a new graph G′, it is clear that the element-connectivity between any pair of terminals in G′ is at least the element-connectivity they had in G. The graph G′ has m′ ≤ m − 1 groups of terminals; by induction, we can find k/(12 log |T| log m) < k/(12 log |T| log m′) element-disjoint Steiner forests for the terminals in G′. As before, number the forests from 1 to k/(12 log m log |T|); we use F_j to refer to the jth forest. These Steiner Forests may use the newly added edges between the vertices of S; these edges do not exist in G. However, we claim that the Steiner Forest F_j of G′, together with the Steiner tree T_j in G_ℓ, gives a Steiner Forest of G. The only way this might not be true is if F_j uses some edge added between vertices u, v ∈ S. However, every vertex in S is adjacent to a terminal in G_ℓ, and all the terminals of G_ℓ are in every one of the Steiner trees we generated. Therefore, there is a path from u to v in T_j. Hence, deleting the edge between u and v from F_j still leaves each component of F_j ∪ T_j connected.
Therefore, for each 1 ≤ j ≤ k/(12 log m log |T|), the vertices in F_j ∪ T_j induce a Steiner Forest for G.

Packing Steiner Trees and Forests in Planar Graphs

We now prove much improved results for restricted classes of graphs, in particular planar graphs. If G is planar, we show the existence of ⌈k/5⌉ − 1 element-disjoint Steiner Forests. The intuition and algorithm are easier to describe for the Steiner Tree packing problem, so we do this first. We achieve the improved bound by observing that planarity restricts the use of many white vertices as "branch points" (that is, vertices of degree ≥ 3) in forests. Intuitively, even in the case of packing trees, if there are terminals t_1, t_2, t_3, ... that must be in every tree, and white vertices w_1, w_2, w_3, ... that all have degree 3, it is difficult to avoid a K_{3,3} minor. Note, however, that degree-2 white vertices behave like edges and do not form an obstruction. We capture this intuition more precisely by showing that there must be a pair of terminals t_1, t_2 that are connected through Ω(k) degree-2 white vertices; we can contract these "parallel edges" and recurse.

We describe below an algorithm for packing Steiner Trees. Through the rest of the section, we assume k > 10; otherwise, ⌈k/5⌉ − 1 ≤ 1, and we can always find 1 Steiner Tree in a connected graph. Given an instance of the Steiner Tree packing problem in planar graphs, we construct a reduced instance as follows: Use the Reduction Lemma to delete and contract edges between white vertices to obtain a planar graph with vertex set T ∪ W, such that W is a stable set. Now, for each vertex w ∈ W of degree 2, connect the two terminals that are its neighbors directly with an edge, and delete w. (All edges have unit capacity.) We now have a planar multigraph, though the only parallel edges are between terminals, as these were the only edges added while deleting degree-2 vertices of W.
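The construction of the reduced instance described above can be sketched in a few lines. This is an illustrative fragment under the assumption that the Reduction Lemma has already made the instance bipartite with W a stable set; `white_adj` is an assumed toy representation mapping each white vertex to its terminal neighbors.

```python
def reduce_planar_instance(white_adj):
    """Build the reduced multigraph: each degree-2 white vertex becomes a
    direct (possibly parallel) edge between its two terminal neighbours;
    white vertices of higher degree are kept as-is."""
    parallel, kept = [], {}
    for w, terminals in white_adj.items():
        if len(terminals) == 2:
            # w behaves like an edge: replace it by a terminal-terminal edge
            parallel.append(tuple(sorted(terminals)))
        else:
            kept[w] = list(terminals)
    return parallel, kept
```

Two degree-2 white vertices between the same pair of terminals yield two parallel edges, which is exactly the multiplicity the contraction argument later exploits.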
Note that this reduction preserves the element-connectivity of each pair of terminals; further, any set of element-disjoint trees in this reduced instance corresponds to a set of element-disjoint trees in the original instance. We need the following technical result:

Theorem 3.5 (Borodin, [2]). If G is a planar graph with minimum degree 3, it has an edge of weight at most 13, where the weight of an edge is the sum of the degrees of its endpoints.

Lemma 3.6. In a reduced instance of the Planar Steiner Tree Packing problem, if T is k-element-connected, there are two terminals t_1, t_2 with at least ⌈k/5⌉ − 1 parallel edges between them.

Proof: We prove this lemma in Appendix A.1; here, we give a proof of the weaker result that there exist terminals t_1, t_2 with ⌈k/10⌉ edges between them. Let G be the planar multigraph of the reduced instance. Since T is k-element-connected in G, every terminal has degree at least k in G. Construct a planar graph G′ from G by keeping only a single copy of each edge. We argue below that some terminal t_1 ∈ T has degree at most 10 in G′; it follows that G must contain at least ⌈k/10⌉ copies of some edge incident to t_1, as t_1 has degree at least k in G. These edges must be incident to another terminal t_2, completing the proof.

To see that some terminal t_1 has degree at most 10 in G′, we first assume that no terminal has degree ≤ 2, or we are already done. Now, as every vertex of W in a reduced instance has degree at least 3, we may use Theorem 3.5; this implies that G′ has an edge e such that the sum of the degrees of the endpoints of e is at most 13. The edge e must be incident to a terminal t_1, as the white vertices form a stable set. The other endpoint of e has degree at least 3, so the degree of t_1 is at most 10.

It is now easy to prove by induction that we can pack ⌈k/5⌉ − 1 disjoint trees (Theorem 3.7): if there is only one terminal, there is nothing to prove. Otherwise, apply the Reduction Lemma to construct a reduced instance G′, preserving the element-connectivity of T.
Now, from Lemma 3.6, there exists a pair of terminals t_1, t_2 with ⌈k/5⌉ − 1 parallel edges between them. (The parallel edges between t_1 and t_2 may contain non-terminals in the original graph, but those non-terminals have degree 2.) Contract t_1, t_2 into a single terminal t, and consider the new instance of the Steiner Tree packing problem with terminal set T′ = T ∪ {t} − {t_1, t_2}. It is easy to see that the element-connectivity of the terminal set is still at least k; by induction, we can find ⌈k/5⌉ − 1 Steiner trees containing all the terminals of T′, with the property that all non-terminals have degree 2. Taking these trees together with the ⌈k/5⌉ − 1 edges between t_1 and t_2 gives ⌈k/5⌉ − 1 trees in G′ that span the original terminal set T.

Packing Steiner Forests in Planar Graphs: The algorithm described above for packing Steiner trees encounters a technical difficulty when we try to extend it to Steiner forests. Lemma 3.6 can be used at the start to merge some pair of terminals. However, as the algorithm proceeds, it may get stuck in the following situation: it merges all terminals from some group T_i into a single terminal. This terminal then requires no further connectivity to other terminals, even though other groups have not yet been merged together; we call such a terminal dead. In the presence of dead terminals, Lemma 3.6 no longer applies; we illustrate this with a concrete example in Appendix A.2. We overcome this difficulty by showing that a dead terminal may be replaced by a grid of white vertices; the grid is necessary to ensure that the resulting graph is still planar. We can then apply the Reduction Lemma to remove edges between the newly added white vertices and proceed with the merging process. See Appendix A.2 for details.
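The contract-and-recurse scheme for trees can be sketched as repeated contraction of the most-multiple terminal pair; the smallest multiplicity contracted along the way lower-bounds how many trees this particular greedy order packs. This is an illustrative sketch on a terminal multigraph, not the paper's algorithm verbatim (in particular, Lemma 3.6 guarantees a pair with ⌈k/5⌉ − 1 copies at every step, which this toy code does not check).

```python
from collections import Counter

def greedy_pack_bound(edge_list):
    """Repeatedly merge the pair of (super-)terminals joined by the most
    parallel edges; return the smallest multiplicity merged, a lower
    bound on the number of trees packed by this contraction order.
    `edge_list` is a list of (u, v) terminal pairs with repetition."""
    parent = {}

    def find(x):  # union-find with path halving
        parent.setdefault(x, x)
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x

    bound = float('inf')
    while True:
        mult = Counter(frozenset((find(u), find(v)))
                       for u, v in edge_list if find(u) != find(v))
        if not mult:
            return bound
        pair, copies = mult.most_common(1)[0]
        bound = min(bound, copies)
        a, b = tuple(pair)
        parent[find(a)] = find(b)   # contract the pair
```

For a triangle on terminals s, t, u with 3, 2, and 2 parallel edges, the recursion contracts (s, t) first and reports a bound of 3 packed trees.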
Extensions: Our result for planar graphs can be generalized to graphs of fixed genus; Ivanco [22] generalized Theorem 3.5 to show that a graph G of genus g has an edge of weight at most 2g + 13 if 0 ≤ g ≤ 3, and an edge of weight at most 4g + 7 otherwise. This allows us to prove that there exist ⌈k/c⌉ forests, where c ≤ 4g + 8; we have not attempted to optimize this constant c. Aazami et al. [1] also give algorithms for packing Steiner Trees in these graph classes, and in graphs excluding a fixed minor. We thus make the following natural conjecture:

Conjecture 1. Let G be a graph that excludes a fixed graph H as a minor, with terminal sets T_1, ..., T_m that are each k-element-connected. Then G contains Ω(k) element-disjoint Steiner Forests, where the hidden constant depends only on H.

We note that Lemma 3.6 fails to hold for H-minor-free graphs, and in fact fails even for bounded-treewidth graphs; thus, our approach cannot be directly generalized. However, instead of attempting to contract together just two terminals connected by many parallel edges, we may be able to contract together a constant number of terminals that are "internally" highly connected. Using Theorem 3.4 and other ideas, we prove in the next section that this approach suffices to pack many trees in graphs of small treewidth. We believe that these ideas, together with the structural characterization of H-minor-free graphs by Robertson and Seymour [37], should lead to a positive resolution of Conjecture 1.

Packing Trees in Graphs of Bounded Treewidth

Let G(V, E) be a graph of treewidth ≤ r − 1, with terminal set T ⊆ V such that κ′(T) ≥ k. In this section, we give an algorithm to find, for any fixed r, Ω(k) element-disjoint Steiner Trees in G. Our approach is similar to that for packing Steiner Trees in planar graphs, where we argued in Lemma 3.6 that there exist two terminals t_1, t_2 with Ω(k) parallel edges between them, so we could contract them together and recurse on a smaller instance. In graphs of bounded treewidth, this is no longer the case; see the end of Appendix A for an example in which no pair of terminals is connected by many parallel edges.
However, we argue that there exists a small set of terminals T′ ⊂ T that is highly "internally connected", so we can find Ω(k) disjoint trees connecting all terminals in T′ without affecting the connectivity of terminals in T − T′. We can then contract T′ and the white vertices used in these trees into a single new terminal t, and again recurse on a smaller instance. The following lemma captures this intuition:

Lemma 3.8. If G(V, E) is a bipartite graph of treewidth at most r − 1, with terminal set T ⊂ V such that |T| ≥ 2^r and κ′(T) ≥ k, there exists a set S ⊆ V − T such that some component G′ of G − S contains k/(12r^2 log(3r)) element-disjoint Steiner trees for the (at least 2) terminals in G′. Moreover, these trees in G′ can be found in polynomial time.

Given this lemma, we prove below that for any fixed r, we can pack Ω(k) element-disjoint trees in graphs of treewidth at most r − 1. The proof combines ideas of Theorem 3.7 and Theorem 3.4.

Theorem 3.9. Let G = (V, E) be a graph of treewidth at most r − 1. For any terminal set T ⊆ V with κ′_G(T) ≥ k, there exist Ω(k/(12r^2 log(3r))) element-disjoint Steiner trees on T.

Proof: As for Theorem 3.7, we prove this theorem by induction. Let G be a graph of treewidth at most r − 1, with terminal set T. If |T| ≤ 2^r, we have k/(6 log |T|) ≥ k/(6r) element-disjoint trees from the tree-packing algorithm of Cheriyan and Salavatipour [9] for arbitrary graphs. Otherwise, we use the Reduction Lemma to ensure that G is bipartite. Let S be a set of white vertices guaranteed to exist by Lemma 3.8. If S is not a minimal such set, discard vertices until it is. Now, find k/(12r^2 log(3r)) element-disjoint trees containing all terminals in some component G′ of G − S; note that each vertex of S is adjacent to some terminal in G′, and hence to every tree. (This follows from the minimality of S and the fact that G is bipartite.)
Modify G by contracting all of G′ to a single terminal t, and make t adjacent to every vertex of S. It is easy to see that all terminals in the new graph are k-element-connected; therefore, we now have an instance of the Steiner Tree packing problem on a graph with fewer terminals. The new graph has treewidth at most r − 1, so by induction, we have k/(12r^2 log(3r)) element-disjoint trees for the terminals in this new graph; taking these trees together with the k/(12r^2 log(3r)) trees of G′ gives k/(12r^2 log(3r)) trees of the original graph G.

We devote the rest of this section to proving the crucial Lemma 3.8. Throughout, we may assume w.l.o.g. (after using the Reduction Lemma) that the graph G is bipartite; we may further assume that k ≥ 12r^2 log(3r) and |T| ≥ 2^r. First, observe that G has a small cutset that separates a few terminals from the rest.

Proposition 3.10. There is a cutset C of size at most r such that some component of G − C contains between r and 2r terminals.

Proof Sketch: Fix a tree decomposition of G; every non-leaf node of the decomposition tree corresponds to a cutset, and each node contains at most r vertices of G. Start at a leaf of the decomposition tree, and walk upwards until reaching a node v such that the subtree rooted at some child of v contains between r and 2r terminals. (This is always possible, since walking up one step adds at most r terminals.)

We find the set S and the component of G − S in which we contract together a small number of terminals by focusing on the cutset C and the component of G − C guaranteed to exist by the previous proposition. We introduce some notation before proceeding with the proof:

1. Let C be a cutset of size at most r, and let V′ be the vertices of a component of G − C containing between r and 2r terminals.
2. Since terminals in V′ are k-connected to the terminals in the rest of the graph, and |C| ≤ r ≪ k, C contains at least one black vertex. Let C′ be the set of black vertices in C.
3. Let G′ be the subgraph of G induced by V′ ∪ C′.

We omit a proof of the following straightforward proposition; the second part of the statement follows from the fact that each terminal in V′ is k-connected to terminals outside G′, and these paths to terminals outside G′ must go through the cutset C′ of size at most r.

Proposition 3.11. The graph G′ contains between r and 3r terminals (as C′ may contain up to r terminals), and each terminal in V′ is at least k/r-connected to some terminal in C′.

Let T′ be the set of terminals in G′. If κ′_{G′}(T′) ≥ k/(2r^2), we can easily find a set of white vertices satisfying Lemma 3.8: let S be the set of vertices of G that are adjacent (in G) to vertices of G′. It is obvious that S separates G′ from the rest of G, and all terminals in T′ are highly connected; from the tree-packing result of [9], we can find the desired disjoint trees in G′. Finally, note that all vertices of S are white, as the only neighbors of G′ are either white vertices of the cutset C or neighbors of the black vertices in C, all of which are white since G is bipartite.

However, it may not be the case that all terminals of T′ are highly connected in G′. In this event, we use the following simple algorithm (very similar to that in the proof of Lemma 3.3) to find a highly connected subset of T′: Begin by finding a set S_1 of at most k/(2r^2) white vertices in G′ that separates terminals of T′. Among the components of G′ − S_1, pick a component G_1 containing at least one terminal of V′. If all terminals of G_1 are k/(2r^2)-connected, stop; otherwise, find in G_1 a set S_2 of at most k/(2r^2) white vertices that separates terminals of G_1, pick a component G_2 of G_1 − S_2 that contains at least one terminal of V′, and proceed in this manner until finding a component G_ℓ in which all terminals are k/(2r^2)-connected. Claim 3.12.
We perform at most r iterations of this procedure before we stop, having found some subgraph G_ℓ in which all the (at least 2) terminals are k/(2r^2)-connected.

Proof: It suffices to show that at least one terminal of C′ is lost every time we find such a set S_i. To see this, observe that when we find a cutset S_{i+1} in G_i, there is a component that we do not pick that contains a terminal t. If this terminal t is in C′, we are done; otherwise, it must be in V′. But from Proposition 3.11, all terminals in V′ are k/r-connected to some terminal in C′, and so some terminal of C′ must be in the same component as t.

When we stop with the subgraph G_ℓ, it contains at least one terminal t′ ∈ V′, and at least one terminal of C′ to which t′ is highly connected; therefore, G_ℓ contains at least 2 terminals. All terminals in the subgraph G_ℓ are k/(2r^2)-connected, and there are at most 3r of them, so we can find k/(12r^2 log(3r)) disjoint trees in G_ℓ that connect them, using the tree-packing result of [9]. Let S be the set of vertices of G that are adjacent (in G) to vertices of G_ℓ; obviously, S separates G_ℓ from the rest of G, and to satisfy Lemma 3.8, it merely remains to verify that S contains only white vertices. Every terminal in G′ − G_ℓ was separated from G_ℓ by white vertices in some S_i, and terminals in G − G′ can only be adjacent to white vertices of the cutset C, which are not in G′, let alone G_ℓ. This completes the proof of Lemma 3.8.

Single-Sink Vertex-Connectivity

Recall that in the SS-k-CONNECTIVITY problem, one is given an undirected graph G = (V, E) with edge costs, a specified sink/root vertex r, and a subset of terminals T ⊆ V, with |T| = h. The goal is to find a minimum-cost subgraph H that contains k vertex-disjoint paths from each terminal t ∈ T to the root. In this section we give a very simple proof of the main technical result in [12] using the Reduction Lemma.
We lead up to the technical lemma via a description of the (simple) algorithm for SS-k-CONNECTIVITY. The basic algorithmic idea, augmentation, comes from [4]. Let T′ ⊆ T be a subset of terminals and let H′ be a subgraph of G that is feasible for T′. For a terminal t ∈ T \ T′, a set of k paths p_1, ..., p_k is said to be an augmentation for t with respect to T′ if (i) each p_i is a path from t to some vertex in T′ ∪ {r}, (ii) the paths are internally vertex-disjoint, and (iii) each terminal t′ ∈ T′ is the endpoint of at most one of the k paths. Note that the root is allowed to be the endpoint of more than one path. The following proposition is easy to prove via a simple min-cut argument.

Proposition 4.1. If H′ is feasible for T′ and p_1, ..., p_k is an augmentation for t with respect to T′, then H′ together with the paths p_1, ..., p_k is feasible for T′ ∪ {t}.

Given T′ and t, the augmentation cost of t with respect to T′ is the cost of a min-cost set of paths that augment t w.r.t. T′. We can find the augmentation cost for a terminal t by solving a simple min-cost flow problem. The key theorem in [12] is the following.

Theorem 4.2 (Vertex-Connectivity, [12]). If OPT denotes the cost of an optimal solution to SS-k-CONNECTIVITY, and AugCost(t) the cost of an augmentation for terminal t w.r.t. T − {t}, then Σ_{t∈T} AugCost(t) ≤ 8k · OPT.

We now briefly describe the algorithm of [5] for SS-k-CONNECTIVITY; a variant is used in [4, 12]. Permute the terminals randomly; let t_j denote the jth terminal in the permutation and let T_j = {t_1, ..., t_j}.

Subgraph H ← ∅
For i = 1 to |T|: add to H a min-cost augmentation of t_i with respect to T_{i−1}.
Output the subgraph H.

Note that the above is a greedy algorithm except for the initial randomization. Interestingly, as noted in [5], the randomization is key; even for k = 2 there exist permutations that yield a solution of cost Ω(|T| · OPT).
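The randomized greedy above can be sketched compactly. In this illustration, `aug_paths(t, prefix)` is a hypothetical stand-in for the min-cost-flow augmentation subroutine (it is not a library call and not defined in the paper); it should return the edge set of a min-cost augmentation of t with respect to the prefix terminals and the root.

```python
import random

def ss_k_connect(terminals, aug_paths, seed=0):
    """Randomized greedy of [5] (sketch): permute the terminals, then
    for each terminal add a min-cost augmentation with respect to the
    terminals that precede it in the permutation."""
    rng = random.Random(seed)
    order = list(terminals)
    rng.shuffle(order)          # the initial randomization is essential
    H = set()
    for i, t in enumerate(order):
        H |= set(aug_paths(t, order[:i]))
    return H
```

With a stub oracle that routes the first terminal to the root and every later terminal to the first one, the output contains one root edge and one edge per remaining terminal, illustrating how augmentations accumulate into H.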
Using Theorem 4.2, it is easy to prove that the above algorithm is a randomized O(k log |T|)-approximation for SS-k-CONNECTIVITY: simply observe that the expected augmentation cost for the last terminal in the permutation is at most 8k·OPT/|T|; a straightforward inductive argument then completes the proof. The main ingredient in the proof of Theorem 4.2, as shown by [12], is the following weaker statement involving paths that are element-disjoint, as opposed to vertex-disjoint.

Lemma 4.3 (Element-Connectivity, [12]). Given an instance of SS-k-CONNECTIVITY, let ElemCost(t) denote the minimum cost of a set of k internally vertex-disjoint paths from any terminal t to T ∪ {r} − t. Then Σ_{t∈T} ElemCost(t) ≤ 2·OPT, where OPT is the cost of an optimal solution to this instance.

It is shown in [12] that one can prove Theorem 4.2 by repeatedly invoking Lemma 4.3 to obtain a large collection of paths from each t ∈ T to other terminals, and applying a flow-scaling argument. The heart of the proof of the crucial Lemma 4.3 is a structural theorem of [12] on spiders. A spider is a tree containing at most one vertex of degree greater than 2. If such a vertex exists, it is referred to as the head of the spider, and each leaf is referred to as a foot. Thus, a spider may be viewed as a collection of disjoint paths (called legs) from its feet to its head. If the spider has no vertex of degree 3 or more, any vertex of the spider may be considered its head. Vertices that are neither the head nor feet are called intermediate vertices of the spider.

The Reduction Lemma allows us to give an extremely easy inductive proof of the Spider Decomposition Theorem below, greatly simplifying the proof of [12].

Theorem 4.4 (Spider Decomposition, [12]). Let G be a graph with black and white vertices such that there are at least two black vertices and the black vertices are k-element-connected. Then G contains a collection of element-disjoint spiders such that:

1. The feet of every spider are black vertices.
2. Each black vertex is a foot of exactly k spiders, and each white vertex appears in at most one spider. If a white vertex is the head of a spider, the spider has at least two feet.
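In the bipartite case the decomposition is explicit, via the marking scheme used in the base case of the proof below. The following sketch assumes a toy adjacency representation (`black_adj[b]` lists the white neighbors of black vertex b) and assumes, as k-element-connectivity guarantees, that every black vertex has at least k white neighbors and every singly-marked white vertex has a second black neighbor.

```python
def bipartite_spiders(black_adj, k):
    """Base case of the Spider Decomposition proof (no white-white and no
    black-black edges).  Returns a list of (head, feet) pairs."""
    marks = {}                      # white vertex -> black vertices that marked it
    for b, whites in black_adj.items():
        for w in whites[:k]:        # each black vertex marks k white neighbours
            marks.setdefault(w, []).append(b)
    nbrs = {}                       # white vertex -> all black neighbours
    for b, whites in black_adj.items():
        for w in whites:
            nbrs.setdefault(w, []).append(b)
    spiders = []
    for w, feet in marks.items():
        if len(feet) >= 2:          # w heads a spider; its feet marked it
            spiders.append((w, sorted(feet)))
        else:                       # path b - w - b2: foot b, head b2
            b = feet[0]
            b2 = next(x for x in nbrs[w] if x != b)
            spiders.append((b2, [b]))
    return spiders
```

On two black vertices sharing two white neighbors with k = 2, each white vertex heads one spider and each black vertex is a foot of exactly k = 2 spiders, as the theorem requires.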
Before giving the formal short proof, we remark that if the graph is bipartite, the collection of spiders is trivial to see: they are simply the edges between the black vertices and the stars rooted at each white vertex! Thus the Reduction Lemma effectively allows us to reduce the problem to a trivial case.

Proof: We prove this theorem by induction on the number of edges between white vertices in G. In the base case, G has no edges between white vertices, and is therefore bipartite. (Recall that there are no edges between black vertices.) Each pair of black vertices is k-element-connected, and hence every black vertex has at least k white neighbors. Let every b ∈ B mark k of its (white) neighbors arbitrarily. Every white vertex w that is marked at least twice becomes the head of a spider, the feet of which are the black vertices that marked w. For each white vertex w marked exactly once, let b be the neighbor that marked it, and b′ another neighbor; we let b − w − b′ be a spider with foot b and head b′. It is easy to see that the spiders are disjoint, and that they satisfy all the other desired conditions.

For the inductive step, consider a graph G with an edge pq between white vertices. If all black vertices are k-element-connected in G_1 = G − pq, then we can apply induction to find the desired subgraph of G_1, and hence of G. Otherwise, by Theorem 1, we can find the desired set of spiders in G_2 = G/pq. If the new vertex v = pq is not in any spider, this set of spiders exists in G, and we are done. Otherwise, let S be the spider containing v. If v is not the head of S, let x, y be its neighbors in S. Either x and y are both adjacent to p, or both adjacent to q, or (w.l.o.g.) x is adjacent to p and y to q. Therefore, we can replace the path x − v − y in S with x − p − y, x − q − y, or x − p − q − y, as appropriate. If v is the head of S, we know that it has at least 2 feet.
If at least 2 legs of S are incident to each of p and q, we can create two new spiders S_p and S_q, with heads p and q respectively; S_p contains the legs of S incident to p, and S_q the legs incident to q. If all the legs of S are incident to p, we let p be the head of the spider in G; the case in which all legs are incident to q is symmetric. If neither of these cases holds, it follows that (w.l.o.g.) exactly one leg ℓ of S is incident to p, with the remaining legs incident to q. We let q be the head of the new spider, and add p to the leg ℓ.

The authors of [12] showed that, once we have the Spider Decomposition Theorem, it is very easy to prove Lemma 4.3.

Proof of Lemma 4.3 ([12]): In an optimal solution H to an instance of SS-k-CONNECTIVITY, every terminal is k-vertex-connected to the root. Let the terminals be black vertices, and non-terminals be white; it follows that all the terminals are k-element-connected to the root in H, and hence to each other. Therefore, we can find a subgraph of H of total cost at most OPT that can be partitioned into spiders as in Theorem 4.4. For each spider S and every terminal t that is a foot of S, we find a path entirely contained within S from t to another terminal. Each edge of S is in at most two such paths; since the spiders are disjoint and each terminal is a foot of k spiders, we obtain the desired result.

If the head of S is a terminal, the path for each foot is simply the leg of S from that foot to the head; each edge of S is then in a single path. If the head of S is a white vertex, it has at least two feet. Fix an arbitrary ordering of the feet of S; the path for foot i follows leg i from the foot to the head, and then leg i + 1 from the head to foot i + 1. (The path for the last foot follows the last leg, and then leg 1 from the head to foot 1.) It is easy to see that each edge of S is in exactly two paths; this completes the proof.
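The cyclic pairing of feet in the white-head case can be written down directly; the claim that every spider edge lies on exactly two of the resulting paths is easy to check mechanically. This is a small illustrative helper, with legs represented as vertex lists from each foot up to (but excluding) the head.

```python
def spider_paths(head, legs):
    """Path for foot i goes up leg i to the head and down leg i+1 to
    foot i+1 (cyclically), so each spider edge is used exactly twice."""
    n = len(legs)
    paths = []
    for i in range(n):
        up = legs[i] + [head]                       # foot i ... head
        down = list(reversed(legs[(i + 1) % n]))    # head ... foot i+1
        paths.append(up + down)
    return paths
```

For a three-legged spider, counting how often each undirected edge occurs across the returned paths confirms the "exactly two" accounting used in the cost bound.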
Finally, we give a proof of Theorem 4.2 that relies only on the statement of Lemma 4.3. Our proof is a technical modification of the one in [12] and, as previously remarked, does not need to rely on the additional condition on the spiders that [12] guarantees. Our proof also gives a slightly stronger bound on Σ_t AugCost(t): 8k · OPT instead of (18k + 3) · OPT.

Proof of Theorem 4.2: We give an algorithm to find an augmentation for each terminal; it proceeds in 4k^2 iterations. In each iteration, for every terminal t, it finds a set of k internally vertex-disjoint paths from t to other terminals or the root. Let P_i(t) denote the set of paths found for terminal t in iteration i. These paths have the following properties:

1. For each terminal t, every other terminal is an endpoint of fewer than 4k^2 + 2k paths in ∪_i P_i(t).
2. In each iteration i, the total cost Σ_t Cost(P_i(t)) is less than 4k·OPT.

Given these two properties, we can prove the theorem as follows. Separately for each terminal t, send 1 unit of flow along each of the paths in ∪_i P_i(t); we thus have a flow of 4k^2 · k units from t to other terminals. Scale this flow down by 4k^2 · (k + 1/2)/k, to obtain a flow of k^2/(k + 1/2) > k − 1/2 from t to other terminals. After the scaling step, the net flow through any vertex (terminal or non-terminal) is at most 1, since the maximum flow through a vertex before scaling was 4k^2 + 2k. Let FlowCost(t) denote the cost of this scaled flow for terminal t; if we now scale the flow up by a factor of 2, we obtain a flow of value greater than 2k − 1 from t to other terminals, in which the flow through any vertex besides t is at most 2. Therefore, by the integrality of min-cost flow, we can find an integral flow of 2k − 1 units from t to other terminals, of total cost at most 2·FlowCost(t). Let E_t be the set of edges used in this integral flow; it follows that cost(E_t) ≤ 2·FlowCost(t).
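The scaling constants above can be sanity-checked with exact rational arithmetic: dividing the 4k^2 · k units of flow by the factor 4k^2 · (k + 1/2)/k should leave k^2/(k + 1/2) > k − 1/2 units per terminal, while capping the worst-case flow of 4k^2 + 2k units through a vertex at exactly 1.

```python
from fractions import Fraction as F

def scaled_flow(k):
    """Return (total flow from a terminal, max flow through a vertex)
    after scaling down by 4k^2 * (k + 1/2) / k."""
    scale = F(4 * k * k) * F(2 * k + 1, 2) / k   # 4k^2 (k + 1/2) / k
    total_from_t = F(4 * k * k * k) / scale       # = k^2 / (k + 1/2)
    max_through_vertex = F(4 * k * k + 2 * k) / scale
    return total_from_t, max_through_vertex
```

Checking a range of k confirms both claims term by term, so doubling the flow indeed gives value greater than 2k − 1 with at most 2 units through any vertex.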
It is also easy to see that E_t contains k disjoint paths from t to k distinct terminals: a hypothetical cutset of size k − 1 would contradict the existence of a flow of value 2k − 1 in which the flow through any vertex is at most 2. Therefore, we have found k disjoint paths from t to k other terminals, of total cost at most 2·FlowCost(t). To bound the cost over all terminals, we note from the second property above that Σ_t FlowCost(t) ≤ 4k^2 · 4k·OPT / (4k^2 · (k + 1/2)/k), which is less than 4k·OPT. It follows that the total cost of the set of paths is at most 2·Σ_t FlowCost(t) < 8k·OPT.

It remains only to show that in every iteration we can find a set of paths for each terminal satisfying the two desired properties. The proof below uses induction on the number of iterations i to establish property 1: after i iterations, for each terminal t, every other terminal is an endpoint of fewer than i + 2k paths in ∪_{j≤i} P_j(t).

In iteration i, for each terminal t, let Blocked(t) denote the set of terminals in T − t that have been the endpoints of at least (i − 1) + k paths in ∪_{j=1}^{i−1} P_j(t). (Note that the root r is never in any Blocked(t).) Since the total number of paths found so far is (i − 1)k, |Blocked(t)| < k. Construct a directed graph D on the set of terminals, with edges from each terminal t to the terminals in Blocked(t). Since the out-degree of each vertex in D is at most k − 1, every subgraph of D has a vertex of total degree at most 2k − 2; therefore, the digraph D is (2k − 2)-degenerate, and so its underlying undirected graph can be properly colored using 2k − 1 colors. Let C_1, C_2, ..., C_{2k−1} denote the color classes of such a coloring; if t_1, t_2 ∈ C_j, then in iteration i, t_1 ∉ Blocked(t_2) and t_2 ∉ Blocked(t_1). For each color class C_j in turn, consider the terminals of C_j as black, and the non-terminals and terminals of other classes as white.
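The coloring step can be made concrete: since every terminal blocks fewer than k others, a greedy pass along a degeneracy order of the underlying undirected graph uses at most 2k − 1 colors. A minimal sketch, assuming `blocked` maps each terminal to the set of terminals it blocks:

```python
def color_blocked(blocked):
    """Properly colour the underlying undirected graph of the blocking
    digraph D (out-degree <= k-1, hence (2k-2)-degenerate), greedily
    along a degeneracy order; uses at most 2k-1 colours."""
    und = {t: set(s) for t, s in blocked.items()}     # symmetrize D
    for t, s in blocked.items():
        for u in s:
            und.setdefault(u, set()).add(t)
    order, live = [], {t: set(n) for t, n in und.items()}
    while live:                                       # strip min-degree vertices
        v = min(live, key=lambda x: len(live[x]))
        order.append(v)
        for u in live[v]:
            live[u].discard(v)
        del live[v]
    colour = {}
    for v in reversed(order):                         # greedy, reverse order
        used = {colour[u] for u in und[v] if u in colour}
        colour[v] = next(c for c in range(len(und)) if c not in used)
    return colour
```

For k = 2, a directed 3-cycle of blocking relations (out-degree 1 ≤ k − 1) is colored properly with 2k − 1 = 3 colors.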
There is a graph of cost OPT in which every terminal of C_j is k-vertex-connected to the root, so C_j is k-element-connected to the root in this graph even when terminals not in C_j are regarded as white vertices. From Lemma 4.3, for every C_j, we can find a set of internally disjoint paths from each t ∈ C_j to C_j ∪ {r} − {t} of total cost at most 2·OPT. If these paths contain other terminals of T − C_j as intermediate vertices, trim them at the first terminal they intersect. Summing over the 2k − 1 color classes, it follows that Σ_j Σ_{t∈C_j} Cost(P_i(t)) ≤ (2k − 1) · 2·OPT < 4k·OPT, establishing property 2 above.

To conclude, we show that for each terminal t, after iteration i, every other terminal is an endpoint of fewer than i + 2k paths in ∪_{j=1}^{i} P_j(t). Let C be the color class containing t. If t′ ∈ Blocked(t), at most one new path in P_i(t) ends at t′, as the paths for t are disjoint except at terminals in C, and t′ ∉ C. By induction, before this iteration t′ was the endpoint of fewer than (i − 1) + 2k paths for t, and so after this iteration it cannot be the endpoint of i + 2k paths for t. If t′ ∉ Blocked(t), it was the endpoint of at most (i − 1) + k − 1 paths for t before this iteration; even if all k paths for t in this iteration ended at t′, it is the endpoint of at most i + 2k − 2 paths for t after the iteration. This gives us the desired property 1, completing the proof.

Theorem 4.2 and Lemma 4.3 have applications to more general problems, including the node-weighted version of SS-k-CONNECTIVITY [12] and rent-or-buy and buy-at-bulk network design [5]. We omit discussion of these applications in this version of the paper.

Conclusions

Having generalized the reduction step of [21] to handle local element-connectivity, we demonstrated applications of this stronger Reduction Lemma to packing element-disjoint (and edge-disjoint) Steiner trees and forests, and also to SS-k-CONNECTIVITY. We believe that the Reduction Lemma will find other applications in the future.
We close with several open questions:

• We believe that our bound on the number of element-disjoint Steiner forests in a general graph can be improved from Ω(k/(log |T| log m)) to Ω(k/log |T|).
• Prove or disprove Conjecture 1, on packing disjoint Steiner Forests in graphs excluding a fixed minor.
• In a natural generalization of the Steiner Forest packing problem, each non-terminal/white vertex has a capacity, and the goal is to pack forests subject to these capacity constraints. In general graphs, it is easy to reduce this problem to the uncapacitated/unit-capacity version (for example, by replacing a white vertex of capacity c by a clique of size c), but this is not necessarily the case for restricted classes of graphs. In particular, it would be interesting to pack Ω(k) forests for the capacitated planar Steiner Forest problem.
• The known hardness-of-approximation factor for SS-k-CONNECTIVITY is Ω(log n) when k is a polynomial function of n, the number of vertices [28]. Can the current ratio of O(k log |T|) be improved?

Appendix A

Figure 3: The construction of G_3.

The graph G_k has the following properties:

• The vertices s and t are k-element-connected in G_k.
• For every copy of H_k, the vertices x and y are k-white-connected in G_k.
• The graph G_k is bipartite, with the white vertices and the black vertices forming the two parts.

We use G_k as an instance of the Steiner Forest packing problem: s and t form one group of terminals, and for each copy of H_k, the vertices x and y of that copy form a group. From our claims above, each group is k-element-connected. If we use the algorithm of Cheriyan and Salavatipour, there are no edges between white vertices to be deleted or contracted, so we move directly to the coloring phase. If colors are assigned to the white vertices randomly, it is easy to see that no color class is likely to connect up s and t. The probability that a white vertex is given color i is (c log |T|)/k, for some constant c.
The vertices s and t can be connected iff the same color is assigned to all the white vertices on one of the k paths from s to t in the graph formed from G_k by contracting each copy of H_k to a single vertex. The probability that every vertex on such a path receives the same color is ((c log |T|)/k)^k; using the union bound over the k paths gives us the desired result.

A.1 Packing Trees in Planar Graphs

Lemma A.1. Let G(T ∪ W, E) be a planar graph with minimum degree 3, in which W is a stable set. There exists a vertex t ∈ T of degree at most 10, with at most 5 neighbors in T.

Proof: Our proof uses the discharging technique. Assume, for the sake of contradiction, that every vertex t ∈ T has degree at least 11 or has at least 6 neighbors in T. Multiplying Euler's formula by 4 and redistributing charge yields a contradiction; we omit the routine details of the discharging argument.

We can now complete the proof of Lemma 3.6. By Lemma A.1, the reduced instance contains a terminal t of degree at most 10, with at most 5 black neighbors. Let w denote the number of white neighbors of t, and b the number of black neighbors. Since each white vertex is incident to only a single copy of each edge in G, there must be at least ⌈(k − w)/b⌉ copies in G of some edge between t and a black neighbor. But b ≤ 5 and b + w ≤ 10; since k > 10, it is easy to verify that the smallest possible value of ⌈(k − w)/b⌉ is ⌈(k − 5)/5⌉ = ⌈k/5⌉ − 1.

A.2 An Algorithm for Packing Steiner Forests in Planar and Bounded-Genus Graphs

For the Planar Steiner Forest packing problem, we use an algorithm very similar to that for packing Steiner Trees in Section 3.2. Now, as input, we are given sets T_1, ..., T_m of terminals that are each internally k-connected, but some T_i and T_j may be poorly connected. Precisely as before, as long as each T_i contains at least 2 terminals, Lemma 3.6 holds, so we can contract some pair of terminals t_1, t_2 that have ⌈k/5⌉ − 1 parallel edges between them. Note that if t_1, t_2 are in the same T_i, then after contraction we have an instance in which T_i contains fewer terminals, and we can apply induction.
If t 1 , t 2 are in different sets T i , T j , then after contracting, all terminals in T i and T j are pairwise k-connected, so we can merge these two groups into a single set. In proving the crucial Lemma 3.6, we argued that in the multigraph G of the reduced instance, every terminal has degree at least k (since it is k-element-connected to the other terminals), and in the graph G ′ in which we keep only a single copy of each edge, some terminal has degree at most 10; therefore, there are ⌈k/10⌉ copies of some edge. However, in the Steiner Forest problem, some T i may contain only a single terminal t (after several contraction steps). The terminal t may be poorly connected to the remaining terminals; therefore, it may have degree less than k in the multigraph G. If t is the unique low-degree terminal in G ′, we may not be able to find a pair of terminals with a large number of edges between them. As a concrete example, consider the graph G k defined at the beginning of this appendix. (See also Fig. 3, and note that G k is planar.) We have one terminal set T 1 = {s, t}, and other sets T i containing the two terminals of each copy of H k . After several contraction steps, each copy of H k may have been contracted together to form a single terminal; each such terminal is only 2-connected to the rest of the graph. In the reduced instance, there is only a single copy of each edge, and Lemma 3.6 does not hold. We solve this problem by eliminating a set T i when it has only a single terminal; at this point, we can apply induction and proceed. We formalize this intuition in the following lemma:

Lemma A.2. Let G(V, E) with a given T ⊆ V be a planar graph, and let t ∈ T be an arbitrary terminal of degree d. Let G ′ be the graph constructed from G by deleting t and inserting a d × d grid of white vertices, with the edges incident to t in G made incident to distinct vertices on one side of the new grid in G ′. Then:
1. G ′ is planar.
2. For every pair u, v of terminals other than t, the element-connectivity of u and v in G ′ is at least their element-connectivity in G.
3.
Any set of element-disjoint subgraphs of G ′ corresponds to a set of element-disjoint subgraphs of G.

Proof Sketch: See Figure 5, which shows this operation; it is easy to observe that, given a planar embedding of G, one can construct a planar embedding of G ′. It is also clear that a set of element-disjoint subgraphs in G ′ corresponds to such a set in G; every subgraph that uses a vertex of the grid corresponds to a subgraph of G containing the terminal t, and element-disjoint subgraphs may share the terminal t. It remains only to argue that the element-connectivity of every other pair of terminals is preserved. Let u, v be an arbitrary pair of terminals; we show that their element-connectivity in G ′ is at least their element-connectivity κ ′ (u, v) in G. Fix a set of κ ′ (u, v) element-disjoint paths in G from u to v; let P be the paths that use the terminal t, and let ℓ = |P|. We locally modify the ℓ paths in P by routing them through the grid, so that we obtain κ ′ (u, v) element-disjoint paths in G ′. Let P u denote the set of prefixes from u to t of the ℓ paths in P, and let P v denote the suffixes from t to v of these paths. Let H denote the d × d grid that replaces t in G ′; we use P ′ u and P ′ v to denote the corresponding paths in G ′ from u to vertices of H, and from vertices of H to v, respectively. Let I and O denote the vertices of H incident to paths in P ′ u and P ′ v , respectively. It is not difficult to see that there is a set of disjoint paths in the grid H connecting the ℓ distinct vertices in I to those in O; using the paths of P ′ u , together with the paths through H and the paths of P ′ v , gives us a set of disjoint paths in G ′ from u to v.

Figure 5: Replacing a terminal by a grid of white vertices preserves planarity and element-connectivity.

A Counterexample to the Existence of 2 Terminals with Ω(k) "Parallel Edges" Between Them: Recall that in the case of planar graphs (or graphs of bounded genus), we argued that there must be two terminals t 1 , t 2 with Ω(k) "parallel edges" between them.
(That is, there are Ω(k) degree-2 white vertices adjacent to both t 1 and t 2 .) This is not necessarily the case even in graphs of treewidth 3: the graph K 3,k , the complete bipartite graph with 3 vertices on one side and k on the other, has treewidth 3. If the three vertices on one side form the terminal set T and the k vertices of the other side are non-terminals, it is easy to see that κ ′ (T ) = k, but every white vertex has degree 3. In this example, there are only 3 terminals, so the tree-packing algorithm of Cheriyan and Salavatipour [9] would allow us to find Ω(k/ log |T |) = Ω(k) trees connecting them. Adding more terminals incident to all the white vertices would raise the treewidth, so this example does not immediately give us a low-treewidth graph with a large terminal set such that there are few parallel edges between any pair of terminals. However, we can easily extend the example by defining a graph G m as follows: let T 1 , T 2 , . . . , T m be sets of 2 terminals each, let W 1 , W 2 , . . . , W m−1 each be sets of k white vertices, and let all the vertices in each W i be adjacent to both terminals in T i and both terminals in T i+1 . (See Fig. 6 below.) The graph G m has 2m terminals, T = ∪ i T i is k-element-connected, and it is easy to verify that G m has treewidth 4. However, every white vertex has degree 4, so there are no "parallel edges" between terminals. (One can modify this example to construct a counterexample graph G m with treewidth 3 by removing one terminal from each alternate T i .)

Figure 6: A graph of treewidth 4 with many terminals, but no "parallel edges".
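The claim κ ′ (T ) = k for K 3,k is easy to check mechanically. The sketch below (ours, not from the paper; all names are ours) computes element-connectivity by the standard reduction to maximum flow: each white vertex is split into an in/out pair joined by a capacity-1 arc, terminals are left unsplit, every edge gets capacity 1, and the max-flow value between two terminals is then their element-connectivity.

```python
from collections import deque, defaultdict

def max_flow(cap, s, t):
    """Edmonds-Karp on a capacity structure cap[u][v] -> int."""
    flow = 0
    while True:
        # BFS for an augmenting path of positive residual capacity.
        parent = {s: None}
        q = deque([s])
        while q and t not in parent:
            u = q.popleft()
            for v, c in cap[u].items():
                if c > 0 and v not in parent:
                    parent[v] = u
                    q.append(v)
        if t not in parent:
            return flow
        # Recover the path and augment along it.
        v, path = t, []
        while parent[v] is not None:
            path.append((parent[v], v))
            v = parent[v]
        b = min(cap[u][v] for u, v in path)
        for u, v in path:
            cap[u][v] -= b
            cap[v][u] += b
        flow += b

def element_connectivity(edges, white, u, v):
    """Element-connectivity of terminals u, v: max number of paths that are
    pairwise disjoint on edges and on white (non-terminal) vertices."""
    cap = defaultdict(lambda: defaultdict(int))
    def node_in(x):  return (x, 'in') if x in white else x
    def node_out(x): return (x, 'out') if x in white else x
    for w in white:
        cap[(w, 'in')][(w, 'out')] += 1   # white vertices have capacity 1
    for a, b in edges:                    # each undirected edge, capacity 1
        cap[node_out(a)][node_in(b)] += 1
        cap[node_out(b)][node_in(a)] += 1
    return max_flow(cap, node_out(u), node_in(v))

# K_{3,k}: terminals t0, t1, t2 on one side, k white vertices on the other,
# every white vertex adjacent to every terminal.
k = 7
white = {('w', i) for i in range(k)}
edges = [(('t', j), ('w', i)) for j in range(3) for i in range(k)]
print(element_connectivity(edges, white, ('t', 0), ('t', 1)))  # → 7 (= k)
```

Each white vertex supports exactly one of the k element-disjoint two-edge paths between a pair of terminals, matching the κ ′ (T ) = k claim, while every white vertex has degree only 3.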
Freeze-fracture study of the Drosophila photoreceptor membrane: mutations affecting membrane particle density. The photoreceptor membrane of Drosophila melanogaster (wild type, vitamin A-deprived wild type, and the mutants ninaAP228, ninaBP315, and oraJK84) was studied by freeze-fracture electron microscopy. The three mutations caused a decrease in the number of particles on the protoplasmic face of the rhabdomeric membrane. The ninaAP228 mutation affected only the peripheral photoreceptors (R1-6), while the ninaBP315 mutation affected both the peripheral (R1-6) and the central photoreceptors (R7). The oraJK84 mutation, which essentially eliminates R1-6 rhabdomeres, was found to drastically deplete the membrane particles in the vestigial R1-6 rhabdomeres but not in the normal rhabdomeres of R7 photoreceptors, suggesting that the failure of the oraJK84 mutant to form normal R1-6 rhabdomeres may be due to a defect in a major R1-6 photoreceptor-specific protein in the mutant. In all cases in which both the rhabdomeric particle density and rhodopsin content were studied, the mutations or vitamin A deprivation was found to reduce both these quantities, supporting the idea that at least the majority of the rhabdomeric membrane particles are closely associated with rhodopsin. Vitamin A deprivation and the mutations also reduced the number of particles in the plasma membrane as in the rhabdomeric membrane, suggesting that both classes of membrane contain rhodopsin. Freeze-fracture studies of the photoreceptors have shown that there are numerous membrane particles on the fracture face of both the outer segment disk membrane of vertebrate photoreceptors (36,37, and references cited therein) and the rhabdomeric microvillar membrane of invertebrate photoreceptors (3,5,8,9,14,22,26,32) . Several lines of evidence suggest that these membrane particles are correlated with the presence of rhodopsin. 
For example, vitamin A deprivation, which reduces the rhodopsin content, has been found to reduce the number of disk membrane particles in vertebrate photoreceptors (16) and rhabdomeric membrane particles in invertebrate photoreceptors (3,14,22). In the case of Drosophila, it is also possible to reduce the rhodopsin content by means of single-gene mutations. Among the mutants of Drosophila melanogaster that we have isolated for the study of the photoreceptor process are those with drastically reduced rhodopsin content (22,29,30). The studies that have been carried out on some of these mutants suggest that the mechanism of rhodopsin depletion in these mutants can be very different from that of vitamin A deprivation (22). It thus appeared worthwhile to examine the photoreceptor membrane microstructure of several representative rhodopsin-deficient mutants, not only to characterize the mutants in terms of membrane microstructure but also to reexamine some of the questions regarding the nature of the particles in the photoreceptor membrane. The structure of the Drosophila compound eye is well known from several studies on larger flies (2,25,40) as well as on Drosophila (6,33). It consists of ~800 ommatidia, each containing a group of eight retinula cells (photoreceptors). Each retinula cell has a rhabdomere composed of hexagonally packed microvilli. On the basis of the position of the rhabdomeres in the ommatidium, the eight photoreceptor cells of each ommatidium are classified into the six peripheral (R1-6) and the two central (R7 and R8) cells. A cross section through a distal region of the ommatidium shows the rhabdomeres arranged in a characteristic trapezoidal pattern (Fig. 1). The rhabdomeres of R1-6 cells are located in the periphery of the trapezoid, and the rhabdomeres of R7 and R8 cells are located near the center of the trapezoid (Fig. 1), with the R7 rhabdomere on top of the R8 rhabdomere.
The rhabdomere is known to contain the visual pigment (11,18,21). In the case of muscoid diptera, all R1-6 photoreceptors contain the same visual pigment, which absorbs maximally at ~480 nm and photointerconverts with a metarhodopsin absorbing maximally at ~580 nm (12,15,28,38). The visual pigments contained in the other classes of photoreceptors, R7 and R8, are spectrally different from those in R1-6 photoreceptors (13,15,17). In this paper we report on freeze-fracture analyses of the photoreceptor membranes of three membrane particle-deficient mutants, ninaAP228, ninaBP315, and oraJK84. These mutants were chosen because they display substantially different phenotypes, suggesting that the mechanism of particle reduction in each may be different. The following are some of the questions we investigated in this work: (a) How do the mutations ninaA, ninaB, and ora affect the microstructure of the photoreceptor membrane? (b) Do our data from these mutants support the hypothesis that the majority of the rhabdomeric intramembrane particles originate from rhodopsin? (c) How does the microstructure of the nonrhabdomeric photoreceptor membrane located adjacent to the rhabdomeres differ from that of the rhabdomeric membrane?

MATERIALS AND METHODS

The following Drosophila melanogaster stocks were used in this work: wild-type flies of the Oregon-R strain, vitamin A-deprived wild-type flies of the same strain, the mutants ninaAP228, ninaBP315, and oraJK84, and the double mutant sevLY3; ninaAP228. Most of the flies used had their screening pigments in the compound eye eliminated using the mutation white (w), because white-eyed flies are more convenient for determining the mutant phenotype. No statistically significant differences were observed between the freeze-fracture data obtained from normal, red-eyed flies and those from white-eyed flies of the same strain.

THE JOURNAL OF CELL BIOLOGY · VOLUME 93, 1982
All flies were raised in an incubator (25°C, 55% relative humidity, 12-h light/12-h dark cycle) on a cornmeal-yeast-agar medium. The age of the flies ranged between 2 and 18 d. Neither the age nor different illumination conditions had any significant effect on either the rhodopsin content or the microstructure of the rhabdomere. The mutation oraJK84 (outer rhabdomeres absent) was obtained from Dr. J. Merriam of the University of California at Los Angeles. It is a recessive, third chromosome mutation mapping at 65.3 ± 0.4 (19,20). The mutation reduces the rhabdomeres of the R1-6 retinula cells to vestigial remains without affecting the rhabdomeres of the central retinula cells R7/8 (14,19,20). The mutation sevLY3 (sevenless) was obtained from the Benzer laboratory at the California Institute of Technology. The mutation maps at 33.2 ± 0.2 on the X chromosome (15), and it specifically eliminates the R7 photoreceptors (4; see also reference 15). The double mutant sevLY3; ninaAP228 was constructed by chromosome assortment. Vitamin A-deprived flies were obtained by raising wild-type flies on vitamin A-deficient, Sang's medium C (35). To eliminate bacteria as a source of vitamin A, the medium was autoclaved and the antibiotics penicillin G potassium and streptomycin sulfate were added. To avoid the effects of individual variations among vitamin A-deprived flies, we carried out both freeze-fracture and spectrophotometric studies on the same eye of two 5- to 6-d-old flies, one that had been vitamin A deprived for one generation and the other for two generations. The deep pseudopupil was used to measure in vivo the absorbance changes of rhabdomeres due to photoconversions of visual pigment between the rhodopsin (λmax ≈ 480 nm) and metarhodopsin (λmax ≈ 580 nm) states in R1-6 photoreceptors (35,24).
Because in Drosophila metarhodopsin has a higher extinction coefficient than rhodopsin (14,27), measurements were made near the absorption maximum of metarhodopsin. Fracturing was carried out at a specimen stage temperature of −106° to −116°C. Immediately after the fracture, the specimen was covered with platinum to a depth of ~30 Å, coated with carbon, and cleaned in household bleach. A mirror-image replica device (Balzers high-vacuum technique) was also occasionally used. Pictures of the replicas were taken with a Philips EM 300 electron microscope on 70-mm negatives. We monitored magnification using a "waffle"-type carbon grating and took care to avoid lens hysteresis. Only those prints in which the particles appeared clearly as three-dimensional structures were used to determine the number and diameter of the membrane particles. Since the freeze-fracture photograph of a microvillus represents the projection of its cylindrical surface onto a plane, the measured area on the photograph was corrected for the distortion caused by the projection.

¹The PDA is a sustained potential that keeps the photoreceptor membrane depolarized after the termination of an intense blue stimulus that converts a substantial net amount of rhodopsin to metarhodopsin. It is terminated by an orange stimulus that photoconverts a substantial amount of metarhodopsin back to rhodopsin.

To obtain the particle diameter density distribution, particle diameters were grouped into size classes (bins) of 20-Å width, and the particle density (number/unit area) was determined for each diameter class for each cell studied. The means and standard errors for each diameter class were calculated from a population of cells. Statistical comparison of particle densities was carried out using the t distribution. The level of significance was P = 0.01.

RESULTS

Morphology of the Retinula Cells

The relative positions (Figs. 2 a and 3 a) and size (data from
thin-sectioned eyes) of the rhabdomeres of the two nina mutants were not substantially different from those of wild type. In the mutant oraJK84, however, the peripheral rhabdomeres were drastically reduced in size (Fig. 4 a). Furthermore, in all the replicas of the ora compound eye examined, the rhabdomere of at least one peripheral retinula cell was missing in each ommatidium. Since the vestigial rhabdomeres are presumably located near the distal tip of the retinula cells (10,15), the missing rhabdomeres could indicate either that these retinula cells had no rhabdomeres at all or that the fracture plane happened to fall proximal to their vestigial rhabdomeres. The cell bodies of the retinula cells R1-6 appeared normal. The rhabdomeres of all three mutants, including the vestigial rhabdomeres of oraJK84, consisted of numerous tightly packed microvilli, each of which had a diameter of ~500 Å, as in wild type.

Microstructure of the Rhabdomeric Membrane

The protoplasmic face (PF) of the freeze-fractured rhabdomeric membrane of the wild-type fly showed numerous membrane particles (see also references 14, 22), while the exoplasmic face (EF) showed only a very few particles and appeared smooth. The three mutants studied also displayed membrane particles on the PF of the microvillar membrane, and their EF appeared smooth and showed only a few particles. All three mutants, however, differed from wild type in having a markedly lower number of rhabdomeric membrane particles in the peripheral retinula cells R1-6 (Figs. 2 b, 3 b, 4 b, and Table I).

Table I: Comparison of Rhabdomeric Membrane Particle Density in Two Classes of Retinula Cells. The data are presented in the form mean ± SD (n), where SD and n stand for the standard deviation and number of cells, respectively. The data were obtained from both red- and white-eyed flies and were corrected for the curvature of microvilli (Materials and Methods).

In the case of ninaBP315, a similar decrease in rhabdomeric
particles was also observed in the central retinula cell R7 (Fig. 2 c). The other two mutants, on the other hand, appeared to have a normal number of particles on the rhabdomeres of R7 cells (Figs. 3 c and 4 c). Table I displays the results of membrane particle density measurements. In wild type, the membrane particle densities in the two classes of rhabdomeres were nearly the same: ~3,000 particles/μm². This is somewhat lower than that reported by Harris et al. (14), probably because we corrected for the curvature of microvilli (Materials and Methods).

Table I legend (continued): The number of fractured eyes was 3, 2, 4, and 2 for wild type, ninaBP315, ninaAP228, and oraJK84, respectively. Only the data from unequivocally identified R7 cells are included in this column.

Table II legend: Data presented in the form mean ± SD (n), where n = number of eyes, number of extracts, and number of cells for the first, second, and third columns, respectively. All values were normalized to the corresponding wild-type value to obtain the "relative values" shown. All data are from white-eyed flies only. *The rhodopsin measurements obtained from spectrophotometry of digitonin extracts (1,000 heads/extract) by Larrivee et al. (22) are included in the second column of the table for comparison. No attempt was made to correct for small differences in the size of rhabdomeres that might be present in some mutants in calculating the "relative rhodopsin contents." The relative particle densities (third column) were calculated from the data displayed in the first column of Table III. For ninaAP228 and oraJK84, only the data from the rhabdomeres of R1-6 cells are presented in the table. For other classes of flies, the data from other cell types are included. §For vitamin A-deprived flies, the same eyes were used for both in vivo spectrophotometry (first column) and particle density measurements (third column).
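The curvature correction described in Materials and Methods can be illustrated with a short sketch. The paper does not spell out the geometry here, so as an assumption (ours) we take the fracture to expose the upper half of each cylindrical microvillus and the micrograph to be a parallel projection; a half-cylinder of radius r and length L then has true surface area πrL but projects onto an area 2rL, giving a correction factor of π/2. The function name and toy numbers are ours:

```python
import math

def corrected_density(n_particles, projected_area_um2, factor=math.pi / 2):
    """Convert a particle count per projected (photographed) membrane area
    into a density per true membrane area.

    Assumption (ours, not stated explicitly in the paper): the fracture
    exposes the upper half of each cylindrical microvillus and the
    micrograph is a parallel projection, so the measured (projected) area
    underestimates the true membrane area by a factor of pi/2 ~ 1.571.
    """
    true_area_um2 = projected_area_um2 * factor
    return n_particles / true_area_um2

# Toy numbers (ours): 300 particles counted on 0.08 um^2 of projected
# microvillar membrane.
print(corrected_density(300, 0.08))
```

Because the correction inflates the area, it lowers the reported density, consistent with the authors' remark that their values are somewhat lower than those of Harris et al. (14).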
In the case of the mutants ninaAP228 and oraJK84, R1-6 rhabdomeres had substantially reduced particle counts, while the particle density in R7 rhabdomeres was comparable to that in wild-type rhabdomeres. By contrast, in the case of the mutant ninaBP315, the particle density was significantly lower than that of wild type in both R1-6 and R7 rhabdomeres. Thus, the effect of the mutations ninaAP228 and oraJK84 on rhabdomeric membrane particles appears to be confined to R1-6 cells, while ninaBP315 affects the central retinula cell R7 as well. To determine whether ninaAP228 spares R8 rhabdomeres as well as R7 rhabdomeres, we examined a double mutant carrying both the ninaAP228 and sevLY3 mutations (Materials and Methods), because in wild-type flies it is difficult to distinguish the rhabdomeres of the two central retinula cells (R7 and R8) from each other unambiguously. As far as we could determine, the phenotype of the double mutant was the sum of the effects of the two constituent mutations. Thus, the only central cell rhabdomeres remaining in the double mutant were those of R8 photoreceptors. Fig. 5 shows a replica of a cross-fractured ommatidium of the double mutant. Although the rhabdomeres do not form a clear trapezoidal pattern in this fracture plane, one can readily identify the rhabdomere designated rh as the one belonging to the central retinula cell. The enlargements of the rhabdomeres shown in insets to the left of the figure show that this is the only rhabdomere with a normal complement of membrane particles. Thus, ninaAP228, indeed, appears to have no effect on the R8 particle count.

The rhodopsin contents determined by the two methods agree reasonably well. As is apparent from the table, the two nina mutants and vitamin A-deprived (A−) flies all display rhodopsin contents significantly lower than that of wild type. No attempts were made to determine absorbance changes in ora because of its small R1-6 rhabdomeres.
Also shown in Table II are rhabdomeric particle densities normalized to that of wild type. In the case of the mutants ninaAP228 and oraJK84, only the data from identified R1-6 cells are included in the table. In the case of the other classes of flies, the data from R1-6 and R7/8 rhabdomeres were combined, since in these flies no significant difference in particle density was observed between R1-6 and R7 rhabdomeres (Table I). In descending order of particle density, the flies listed in Table II form the following sequence: wild type > ninaBP315 ≈ ninaAP228 > A− flies > oraJK84. The same sequence also describes the order of rhodopsin content. In all of the particle-deficient flies with data on both the particle density and rhodopsin content, however, the decrease in R1-6 rhodopsin level was consistently greater than the decrease in R1-6 rhabdomeric particle density (see also references 14 and 22).

FIGURE 5: Fracture replica of an ommatidium of the double mutant sevLY3; ninaAP228 cross-fractured at a proximal level of the ommatidium. The seven rhabdomeres that are seen are labeled rh a–rh g. Enlarged views of these rhabdomeres are shown in insets labeled a–g to the left of the figure. The rhabdomere labeled rh g belongs to the central retinula cell R8. Note that it is the only rhabdomere with a normal number of particles. The arrow in a circle in the lower right hand corner indicates the direction of platinum shadowing.

Particle Diameter Density Distribution

To compare the size of particles among the different classes of flies, we constructed for each class of flies a "particle diameter density distribution," which plots the density (number/μm²) of particles of each diameter class against the diameter (Materials and Methods). The same eyes from which the data in Table II were obtained were used to obtain these data; in the case of ninaAP228, additional eyes were used. Displayed in Fig. 6 are diameter density distributions of R1-6 rhabdomeric membrane particles for wild type (Fig. 6 a, open histogram), ninaBP315 (Fig. 6 a, shaded histogram), ninaAP228 (Fig. 6 b, open histogram), A− flies (Fig. 6 b, shaded histogram), and oraJK84 (Fig. 6 c). The diameter classes for which the particle densities of the deficient flies differ from that of wild type by a statistically significant amount are indicated by a bracket below each set of histograms. It may be seen that the diameter distributions of the particle-deficient flies differ from that of wild type over a considerable range of sizes, although there is a tendency for these differences to occur in the smaller diameter range.

FIGURES 7-11: Freeze-fracture replicas of a portion of the plasma membrane obtained from wild type (Fig. 7), ninaBP315 (Fig. 8), vitamin A-deprived wild type (Fig. 9), ninaAP228 (Fig. 10), and oraJK84 (Fig. 11). In the case of ninaA (Fig. 10) and ora (Fig. 11), the pictures were obtained from identified R1-6 cells. It may be seen that the PF of the plasma membrane is continuous with that of the rhabdomeric membrane (rh), as indicated by arrows in Figs. 7, 8, 10, and 11. The hill-like structures above the arrow in Fig. 8 are tips of short rhabdomeric microvilli (also observed in wild type). All three mutants and vitamin A-deprived flies display a markedly lower plasma membrane particle density than wild type. Bar, 0.2 μm.

Nonrhabdomeric Membrane

We also examined that portion of the nonrhabdomeric plasma membrane of the photoreceptor located between the rhabdomere and the region of contact between neighboring photoreceptor cells (indicated by dark lines in Fig. 1 a for retinula cell R1), referred to simply as the "plasma membrane" in this paper. The membrane particles of the "plasma membrane" did not appear qualitatively different from those of the rhabdomeric membrane in either wild type (Fig. 7) or any of the particle-deficient flies (Figs. 8-11).
Moreover, all four classes of particle-deficient flies showed a marked decrease in the number of particles in the PF of the plasma membrane when compared to that of wild type (Figs. 7-11). Table III compares the membrane particle densities of the plasma membrane with those of the rhabdomeric membrane for ninaA, ninaB, A−, and wild-type flies. The data for rhabdomeres are the same ones from which the relative particle densities shown in Table II were calculated. As may be seen in the table, the particle density was reduced by a significant amount in both the plasma and rhabdomeric membranes in all three classes of particle-deficient flies examined. In the case of wild-type flies, we also examined the diameters of membrane particles in the two types of membrane. The mean diameter obtained for the rhabdomeric membrane particles was 106 ± 12 Å (standard deviation, n = 24 cells), while that for the plasma membrane particles was 112 ± 9 Å (n = 12 cells). The diameter density distributions for the plasma and rhabdomeric membrane particles also showed no statistically significant differences in the density of particles at any diameter class (data not shown).

Table III legend: Data presented in the form mean ± SD (n), where n = number of cells. All data are from white-eyed flies only. *These data were used to calculate the relative particle densities shown in Table II. Corrected for curvature of microvilli. The number of fractured eyes was 5, 8, 3, and 5 for wild type, ninaBP315, ninaAP228, and oraJK84, respectively. ‡The numbers of freeze-fractured eyes were 5, 5, 3, and 2 for wild type, ninaBP315, ninaAP228, and vitamin A-deprived wild type, respectively.

DISCUSSION

One of the objectives of this study was to obtain information on the rhabdomeric microstructure of the three mutants, ninaAP228, ninaBP315, and oraJK84. Our results show unambiguously that the ninaAP228 mutation reduces the membrane particle density in R1-6 rhabdomeres but not in R7 rhabdomeres (Fig.
3; Table I), consistent with the earlier results of Larrivee et al. (22). To see what effect the mutation ninaAP228 might have on the other class of central rhabdomeres, R8, we examined the double mutant sevLY3; ninaAP228. The results showed that the R8 rhabdomere of the double mutant has a normal number of membrane particles (Fig. 5), indicating that the effect of ninaAP228 is, indeed, specific for R1-6 photoreceptors. The specificity of the ninaAP228 mutation for R1-6 photoreceptors, containing only R1-6 rhodopsin, suggests that ninaAP228 affects the apoprotein, opsin, of R1-6 rhodopsin. The ninaBP315 mutation does not show similar specificity (Fig. 2 b and c; Table I), as is also the case with vitamin A deprivation. Indeed, the ninaB phenotypes that have been uncovered to date are virtually indistinguishable from those of A− flies, provided that the amount of rhodopsin remaining in the A− flies is made comparable to that in ninaBP315, suggesting that the mechanism of action of the mutation ninaB may be to restrict the amount of chromophore available for rhodopsin formation. In fact, Stephenson and Pak (39) have shown recently that the ninaB defect can be "cured" by raising the mutant on a medium that contains an excess amount of retinal. The same treatment, however, had no effect on the ninaA mutant. Thus, while the mutations ninaA and ninaB both apparently exert their effects on rhodopsin, one (ninaA) appears to express its effect on the opsin portion of one particular class of rhodopsin (see also reference 22), whereas the other (ninaB) seems to affect the availability of chromophore. One of the more surprising findings of this study is that the membrane particle density in the vestigial R1-6 rhabdomeres of oraJK84 is extremely low (Fig. 4; Tables I and II). In fact, the particle density in these vestigial rhabdomeres is the lowest we have obtained in any rhabdomeres of any particle-deficient flies studied (Tables I and II).
It has been known for some time that oraJK84 interferes with the formation of the R1-6 rhabdomeres (15,19,20). One plausible mechanism for the failure of R1-6 rhabdomeres to form in oraJK84 is that the mutation blocks the differentiation of R1-6 rhabdomeres during development. Such a mechanism, however, need not necessarily affect the density of membrane particles in R1-6 photoreceptors. The fact that the membrane particle density is extremely low in R1-6 photoreceptors suggests another possibility: that the oraJK84 mutation might block the synthesis of (a) major polypeptide(s) specific for R1-6 cells and that the loss of the polypeptide(s) in turn leads to the loss of rhabdomeres. Another objective of the present study was to assess to what extent the results obtained from the particle-deficient Drosophila mutants support the view that the majority of the rhabdomeric membrane particles are structural correlates of rhodopsin. As may be seen in Table II, whenever a decrease in rhabdomeric membrane particle density was observed in a class of flies, the rhodopsin content was also found to be reduced in the same class of flies. In fact, the particle-deficient flies arranged in descending order of particle density (Table II) were found to be in descending order of rhodopsin content as well. Moreover, in ninaAP228 there is evidence suggesting that the amount of visual pigment is reduced in R1-6 photoreceptors but not in R7 photoreceptors (22). All these observations are in strong support of a qualitative correlation between the rhabdomeric particles and rhodopsin molecules. Quantitative relationships are difficult to establish, however. One of the difficulties is that the fractional decrease in rhodopsin content is not equal to the fractional decrease in the membrane particle density in any given class of particle-deficient flies considered (Table II).²
In every case so far examined, the amount of decrease in rhodopsin level was consistently greater than the amount of decrease in rhabdomeric particle density (Table II; see also references 14 and 22). To consider the significance of this difference between rhodopsin content and rhabdomeric particle density, it is necessary to have independent measurements of opsin content, because opsin molecules, with no chromophore, could contribute to rhabdomeric particle measurements but not to spectrophotometric rhodopsin measurements. Available evidence suggests, however, that opsin does not account for the observed difference between rhodopsin content and particle density (22). Another difficulty in quantitatively relating the rhabdomeric particles to rhodopsin molecules is that the diameters of the rhabdomeric membrane particles are on the average relatively large (106 Å for wild type) and vary over a wide range (40-220 Å; Fig. 6). The molecular weight of Drosophila rhodopsin is reported to be 37,000 daltons (27), corresponding to a diameter of about 50 Å if globular in shape. Thus, the rhabdomeric particles seem too large and vary too widely in size to correspond to individual rhodopsin molecules. A part of the discrepancy between the calculated rhodopsin diameter and observed particle diameters probably is due to freeze-fracture artifacts. These artifacts, however, do not seem likely to be solely responsible for the discrepancy, because techniques that should have minimized artifacts did not materially reduce the average particle size (~90 Å) or eliminate the size variation.³ Nevertheless, the present work is in strong support of the conclusion that at least the majority of the membrane particles on the rhabdomeres of Drosophila photoreceptors are formed by rhodopsin.

²Rhodopsin contents shown in Table II represent the total rhodopsin level in the rhabdomeres viewed by the deep pseudopupil technique.
Therefore, if the rhabdomere sizes of the particle-deficient flies differ significantly from that of wild type, the relative rhodopsin contents shown will not correspond to "relative rhodopsin concentrations." We found no obvious differences in rhabdomere size among the particle-deficient flies examined for rhodopsin content and made no attempt to correct our rhodopsin measurements.
² Unpublished data by G. Bellin, Department of Cell Biology, Swiss Federal Institute of Technology, communicated to R. H. Schinz; and unpublished data by T. Suda and R. H. Schinz, Strahlenbiologisches Institut der Universität Zürich. The techniques used were rapid freezing with pressurized liquid nitrogen and fracturing at low specimen temperature in high vacuum (about -170°C and 1.5 x 10^-7 torr). These techniques should have eliminated artifacts due to chemical treatment of the specimens and minimized plastic deformation of the membrane components.
The third objective of the present work was to determine whether there are any microstructural differences between the rhabdomeric membrane and the adjacent nonrhabdomeric plasma membrane. Three criteria were employed to compare the two types of membrane: (a) the density of membrane particles (Table III), (b) the particle diameter distribution (data not shown), and (c) the effects of the mutations and of vitamin A deprivation on particle density (Table III; Figs. 8-11). None of the three criteria revealed any striking differences between the two membranes. The similarity in particle density (Table III) and in particle diameter distribution between the two classes of membrane suggests that the same population(s) of membrane particles is present in both. Moreover, the parallel decrease in the number of rhabdomeric and plasma membrane particles in particle-deficient flies supports the view that rhodopsin is present in both classes of membrane, at least for the part of the nonrhabdomeric membrane examined in this work.
Brown and Schwemer (P. K. Brown, personal communication) have reached similar conclusions from their studies of normal and vitamin A-deprived blowflies. Fernandez and Nickel (9), on the other hand, have reported that in the crayfish the particle density in the nonrhabdomeric membrane is considerably lower than that in the rhabdomeric membrane, as did Chi and Carlson (5) for the housefly. The source of this disagreement is not clear. Thus, as in vertebrates (1,7,16,31,41), rhodopsin does not appear to be confined to the differentiated membrane of the light-receptive organelle in certain invertebrates (at least in certain species of flies). Nevertheless, because the rhabdomeres contain most of the rhodopsin-bearing membrane, these differentiated membrane structures are expected to be responsible for most of the photon capture by the photoreceptor.
Critical Role of ADAMTS-4 in the Development of Sporadic Aortic Aneurysm and Dissection in Mice

Sporadic aortic aneurysm and dissections (AADs) are common vascular diseases that carry a high mortality rate. ADAMTS-4 (a disintegrin-like and metalloproteinase with thrombospondin motifs-4) is a secreted proteinase involved in inflammation and matrix degradation. We previously showed that ADAMTS-4 levels were increased in human sporadic descending thoracic AAD (TAAD) samples. Here, we provide evidence that ADAMTS-4 contributes to aortic destruction and sporadic AAD development. In a mouse model of sporadic AAD induced by a high-fat diet and angiotensin II infusion, ADAMTS-4 deficiency (Adamts-4−/−) significantly reduced challenge-induced aortic diameter enlargement, aneurysm formation, dissection, and aortic rupture. Aortas in Adamts-4−/− mice showed reduced elastic fibre destruction, versican degradation, macrophage infiltration, and apoptosis. Interestingly, ADAMTS-4 was directly involved in smooth muscle cell (SMC) apoptosis. Under stress, ADAMTS-4 translocated to the nucleus in SMCs, especially in apoptotic SMCs. ADAMTS-4 directly cleaved and degraded poly ADP ribose polymerase-1 (a key molecule in DNA repair and cell survival), leading to SMC apoptosis. Finally, we showed significant ADAMTS-4 expression in aortic tissues from patients with sporadic ascending TAAD, particularly in SMCs. Our findings indicate that ADAMTS-4 induces SMC apoptosis, degrades versican, promotes inflammatory cell infiltration, and thus contributes to sporadic AAD development.

Aortic aneurysm and dissection (AAD) are common vascular diseases 1. Aortic rupture frequently leads to death, particularly when the disease involves the thoracic aorta. Sporadic AADs, which account for more than 70% of cases, are caused by the progressive loss of smooth muscle cells (SMCs) and the destruction of extracellular matrix (ECM) 2,3.
Identifying molecules involved in these processes is critical for developing pharmacologic strategies to prevent AAD formation and progression. Matrix proteases, such as matrix metalloproteinases (MMPs) 4, play a critical role in the degradation of aortic ECM and in the formation of AAD. ADAMTSs (a disintegrin and metalloproteinase with thrombospondin motifs) 5-8 are key extracellular metalloproteinases involved in ECM turnover. Similar to MMPs, ADAMTSs have been implicated in tissue destruction 9-11 and inflammation 12,13. We have recently reported increased expression of ADAMTS-4 in aortic tissues from patients with descending thoracic AAD 14. However, it is unclear whether ADAMTS-4 plays a role in AAD development. In this study, we tested the hypothesis that ADAMTS-4 plays a critical role in AAD development. Specifically, we assessed the role of ADAMTS-4 in sporadic AAD development in mice, studied its effects on macrophage migration and SMC apoptosis in cultured cells, and examined its expression in aortic tissues from patients with sporadic ascending thoracic aortic aneurysm and dissection (aTAAD). Our findings show that ADAMTS-4 plays a critical role in sporadic AAD formation by degrading versican, promoting inflammatory cell infiltration, and inducing SMC apoptosis. Furthermore, ADAMTS-4 levels were significantly increased in sporadic TAAD. ADAMTS-4 expression is increased in mice with AAD. To determine the role of ADAMTS-4 in AAD development, we first examined ADAMTS-4 expression in the aortas of mice in which sporadic AAD was induced by a high-fat diet and angiotensin II infusion 15. ADAMTS-4 was barely detectable in the aortas of unchallenged mice but was significantly induced in the aortic tissue of challenged mice (Fig. 1A), especially in SMCs and macrophages in the aortic wall (Fig. 1B). Similarly, ADAMTS-1 was also increased in aortas from challenged mice (Fig. S1).
ADAMTS-4 deficiency in mice significantly reduces AAD formation and rupture. We next examined the role of ADAMTS-4 in AAD development by comparing AAD formation in challenged wild-type (WT) mice and in challenged ADAMTS-4-deficient (Adamts-4−/−) mice that had no detectable ADAMTS-4 protein (Fig. 1C). As expected, challenge with a high-fat diet and angiotensin II infusion led to aortic destruction (Fig. 1D) and aortic enlargement (Fig. 1E) in WT mice. Importantly, aortic destruction was partially prevented (Fig. 1D) and aortic enlargement (Fig. 1E) was significantly reduced in challenged Adamts-4−/− mice as compared with challenged WT mice. Additionally, aortic challenge caused significant aortic dilatation (aortic diameter >1.25× the mean aortic diameter of unchallenged mice), aneurysm (aortic diameter >1.5× the mean aortic diameter of unchallenged mice), dissection (the presence of intramural thrombus), and rupture in thoracic and abdominal segments. The incidence of AAD (aortic aneurysm, dissection, and rupture), severe AAD (dissection and rupture), and rupture was reduced in ascending (Fig. 1F), descending thoracic (Fig. 1G), and suprarenal abdominal (Fig. 1H) aortic segments. Furthermore, the overall incidence of aortic dilatation, AAD, severe AAD, and aortic rupture in all aortic segments was significantly reduced in challenged Adamts-4−/− mice as compared with challenged WT mice (Fig. 1I). Finally, we observed a significant increase in survival at 28 days in Adamts-4−/− mice compared with WT mice (Fig. 1J). Together, these data clearly indicate a critical role of ADAMTS-4 in AAD development in both the thoracic and abdominal aortic regions. ADAMTS-4 deficiency in mice partially prevents aortic destruction and proteoglycan degradation. We further performed a histologic analysis of the aorta in challenged WT and Adamts-4−/− mice.
Hematoxylin and eosin staining and Verhoeff-van Gieson elastic fibre staining showed severe destruction and elastic fibre fragmentation in the aortas of challenged WT mice (Fig. 2A). However, aortic destruction was reduced in Adamts-4−/− mice, suggesting the involvement of ADAMTS-4 in aortic destruction. The main substrates of ADAMTS-4 are proteoglycans, which are important for vascular function. Using an antibody that specifically detects degradation products of the proteoglycan versican, we found an increase in cleavage products in the aortas of challenged WT mice when compared with the aortas of unchallenged mice (Fig. 2B and C). Moreover, the amount of degradation product was reduced in the aortas of challenged Adamts-4−/− mice (Fig. 2B and C), indicating the involvement of ADAMTS-4 in versican degradation during AAD formation. ADAMTS-4 deficiency in mice decreases inflammatory cell infiltration in the aorta. Inflammatory cell infiltration in the aortic wall is a well-known trigger for ECM degradation and aortic injury that leads to AAD 16,17. Therefore, we examined the presence of inflammatory cells in the aortas of challenged WT and Adamts-4−/− mice. Figure 3A shows that CD68+ and F4/80+ macrophages were highly abundant in the aortas of challenged WT mice, whereas significantly fewer CD68+ and F4/80+ macrophages were detected in the aortas of challenged Adamts-4−/− mice, possibly because of less inflammatory cell invasion. To further examine whether ADAMTS-4 directly affects macrophage invasion, we examined the invasion ability of macrophages from Adamts-4−/− mice and WT mice. Interleukin-1 (IL-1) stimulated macrophage invasion through the ECM-coated membrane (Fig. 3B). However, significantly less IL-1-induced invasion was observed in macrophages from Adamts-4−/− mice than in macrophages from WT mice (Fig. 3B), indicating a potential role of ADAMTS-4 in promoting macrophage invasion. ADAMTS-4 deficiency in mice reduces aortic apoptosis.
Aortic cell apoptosis, a significant feature of AAD, may contribute to aortic destruction and disease development 18-20. Increasing evidence suggests that ADAMTS-4 is involved in apoptosis 21,22. We therefore examined the role of ADAMTS-4 in aortic apoptosis. As shown in Fig. 4A, TUNEL-positive cells were abundant in the aortic wall of challenged WT mice but were significantly reduced in the aortic wall of challenged Adamts-4−/− mice. Consistent with these results, we observed decreased cleavage of PARP-1, an important molecule involved in DNA repair and cell survival, in the aortas of challenged Adamts-4−/− mice when compared with the aortas of challenged WT mice (Fig. 4B). Together, these findings suggest the potential involvement of ADAMTS-4 in apoptosis during AAD formation. ADAMTS-4 is directly involved in SMC apoptosis. Among the potential mechanisms by which ADAMTS-4 contributes to aortic destruction, we focused on its role in aortic apoptosis. Although ADAMTS-4 has been shown to participate in apoptosis 21,22, the underlying mechanisms are poorly understood. ADAMTS-4 may promote apoptosis by degrading versican, which has been shown to inhibit apoptosis 23-26. However, ADAMTS-4 may have a more direct role in apoptosis. To examine this possibility, we first tested several agents for their potency in stimulating the induction of ADAMTS-4 expression, including palmitic acid (PA), tumour necrosis factor-α, angiotensin II, and H2O2. PA was the most potent stimulus (Fig. 5A), inducing a dose-dependent increase in ADAMTS-4 (Fig. 5B). Additionally, in cells, PA mimics the elevated serum free fatty acid levels seen in our sporadic AAD model (Fig. S2). Thus, we used PA in these studies as a trigger to induce ADAMTS-4 expression in human thoracic aortic SMCs. We showed that PA induced apoptosis (Fig. 5C) and the cleavage of PARP-1 (Fig.
5D), both of which were partially prevented by ADAMTS-4 siRNA, indicating that ADAMTS-4 may directly promote SMC apoptosis. ADAMTS-4 translocates to the nucleus of apoptotic cells. To further understand the direct role of ADAMTS-4 in PARP-1 cleavage and apoptosis, we studied the cellular location of ADAMTS-4. Immunostaining showed that ADAMTS-4 was localized primarily in the cytoplasm (Fig. 6A) in cells not exposed to PA, whereas a significant amount of ADAMTS-4 was detected in the nucleus of PA-treated cells (Fig. 6A). Western blot analysis confirmed that ADAMTS-4 was present in the nuclear fraction and that the levels of nuclear ADAMTS-4 were increased in PA-treated cells (Fig. 6B). Consistent with our findings in cultured cells, ADAMTS-4 was also detected in the nuclei of SMCs in diseased aortas in our mouse AAD model (Fig. 6C). Furthermore, while ADAMTS-4 was present in the cytoplasm of healthy SMCs, it was highly abundant in the nucleus of apoptotic SMCs (i.e., cells with condensed nuclei) (Fig. 6D). Co-immunofluorescence staining showed the presence of ADAMTS-4 in the nuclei of TUNEL+ apoptotic cells (Fig. 6E). These findings indicate that ADAMTS-4 may translocate to the nucleus when SMCs are under stress. ADAMTS-4 directly interacts with and cleaves PARP-1. PARP-1 is a nuclear protein that plays a critical role in DNA repair and cell survival 27,28. We therefore examined whether ADAMTS-4 can induce apoptosis by targeting PARP-1. Double immunofluorescence staining showed that ADAMTS-4 colocalized with PARP-1 in the nucleus of PA-treated SMCs (Fig. 6F). Co-immunoprecipitation experiments showed that ADAMTS-4 interacted with PARP-1 in PA-treated SMCs (Fig. 6G). Finally, by using a direct cleavage assay, we demonstrated that ADAMTS-4 can directly cleave PARP-1: incubating recombinant PARP-1 with recombinant ADAMTS-4 (Fig. 6H) markedly increased PARP-1 cleavage and reduced full-length PARP-1. The PARP-1 cleavage was partially prevented by a pan-ECM protease inhibitor. Together, our data suggest that, in response to stress, the ECM protease ADAMTS-4 can translocate to the nucleus and directly cleave/degrade PARP-1, which may result in impaired DNA repair and lead to SMC apoptosis.
[Fig. 2 legend fragment: quantification studies showing less elastic fibre fragmentation (A), and Western blot analysis (B) with representative immunofluorescence staining images (C) indicating less versican degradation, in aortas from challenged Adamts-4−/− mice than in aortas from challenged WT mice.]
ADAMTS-4 expression is increased in human sporadic aTAAD tissue. Finally, we examined ADAMTS-4 expression in aortic tissues from patients with sporadic ascending thoracic aortic aneurysm (aTAA) and dissection (aTAD). ADAMTS-4 levels were significantly higher in aortic tissues from aTAA and aTAD patients than in ascending aortic tissue from age-matched organ donors (controls) (Fig. 7A). ADAMTS-4 was present in SMCs in the medial layer of aTAAD tissues (Fig. 7B). We also detected TUNEL+ apoptotic SMCs (Fig. 7C) and the presence of ADAMTS-4 in the nuclei of TUNEL+ apoptotic cells (Fig. 7D) in diseased tissues. These findings indicate significant upregulation of ADAMTS-4 and its potential association with SMC apoptosis in sporadic aTAAD patient tissues.

Discussion

In this study, we provide evidence that ADAMTS-4 plays an important role in aortic destruction and AAD formation. Using a mouse model of sporadic AAD, we have shown that Adamts-4 deficiency significantly reduced aortic enlargement, AAD formation, and aortic rupture in both thoracic and abdominal segments. Adamts-4 deficiency also reduced aortic destruction, versican degradation, inflammatory cell infiltration, and SMC apoptosis. In cultured SMCs, we found that ADAMTS-4 moved into the nucleus, directly cleaved PARP-1, and induced SMC apoptosis. Finally, we found significant expression of ADAMTS-4 in aortic tissue samples from patients with aTAAD.
Together, our results suggest that ADAMTS-4 plays a critical role in the development of AAD. The ADAMTS enzyme group has been implicated in inflammation and tissue destruction in cancer metastasis 9-11 and in inflammatory diseases such as arthritis 12,13. Several studies have suggested that the dysregulation of ADAMTS-4 may also contribute to the development of vascular diseases. For example, Wagsater and colleagues 29 showed that ADAMTS-4 is highly expressed in macrophage-rich areas of human atherosclerotic plaque and may be involved in atherosclerotic plaque formation and stability. We and others 14,30 have shown that ADAMTS-4 levels are increased in aortic tissues from patients with sporadic thoracic AAD. Here, we provide evidence that Adamts-4 deficiency in mice reduced challenge-induced aortic destruction, aortic enlargement, and AAD formation and rupture. Our study establishes a critical role for ADAMTS-4 in AAD development and indicates that ADAMTS-4 may be a potential target for TAAD treatment. ADAMTS-4 may induce aortic destruction through several mechanisms. ADAMTS-4 degrades proteoglycans, which are ECM components. Versican 21, the most studied target of ADAMTS-4, plays an important role in maintaining vascular structural and functional integrity 23 by retaining water, creating reversible compressive structures 24, regulating elastic fibre assembly, inhibiting cell apoptosis, and stimulating cell growth and angiogenesis 24-27. Degradation of versican by ADAMTS-4 may be an underlying mechanism for its contribution to vascular diseases 23. We recently observed an increase in versican degradation in human descending thoracic AAD tissue samples, and versican degradation was correlated with increased ADAMTS-4 protein levels 16.
In our current study, we showed significantly reduced versican degradation products in Adamts-4−/− mice, suggesting that ADAMTS-4-mediated versican degradation may be partially responsible for aortic destruction and AAD formation. Our findings also suggest that ADAMTS-4 may induce AAD formation by promoting aortic inflammation. Inflammatory cells such as macrophages play a critical role in the progression of aortic aneurysms 31. Extracellular matrix proteases such as MMPs have been shown to promote inflammatory cell infiltration by digesting ECM 31. Similarly, ADAMTS-1 is capable of stimulating cell invasion during cancer metastasis by digesting ECM 32-34. Consistent with previous findings 15, we detected an abundance of macrophages in the aortic wall of challenged WT mice. Importantly, inflammatory cell infiltration was significantly lower in challenged Adamts-4−/− mice than in challenged WT mice. Although it is possible that the reduced inflammatory infiltration was due to less aortic injury and inflammatory cell attraction in challenged Adamts-4−/− mice, our findings suggest that the reduced infiltration may be due to decreased inflammatory cell invasion. Adamts-4−/− macrophages showed reduced ECM invasion in response to IL-1 stimulation, indicating that ADAMTS-4 may digest ECM and promote macrophage infiltration. Our study suggests that ADAMTS-4 may also contribute to AAD by promoting aortic SMC apoptosis, which has been associated with AAD development 18-20. The mechanisms underlying the potential role of ADAMTS-4 in apoptosis 21,22 are not well understood, but one potential mechanism may involve ADAMTS-4-induced degradation of versican, which functions to inhibit apoptosis 21,23-26. Our novel findings indicate that ADAMTS-4 may directly induce apoptosis in SMCs. We showed that siRNA-mediated knockdown of ADAMTS-4 in PA-treated SMCs reduced the activation of apoptosis pathways and apoptotic cell death.
These results support those of a recent study that also showed a direct role for ADAMTS-4 in apoptosis 22. Thus, ADAMTS-4 may promote apoptosis by degrading versican or by directly inducing apoptosis in SMCs. When we further investigated the mechanisms by which ADAMTS-4 directly induces SMC apoptosis, we found that ADAMTS-4 is translocated from the cytoplasm into the nucleus in the presence of PA, indicating a potential nuclear function of this extracellular protease. In addition, when we searched for potential targets of ADAMTS-4 in the nucleus, we found that ADAMTS-4 directly bound and cleaved PARP-1, which modulates protein functions 27,28,35 by adding poly(ADP-ribose) to its targets. Activated by various cellular stress signals 27, PARP-1 plays a critical role in regulating many cellular functions, including cell metabolism 36 and inflammation 35. The most critical function of PARP-1 is detecting and repairing DNA damage 27,28, a process that is necessary for cell survival. Moreover, increasing evidence has suggested that PARP-1 also promotes cell death 27,28,35. The switch between PARP-1-mediated cell survival and death is partially controlled by its cleavage 37; full-length PARP-1 promotes cell survival, whereas cleaved PARP-1 can induce cell death 37. By directly cleaving PARP-1, ADAMTS-4 may prevent PARP-1-mediated DNA repair and cell survival and thereby induce apoptosis. A recent study has suggested that the catalytic activity of ADAMTS-4 may not be required for apoptosis, as both the catalytically inactive full-length form and the C-terminal domain of ADAMTS-4 could increase apoptosis 22. Further studies are necessary to understand whether ADAMTS-4 induces apoptosis through both catalytic activity-dependent and -independent mechanisms and what the relationship is between these mechanisms. Other mechanisms may also be responsible for the contributions of ADAMTS-4 to vascular tissue destruction.
ADAMTS-1 inhibits cell proliferation by binding and sequestering growth factors, such as vascular endothelial growth factor and fibroblast growth factor 32. ADAMTS-4 may act in a similar fashion by inhibiting SMC growth and thus preventing aortic repair, although this possibility requires further investigation. In addition, questions remain about what causes the increase in ADAMTS-4 levels in AAD and how to prevent ADAMTS-4 induction and activation. Future studies will aim to identify the factors and pathways that promote ADAMTS-4 induction. Finally, because of its role in AAD development, ADAMTS-4 may serve as a therapeutic target. Several recently developed ADAMTS inhibitors have been shown to prevent inflammation and tissue destruction in a rat model of arthritis 38-41. The potential use of ADAMTS-4 inhibitors in treating AAD warrants investigation. However, ADAMTS isoforms play diverse and possibly opposite roles in AAD development, as shown by a recent study suggesting that ADAMTS-1 may be an important mediator of vascular wall homeostasis 42. Thus, in considering the therapeutic use of ADAMTS-4 inhibition, it is important to develop isoform-specific inhibitors rather than non-specific ADAMTS inhibitors for AAD prevention/treatment. In conclusion, this study indicates that ADAMTS-4 plays an important role in AAD formation by promoting ECM destruction, inflammation, and apoptosis in the aorta. Furthermore, ADAMTS-4 can translocate into the nucleus, interact with and cleave PARP-1, and directly induce apoptosis. Future studies are needed to examine whether the pharmacologic inhibition of ADAMTS-4 prevents aortic destruction and AAD progression.

Materials and Methods

Patient enrolment. The protocol for collecting human tissue samples was approved by the institutional review board at Baylor College of Medicine. Informed consent forms were signed by all participants before enrolment.
All experiments conducted on human tissue samples were performed in accordance with the relevant guidelines and regulations. For this study, we enrolled patients with sporadic aTAA (n = 10) and acute aTAD (n = 10) who were undergoing elective open operation to replace the diseased aorta with a graft. During aneurysm repair, we routinely excise tissue from the anterior-lateral portion of the aortic wall (the outer wall of the false lumen in dissection cases) in the region of the largest aortic diameter; a portion of this discarded tissue was collected for this study. Patient characteristics are shown in Table 1. In addition, we used aortic tissues from age-matched organ donors without aortic aneurysm, dissection, coarctation, or previous aortic repair as control samples (International Institute for the Advancement of Medicine, Jessup, PA, USA). To minimize aortic damage due to prolonged ischemia, we selected donors with a cardiac arrest time of less than 60 minutes; the aortas were collected within 60 minutes of termination of life support and were preserved in UW Belzer solution and shipped to our lab on wet ice. The time from control aorta collection to tissue processing/banking was <24 hours. Periaortic fat and intraluminal thrombus were trimmed from all aortic samples. The samples were rinsed with cold 0.9% normal saline and then divided; one portion was immediately snap-frozen in liquid nitrogen and stored at -80 °C for protein extraction, and the other portions were embedded in optimal cutting temperature (OCT) compound for immunofluorescence staining. At the end of the 8-week study period, mice were euthanized, and their aortas were irrigated with cold phosphate-buffered saline (PBS). Aortas were embedded in OCT compound for histology and immunofluorescence staining or were snap-frozen for protein analysis. 
Serum samples were collected, and the level of free fatty acids in the serum was detected by using commercially available kits (Abcam) according to the manufacturer's instructions. Mouse study design. Criteria for aortic aneurysm, dissection, and rupture. We measured the diameter of the ascending, arch, and descending thoracic aortic segments of the extracted aortas. In euthanized mice, we exposed and rinsed the aorta with cold PBS and removed the periaortic tissues. The aorta was excised and further cleaned and rinsed with cold PBS to remove any residual blood in the lumen. Images of the aorta were obtained using an Olympus SZX7 microscope at a magnification of 0.4× (scale bar, 2 mm), and the diameter of each aortic segment was measured with DP2-BSW software (Olympus Life Science Solutions, Center Valley, PA, USA) by two independent observers who were blinded to the animal group. The mean aortic diameters of unchallenged WT mice (n = 10) and Adamts-4−/− mice (n = 10) served as baseline diameters for determining aortic aneurysm formation in challenged mice. The mean diameter of the different regions was calculated and compared among the groups. For each aortic segment, dilatation in challenged WT or Adamts-4−/− mice was defined as an aortic diameter ≥1.25 but <1.5 times the mean aortic diameter of the segment in unchallenged mice with the same genetic background; aneurysm in challenged WT or Adamts-4−/− mice was defined as an aortic diameter ≥1.5 times the mean aortic diameter of the segment in unchallenged mice with the same genetic background. We defined aortic dissection as the presence of a tear in the aortic media or at the media-adventitia boundary with the presence of intramural thrombus or a false-lumen hematoma in an aortic cross section. Aortic rupture and premature death were documented. Severity of aortic aneurysm and dissection.
The gross appearance of each aorta was assessed for severity of AAD formation according to a scheme based on the classification system described by Daugherty and colleagues 20: aortic dilatation (aortic diameter ≥1.25 but <1.5 times the aortic diameter of unchallenged mice with the same genetic background); aortic aneurysm (aortic diameter ≥1.5 times the aortic diameter of unchallenged mice with the same genetic background); and dissection (indicated by intramural thrombus). We further defined AAD (including aortic aneurysm, dissection, and rupture) and severe AAD (including aortic dissection and aortic rupture). Each aorta was evaluated independently by two observers blinded to the animal group. In cases of discrepancy, the observers discussed the cases and reached an agreement on the classification. Analysis of aortic structure. Paraffin-embedded aortic sections were subjected to haematoxylin and eosin staining and Verhoeff-van Gieson elastin staining (Sigma-Aldrich) according to the manufacturer's instructions. Two independent observers who were blinded to the animal group allocation examined 3 aortic sections per aorta from 5-10 mice per group. The extent of elastic fibre fragmentation was scored on a scale of 0 to 3 (grade 0 = none, grade 1 = minimal, grade 2 = moderate, and grade 3 = severe). TUNEL assay and immunofluorescent staining. To study apoptosis, we performed TUNEL assays using an in-situ cell death detection kit (Roche Applied Science, Indianapolis, IN, USA) according to the manufacturer's instructions. For TUNEL assay and immunofluorescent co-staining, frozen sections of aorta were fixed with Cytofix (BD Biosciences), permeabilized with Perm/Wash (BD Biosciences), and blocked with 10% normal goat serum in PBS for 1 h. After the TUNEL assay, sections were stained with anti-SM22-α antibody at 4 °C overnight, followed by staining with an Alexa Fluor 488 goat anti-rabbit IgG antibody at room temperature for 1 h.
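The diameter thresholds and dissection/rupture definitions given in the criteria and severity passages above map naturally onto a small decision rule per aortic segment. The sketch below is a hypothetical helper (the function name and the ordering of checks are my own); it assumes, as the classification scheme implies, that rupture and the presence of intramural thrombus take precedence over the diameter ratios.

```python
def classify_aortic_segment(diameter, baseline_mean,
                            has_intramural_thrombus=False, ruptured=False):
    """Classify one aortic segment using the thresholds stated in the text.

    `baseline_mean` is the mean diameter of the same segment in unchallenged
    mice of the same genetic background. Returns one of: "rupture",
    "dissection", "aneurysm", "dilatation", "normal".
    """
    if ruptured:
        return "rupture"
    if has_intramural_thrombus:  # dissection is indicated by intramural thrombus
        return "dissection"
    ratio = diameter / baseline_mean
    if ratio >= 1.5:             # aneurysm: >=1.5x baseline
        return "aneurysm"
    if ratio >= 1.25:            # dilatation: >=1.25x but <1.5x baseline
        return "dilatation"
    return "normal"

def severity_groups(label):
    """Composite endpoints as defined in the text: AAD includes aneurysm,
    dissection, and rupture; severe AAD includes dissection and rupture."""
    return {"AAD": label in ("aneurysm", "dissection", "rupture"),
            "severe AAD": label in ("dissection", "rupture")}
```

For example, a segment measuring 1.3 times its baseline mean with no thrombus falls in the "dilatation" category and counts toward neither composite endpoint.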
Sections or cells were observed with a Leica SP5 confocal microscope (Leica Microsystems) or an Olympus immunofluorescence microscope. For each aorta or cell treatment condition, images from 5 randomly selected views were captured. For each image, the number of positive cells and the total number of nuclei were quantified, and the percentage of positive cells was calculated. Western blot analysis. Protein lysates from treated cells or aortic tissues were prepared as previously described 19. Nuclear and cytoplasmic proteins were isolated using the NE-PER Nuclear and Cytoplasmic Extraction Kit (Thermo Fisher Scientific), following the manufacturer's instructions. Protein samples (15 µg per lane) were subjected to sodium dodecyl sulfate (SDS) polyacrylamide gel electrophoresis and were transferred to PVDF membranes. The membranes were blocked for 1 h in blocking solution comprising Tris-buffered saline containing 5% nonfat dried milk and 0.5% Tween 20 and then were incubated with a primary antibody against ADAMTS-4, PARP-1, cleaved PARP-1 (Asp214) (9548, Cell Signaling), cleaved caspase-3 (9661, Cell Signaling), or the versican degradation product. The blots were then washed with PBS with Tween, incubated with horseradish peroxidase-conjugated anti-rabbit or anti-mouse secondary antibodies (Cell Signaling), and developed with Clarity Enhanced Chemiluminescence (ECL; Bio-Rad). The blots were exposed with HyBlot ES autoradiography film (Denville Scientific Inc., Holliston, MA) and quantified by using ImageJ software (National Institutes of Health). Statistical analysis. All quantitative data are presented as the mean ± standard deviation. Data were analysed by using SPSS software, version 11.0 (SPSS Inc., Chicago, IL, USA). Normality of the data was examined by using the Kolmogorov-Smirnov test. Comparisons between two groups were performed by using independent t tests; multiple groups were compared by using one-way analysis of variance (ANOVA) or the Kruskal-Wallis test, as appropriate.
P values were adjusted with the Bonferroni method for pairwise comparisons when indicated. The incidence of aortic dissection or aneurysm was analysed by Fisher's exact test. Kaplan-Meier survival curves were plotted to analyse the mouse survival rates, and the differences were analysed with the log-rank (Mantel-Cox) test. For all statistical analyses, 2-tailed probability values were used. A probability value of P < 0.05 was considered significant.
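A hedged sketch of the statistical workflow described above, using SciPy equivalents of the SPSS tests on synthetic data (group sizes and values are illustrative; the Kaplan-Meier/log-rank step is omitted because it requires a survival-analysis library):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
sham = rng.normal(10.0, 2.0, 30)   # synthetic measurements, e.g. sham group
low = rng.normal(12.0, 2.0, 30)    # synthetic challenged group
high = rng.normal(14.0, 2.0, 30)   # synthetic second challenged group

# Normality check (the text uses the Kolmogorov-Smirnov test),
# on standardized values against the standard normal CDF.
_, p_norm = stats.kstest((sham - sham.mean()) / sham.std(ddof=1), "norm")

# Two groups: independent t test.
_, p_t = stats.ttest_ind(sham, low)

# More than two groups: one-way ANOVA or Kruskal-Wallis, as appropriate.
_, p_anova = stats.f_oneway(sham, low, high)
_, p_kw = stats.kruskal(sham, low, high)

# Incidence comparison (e.g. dissection yes/no per group): Fisher's exact test
# on a 2x2 contingency table of counts.
_, p_fisher = stats.fisher_exact([[12, 8], [3, 17]])
```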
Electronic Structure, Spectroscopic (IR, Raman, UV-Vis, NMR), Optoelectronic, and NLO Properties Investigations of Rubescin E (C31H36O7) Molecule in Gas Phase and Chloroform Solution Using Ab Initio and DFT Methods

1 University of Yaounde I, Faculty of Science, Department of Physics, P.O. Box 812, Yaounde, Cameroon
2 CETIC (Centre d'Excellence Africain en Technologies de l'Information et de la Communication), Université de Yaoundé I, B.P. 8390, Yaoundé, Cameroon
3 University of Bamenda, National Higher Polytechnic Institute, Department of Electrical and Electronic Engineering, P.O. Box 39, Bambili, Cameroon
4 University of Dschang, IUT Bandjoun, Department of General and Scientific Studies, P.O. Box 134, Bandjoun, Cameroon
5 University of Yaounde I, Faculty of Science, Department of Chemistry, P.O. Box 812, Yaounde, Cameroon

Introduction
Many molecules discovered through plant research have found applications in medicine, where they are used to treat many diseases, among which is malaria caused by Plasmodium falciparum. The new limonoid named Rubescin E (C31H36O7), extracted from the roots of Trichilia rubescens collected in Cameroon, has been evaluated against erythrocytic stages of the 3D7 strain of Plasmodium falciparum and exhibited significant in vitro antiplasmodial activity, with an IC50 value of 1.13 µM [1]. FT-IR performed on the Rubescin E molecule revealed the presence of an α,β-unsaturated carbonyl moiety at 1720 cm−1 and 1664 cm−1. These values can be obtained theoretically by performing vibrational frequency calculations on the title molecule and used to explain the different motions of atoms or groups of atoms in the molecular system. The 1D (1H, 13C NMR) and 2D NMR spectra were run on a Bruker AV spectrometer [1] in order to predict the structure of the title molecule, and they are computed in this work in order to bring out similarities between the earlier experiment and the theoretical calculations performed here.
In this work, quantum chemical calculations were performed in order to determine the electronic structure (energy gap, charge distributions, NLO properties, vibrational frequencies, NMR, and UV-vis spectra) and some physicochemical properties (the 3J(H-H) proton-proton coupling constant, the global reactivity descriptors, and some geometrical parameters such as bond lengths and bond angles) of the Rubescin E molecule. To the best of our knowledge, no theoretical study has yet been performed on the title molecule, which is what motivated us to investigate its electronic structure, spectroscopic properties, and some physicochemical properties. Except for the NMR, UV-vis, 3J(H-H) coupling constants, and the vibrational frequencies obtained for the two α,β-unsaturated carbonyl moieties, most of our results could not be compared, and we are optimistic that they can serve as a reference for future experimental or theoretical research. Hartree-Fock and DFT (using the B3LYP and B3PW91 functionals) methods were used for these purposes. These properties were calculated by employing the triple split valence basis set along with polarization functions, with and without diffuse functions, as implemented in Gaussian 09, Rev. A02, in both gas phase and in a solution of chloroform. The methods and basis sets used are among the most widely used [2-5] and provide excellent results which are generally very close to experiment [6-8].
Computational Methods
Theoretical calculations were performed on Rubescin E using HF and DFT methods at the B3LYP and B3PW91 levels as implemented in the Gaussian 09W code [9]. All calculations were done in gas phase and in a solution of chloroform. No geometry restriction was applied during the optimization procedure. Solvent effects were treated within the conductor-like polarizable continuum model (CPCM). For the geometry optimization, the 6-311G(d,p) and 6-311++G(d,p) basis sets were used in both gas and solvent. Convergence criteria requiring the maximum force and displacement to be smaller than the cut-offs of 0.000015 and 0.000060, and the RMS force and displacement to be smaller than the cut-offs of 0.000010 and 0.000040, were used in order to increase the accuracy of our results. The 3J(H-H) proton-proton coupling constant, as a function of the angle between the two C-H vectors, was calculated from the optimization output using the original Karplus equation [10]. The optimized form of the molecule was then used to determine the global reactivity descriptors and the electronic and NLO properties. The net charges were also evaluated using the MPA, ESP, and NBO methods at the three levels mentioned above, in both gas phase and chloroform, with the 6-311++G(d,p) basis set. In order to confirm the stability of the molecule, the vibrational frequencies (IR and Raman) were evaluated at the 6-311G(d,p) level; no imaginary frequencies were found, leading us to conclude that the molecule is stable at the levels and basis sets considered. Time dependent density functional theory (TD-DFT) was used in gas phase with the 6-311++G(d,p) basis set in order to understand the electronic transitions of the molecule, and the results were compared to experiment. The GIAO (gauge independent atomic orbital) method was used on the optimized form of the molecule in a solution of chloroform to determine the 1H and 13C NMR spectral parameters at the three
levels and with the 6-311++G(d,p) basis set. In order to compare the calculated 1H and 13C chemical shifts with experimental results, the reference molecule widely used for this purpose, TMS (tetramethylsilane), was treated at the same levels, in the same phase, and with the same basis set.

Results and Discussion
3.1. Optimized Structure. The optimized geometry of Rubescin E obtained using the B3LYP/6-311++G(d,p) method in chloroform is shown in Figure 1. The value of the total electronic energy obtained at the B3LYP level shows that Figure 1 is the most stable structure of the molecule. The total electronic energy calculated within the two methods, in gas and in a solution of chloroform, with the 6-311++G(d,p) basis set, is given in Table 1.

Structural Properties. Part of the optimized geometrical parameters (bond lengths, bond angles) and the total electronic energy of the title molecule, in both gas and chloroform solution, are given in Table 1 for the three levels with the 6-311++G(d,p) basis set. The full description of the molecular geometry of the Rubescin E molecule in gas phase and in a solution of chloroform using ab initio (RHF) and DFT (B3LYP and B3PW91) methods with the 6-311++G(d,p) basis set can be obtained from Supplementary Material S1.
The atom numbering scheme adopted for this purpose is the same as in Figure 1. The energy differences between the two phases increase when we move from B3PW91 to B3LYP and to RHF, and are found to be approximately 0.48 eV, 0.49 eV, and 0.57 eV, respectively. The optimized bond lengths and bond angles of Rubescin E are also listed in Table 1 along with some experimental values [12-14] found in the literature for groups of compounds present in our molecule, such as furan, ethylene oxide, and tetrahydrofuran. It can be observed from Table 1 that the bond lengths obtained at the B3LYP level are slightly longer than those obtained at the B3PW91 level. These differences lie between 0.0034 Å and 0.0107 Å for C-C, between 0.0061 Å and 0.0095 Å for C-O, and between 0.0007 Å and 0.0013 Å for C=C in gas phase. The C=O bond length is better described by the DFT methods, since its values are closer to the 1.21 Å found in the literature [11]. It can also be observed that the bond lengths calculated using the Hartree-Fock and DFT methods are very close to the literature values for the specific groups of compounds present in our molecule. The observed differences vary from 0.0012 Å at the B3LYP level to 0.0363 Å at the RHF level for C-C; from 0.0002 Å at the B3PW91 level to 0.0288 Å at the B3LYP level for C-O; and from 0.0019 Å at the B3LYP level to 0.0259 Å at the RHF level for C=C, in both gas phase and chloroform solution.
The bond angles of the studied molecule differ slightly from one phase to another at each level, with the largest values obtained at the RHF level. From our results, the C-C-C bond angle varies from 96.3773° to 129.3418°, from 96.6032° to 128.8385°, and from 96.4146° to 128.7371° in the gas phase at the RHF, B3LYP, and B3PW91 levels of theory, respectively. In CDCl3, the C-C-C bond angles are similar to those obtained in the gas phase. The smallest C-C-C bond angle was the C20-C8-C29 angle and the largest was the C51-C14-C57 angle. For the C-C-O angle, the smallest value, 104.4386°, was obtained at the RHF level and the largest, 123.472°, at the B3LYP level, both in the gas phase. The C-O-C bond angle was found between 107.1084° and 123.4264° at the RHF level. Compared to known literature values [12, 14] for specific compounds present in our structure, these bond angles show good similarity. The small differences lie between 0.0268° and 1.5507° for the C-C-C angle, between 0.0595° and 3.0614° for the C-C-O angle, and between 0.0202° and 0.781° for the C-O-C angle. These differences arise because the reference groups of compounds were not isolated.
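The original Karplus relation [10], used in the Computational Methods above for the 3J(H-H) estimates, can be sketched as follows; the coefficients are the commonly quoted original Karplus values, assumed here since the paper does not restate them:

```python
import math

def karplus_3j(phi_deg):
    """Original Karplus relation for the vicinal 3J(H-H) coupling in Hz,
    as a function of the dihedral angle between the two C-H vectors.

    Coefficients (8.5/9.5 and -0.28) are the commonly quoted original
    Karplus parameters, assumed here; the paper's exact set is not given.
    """
    phi = math.radians(phi_deg)
    a = 8.5 if phi_deg <= 90 else 9.5
    return a * math.cos(phi) ** 2 - 0.28

# J is largest near 0 and 180 degrees (with J(180) > J(0), as the text
# notes from Table 2) and smallest near 90 degrees.
couplings = {angle: karplus_3j(angle) for angle in (0, 60, 90, 120, 180)}
```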
3J(H-H) Coupling Constant. The 3J(H-H) proton-proton coupling constant was calculated using the original Karplus equation [10] in gas and solvent, and the results were compared to experimental values [1] obtained by extracting Rubescin E in a solution of chloroform. From our results, we found that the calculated parameters in both gas and chloroform are similar at all the levels used. The results are also very close to experiment. As predicted in the literature [10], we observed from Table 2 that when the angle between the two C-H vectors is close to 0° or 180°, the 3J(H-H) coupling constant is large (with 3J(180°) > 3J(0°)), and it is very small when the angle is close to 90°.

Electronic Properties

Mulliken, ESP, and Natural Charge Distribution. The Mulliken atomic charges calculated at all the levels in gas phase and chloroform show positive charges for all the hydrogen atoms. The net charge on the atoms varies from -1.109653e to 1.980512e, from -1.164916e to 1.904034e, and from -0.891775e to 1.524787e in gas phase at the RHF, B3PW91, and B3LYP levels, respectively. In a solution of chloroform, the charges vary from -1.064962e to 1.826589e, from -1.206706e to 1.904292e, and from -0.945041e to 1.550492e, with some oxygen atom charges being positive, which can be explained by the fact that those oxygens are bonded to strongly negative carbon atoms. The most positively charged atoms are C63, C5, and C8, and the most negatively charged atoms are C71, C62, and C67.

The electrostatic charges were evaluated in this work using the CHelpG scheme of the Breneman model. We found from our results that the most positively charged atom is C4, followed by C62 and C2, and the most negatively charged atom is C12, followed by C5 and C7. The observation made at all levels and basis sets, in gas phase and in a solution of chloroform, is that the most positively charged atoms are directly bonded to the most negatively charged atoms.

The natural atomic charges, obtained using the natural bonding orbital method, were also used to evaluate the atomic charges of Rubescin E.
Positive and negative charges were found for all hydrogen and oxygen atoms, respectively. In this case, all carbon atoms directly bonded to hydrogen atoms were found to carry negative charges, except for those bonded to oxygen atoms. The most negative charges were calculated with the HF method and were observed for O65 (-0.69456e) and O60 (-0.68330e) in chloroform and gas phase, respectively. The most positively charged atom was found to be C62 in both gas (0.97067e, 0.80601e, and 0.81407e at the RHF, B3PW91, and B3LYP levels, respectively) and solvent (0.98887e, 0.81804e, and 0.82650e at the RHF, B3PW91, and B3LYP levels, respectively); this is because C62 is bonded to negatively charged atoms (O65, O60, and C63). The Mulliken, electrostatic, and natural atomic charge distributions are shown graphically in Figure 2. From Figure 2, one can observe that, for almost all the charge-description methods used, the most positive and most negative charges were calculated at the RHF level, in both gas and chloroform; this is because electron correlation is not well described by the HF method.

Global Reactivity Descriptors. In order to understand the relationships between the structure, stability, and reactivity of the Rubescin E molecule, the global reactivity descriptors, namely the chemical hardness (η), chemical potential (μ), chemical softness (s), electronegativity (χ), and electrophilicity index (ω), were calculated. The finite-difference equation given by (1) was used to calculate the ionization potential and electron affinity, which are generally used to obtain the parameters cited above. The IP and EA calculated from (1) were then used to calculate η, μ, s, χ, and ω using equations found in the literature [15-17]. All these parameters, calculated using the two methods in gas phase, are presented in Table 3. High values of χ and ω characterize a good electrophile, while small values indicate a good nucleophile.
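A minimal sketch of the finite-difference descriptors, assuming the standard Parr-Pearson definitions behind equation (1) and Refs. [15-17] (the energies below are illustrative, not the paper's values):

```python
def reactivity_descriptors(e_neutral, e_cation, e_anion):
    """Global reactivity descriptors from total energies (eV) via the
    vertical finite-difference approximation. Standard Parr-Pearson
    definitions, assumed to match the paper's equation (1)."""
    ip = e_cation - e_neutral        # ionization potential: E(N-1) - E(N)
    ea = e_neutral - e_anion         # electron affinity:    E(N) - E(N+1)
    eta = (ip - ea) / 2.0            # chemical hardness
    mu = -(ip + ea) / 2.0            # chemical potential
    chi = -mu                        # electronegativity
    s = 1.0 / (2.0 * eta)            # chemical softness
    omega = mu ** 2 / (2.0 * eta)    # electrophilicity index
    return {"IP": ip, "EA": ea, "eta": eta, "mu": mu,
            "chi": chi, "s": s, "omega": omega}

# Illustrative energies in eV (not from the paper's Table 3):
d = reactivity_descriptors(e_neutral=0.0, e_cation=8.0, e_anion=-1.0)
```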
The calculated vertical IP values in gas phase are larger than their corresponding values in solvent. From Table 3, we also find that placing the molecule in solvent increases its electron affinity. From the calculated IP and EA values, one can conclude that the solvent effect increases the capacity of the molecule to gain an electron rather than donate one. It also reduces the hardness of the molecule and increases its softness. Hence, the presence of solvent increases the reactivity of Rubescin E.

Frontier Molecular Orbitals. The frontier molecular orbitals of Rubescin E were evaluated using the ab initio and DFT methods. The 6-311G(d,p) and 6-311++G(d,p) basis sets were used for this purpose in gas phase and in chloroform solution. The results show that the energy gap of the molecule decreases when diffuse functions are added on all the atoms. We also found that, whatever the basis set and method used, the energy gap is greater than 4 eV, showing that the molecule is hard and can be used as an insulator in many electronic devices. In Figure 3, the 3D plots of the HOMO and LUMO orbitals computed at the RHF, B3PW91, and B3LYP levels with the 6-311G(d,p) basis set are illustrated in gas phase. We observed that the HOMO of Rubescin E is located over the furan ring at all three levels, and also over the C-C bonds of the cyclohexane ring and the C-O bonds of the oxirane ring. By contrast, the LUMO is located over the cyclohex-2-enone ring and the C-C and C-O bonds of the tetrahydrofuran ring. We can therefore conclude that an electron can easily be transferred from the furan ring to the tetrahydrofuran ring.
The total density of states (DOS) spectrum of Rubescin E in the gas phase and in chloroform is given in Figure 4 for each level with the 6-311++G(d,p) basis set. These DOS spectra were obtained with the GaussSum 3.0 program [18], which was used to show the contributions of different groups to the molecular orbitals (HOMO and LUMO). From Figure 4, we observe that the HOMO-LUMO energy gap becomes smaller when we move from RHF to B3PW91 and from B3PW91 to B3LYP, for both gas and chloroform phases, with larger values obtained in chloroform.

UV-Vis Spectra Analysis. Time dependent density functional theory (TD-DFT) was used in gas phase at the B3PW91 and B3LYP levels with the 6-311++G(d,p) basis set to determine the first six excited states and investigate the UV-vis absorption spectra of the molecule. The excitation energies (E), wavelengths (λ), and oscillator strengths (f), along with their major contributions, are given in Table 4, and the results are compared to experiment. Two intense electronic transitions were predicted at 4.4934 eV (275.92 nm) and 3.4415 eV (360.27 nm) with oscillator strengths of 0.0043 and 0.0014, respectively, at the B3PW91 level, and at 4.5123 eV (274.77 nm) and 3.4603 eV (358.31 nm) with oscillator strengths of 0.0041 and 0.0014, respectively, at the B3LYP level. We observed from the spectra that the maximum absorption wavelength corresponds to the electronic transition from HOMO to LUMO+1 with 100% contribution, followed by the transition from HOMO to LUMO with 99% contribution, at both levels. The experimental absorption spectrum of the title molecule shows two bands, at 254 nm and 365 nm. The errors between the theoretical and experimental results range from -4.73 nm to 21.92 nm at the B3PW91 level and from -6.69 nm to 20.77 nm at the B3LYP level. These errors are due to the fact that only a single molecule was considered in the simulation. The theoretical UV-vis absorption spectra of Rubescin E in gas phase are shown
in Figure 5.

Dipole Moment (μ), Average Polarizability (α), First Static Hyperpolarizability (β), and Anisotropy of Polarizability (Δα). In this work, the dipole moment μ, average polarizability α, first static hyperpolarizability β, and anisotropy of polarizability Δα of Rubescin E were evaluated in both gas phase and chloroform solution in order to characterize the nonlinearity of Rubescin E. The finite-field approach was used for this purpose. Equations (2), (3), (4), and (5) were used to calculate the polarizability, dipole moment, anisotropy of polarizability, and first static hyperpolarizability, respectively, using the x, y, z components obtained from the Gaussian 09W output. The calculated parameters are presented in Table 5 at the three levels with the 6-311++G(d,p) basis set. The values of the polarizability and first static hyperpolarizability obtained from the Gaussian output are in atomic units. These values were converted into electrostatic units (esu) for comparison purposes (for α: 1 a.u. = 0.1482 × 10−24 esu; for β: 1 a.u. = 8.6393 × 10−33 esu) [19-22]. For a given molecule, when these values (α and β) are greater than those of urea, the molecule is said to have good active NLO properties. We observed from our results that the values of μ, α, and β are higher in solvent than their corresponding values in gas phase. The μ and β of Rubescin E calculated with the 6-311++G(d,p) basis set using the different methods were greater than those of urea. The values calculated using the HF/6-311G(d,p) method (μ = 5.2175 D and β = 1760.3169 × 10−33 esu) were also higher than those of urea (μ = 3.8851 D and β = 372.8 × 10−33 esu) obtained using the same method and basis set [21]. Hence, Rubescin E can be considered to have good active NLO properties, owing to the delocalized electrons on the furan ring.
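The averaging and unit conversion just described can be sketched as follows, assuming the standard Cartesian definitions of the mean polarizability and total first static hyperpolarizability (the component values are illustrative, not from the Gaussian output):

```python
import math

# Conversion factors quoted in the text
AU_TO_ESU_ALPHA = 0.1482e-24   # polarizability, a.u. -> esu
AU_TO_ESU_BETA = 8.6393e-33    # hyperpolarizability, a.u. -> esu

def mean_alpha(axx, ayy, azz):
    """Mean polarizability from the diagonal tensor components (assumed
    form of the paper's equation (2))."""
    return (axx + ayy + azz) / 3.0

def beta_total(bxxx, bxyy, bxzz, byyy, byxx, byzz, bzzz, bzxx, bzyy):
    """Standard total first static hyperpolarizability from Cartesian
    components (assumed form of the paper's equation (5))."""
    bx = bxxx + bxyy + bxzz
    by = byyy + byxx + byzz
    bz = bzzz + bzxx + bzyy
    return math.sqrt(bx ** 2 + by ** 2 + bz ** 2)

# Illustrative component values in a.u.:
alpha_au = mean_alpha(250.0, 240.0, 230.0)
beta_au = beta_total(100.0, 20.0, 10.0, 80.0, 15.0, 5.0, 60.0, 10.0, 5.0)
alpha_esu = alpha_au * AU_TO_ESU_ALPHA
beta_esu = beta_au * AU_TO_ESU_BETA
```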
We observe from Table 6 that the calculated parameters differ slightly from one level to another and also when the medium changes. The value of the electric field is greater in a solution of chloroform than its corresponding value in gas phase, because the polarizability increases in the presence of a solvent. The values of the electric susceptibility, dielectric constant, and refractive index are greater at the B3LYP level than at the RHF level. All the optoelectronic parameters obtained at the B3LYP level are similar to those obtained at the B3PW91 level. None of these parameters has been determined before, either theoretically or experimentally. One of the central goals of this study is to understand the underlying structure-property relationships which might form the basis for a "molecular engineering" approach to electronics, optoelectronics, and photonics. The molar refractivity of the molecule, known to be an important parameter in quantitative structure-property relationship analysis, was calculated for this purpose at the three levels, in both gas and chloroform, using the 6-311++G(d,p) basis set. The Lorenz-Lorentz equation was used for this calculation [26, 27], and the results are listed in Table 6. The high values of the molar refractivity, polarizability, anisotropy of polarizability, and first static hyperpolarizability of the Rubescin E molecule show that it lends itself well to quantitative structure-property relationship analysis and might therefore form the basis for a "molecular engineering" approach to electronics, optoelectronics, and photonics.

NMR Study of Rubescin E.
After the optimization of the Rubescin E molecule, the 1H and 13C chemical shifts were calculated at the RHF, B3LYP, and B3PW91 levels of theory using the 6-311++G(d,p) basis set. In order to compare the calculated 1H and 13C chemical shifts with experimental results, we also calculated the absolute shielding values of 1H and 13C for tetramethylsilane (TMS) using the same methods. The GIAO (gauge invariant atomic orbitals) approach, known to provide satisfactory chemical shifts for different nuclei in large molecules [28], was used for this purpose, together with the equation δx = σTMS(x) − σx, where x is the atom type, to convert the chemical shieldings into chemical shifts. The experimental and calculated 1H chemical shifts, along with their corresponding errors, are listed in Table 7. From our results we observe that all the methods provide results that are very close to experiment, since the errors between the experimental and calculated values are small. In order to compare experimental and theoretical results, a linear correlation of the 1H NMR chemical shifts was established, as shown in Figure 6. The regression lines were plotted using the equations δcalc = 0.98880 δexp − 0.17198, δcalc = 0.97379 δexp + 0.18796, and δcalc = 0.97069 δexp + 0.19387 at the RHF, B3PW91, and B3LYP levels of theory, respectively. The theoretical results obtained with the 6-311++G(d,p) basis set show good correlation with experiment, the calculated R-square values being close to 1 at each level, as shown in Figure 6. The calculated and experimental 13C chemical shifts of the molecule are given in Table 8, and their comparison can be found in Figure 7. The linear regression lines plotted in Figure 7 show that the theoretical results are in good agreement with experiment. This is confirmed by the linear correlation coefficients, calculated here as R-square values, at the RHF, B3PW91, and B3LYP levels using the 6-311++G(d,p) basis set.
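A sketch of the referencing and regression procedure just described; the TMS shielding and the data points below are hypothetical, for illustration only:

```python
import numpy as np

def chemical_shifts(sigma_calc, sigma_tms):
    """GIAO shieldings -> chemical shifts via delta = sigma(TMS) - sigma,
    the referencing used in the text."""
    return sigma_tms - np.asarray(sigma_calc)

# Hypothetical 1H values in ppm (not the paper's data):
sigma_tms_h = 31.8
delta_calc = chemical_shifts([30.6, 29.8, 26.2, 24.9], sigma_tms_h)
delta_exp = np.array([1.15, 1.95, 5.70, 6.85])

# Linear regression delta_calc = a * delta_exp + b, as plotted in Figure 6,
# and the correlation coefficient (R-square would be r ** 2).
a, b = np.polyfit(delta_exp, delta_calc, 1)
r = np.corrcoef(delta_exp, delta_calc)[0, 1]
```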
The regression lines plotted for each level using the general equation y = ax + b, where a and b are given in Figure 7, show that the calculated 13C chemical shifts correlate very well with experiment. The linear correlation coefficients, calculated as R-square values in Figure 7, also confirm this.

Vibrational Frequencies Analysis. The vibrational frequencies of the molecule were computed using the B3LYP/6-311G(d,p) method in both gas phase and chloroform. The experimental IR vibrational frequencies obtained for the two carbonyl moieties present in the structure, along with the calculated scaled and unscaled vibrational frequencies and the IR and Raman frequencies with their approximate descriptions, are given in Table 9. The remaining vibrational parameters of the Rubescin E molecule not described in Table 9 can be obtained from Supplementary Material S2. The scale factor was determined as the mean value of the factors that correctly match the C=O stretches to the given experimental values; the obtained scale factor was 0.9706. No imaginary frequencies were found, showing that the structure of the Rubescin E molecule is stable in both gas and solvent. Figure 8 shows the scaled IR intensities and Raman scattering activities. The C=O double bond gives rise to a very intense absorption band in the IR spectrum. The position and intensity of this band range from 1870 cm−1 to 1540 cm−1 depending on the physical state, electronic and mass effects of neighboring substituents, intra- and intermolecular interactions, and conjugation [29]. The C=O absorption bands were observed experimentally at 1720 cm−1 and 1664 cm−1 [1]. In this study, the C=O vibrational modes were found at 1726.20 cm−1 and 1690.57 cm−1 in gas phase and at 1701.01 cm−1 and 1667.59 cm−1 in chloroform. There is good agreement between these vibrational modes and the experimental values.
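The scale-factor determination can be sketched as follows; the unscaled C=O frequencies below are hypothetical, since the paper reports only the resulting factor of 0.9706:

```python
def scale_factor(calc_freqs, exp_freqs):
    """Mean ratio of experimental to calculated harmonic frequencies for
    the matched C=O stretches (the averaging scheme described in the text)."""
    ratios = [e / c for c, e in zip(calc_freqs, exp_freqs)]
    return sum(ratios) / len(ratios)

# Hypothetical unscaled B3LYP C=O frequencies (cm^-1) chosen so the two
# experimental bands (1720 and 1664 cm^-1) give a factor near 0.97:
sf = scale_factor([1772.0, 1714.0], [1720.0, 1664.0])
scaled = [1772.0 * sf, 1714.0 * sf]  # scaled frequencies for reporting
```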
Conclusion
In this study, the geometry optimization of Rubescin E has been carried out using the ab initio HF and density functional theory DFT (B3LYP and B3PW91) methods in both gas phase and chloroform solution with the 6-311++G(d,p) basis set. The optimized parameters were compared to those of some existing groups of compounds present in our molecule, since none of this had been done before for the title molecule, and good agreement was found. In order to confirm the geometry of the molecule, the 3J(H-H) proton-proton coupling constant was evaluated, and the results were similar to experiment. The calculated results show that Rubescin E possesses a HOMO-LUMO energy gap greater than 4 eV, indicating a hard molecule that can be used as an insulator in many electronic devices. From the HOMO-LUMO analysis we can also conclude that an electron can easily be transferred from the furan to the tetrahydrofuran ring. The charge analysis performed using the Mulliken population, CHelpG, and NBO methods showed positive charges for all hydrogen atoms; it was observed that the most positively (respectively, negatively) charged atoms were directly bonded to the most negatively (respectively, positively) charged atoms, and that all the carbon atoms bonded to hydrogen were negatively charged. The calculated first static hyperpolarizability was found to be more than four times greater than the literature value for urea, leading us to conclude that Rubescin E has very good NLO properties. The calculated optoelectronic properties show large values of the refractive index, dielectric constant, and electric susceptibility, leading us to conclude that Rubescin E has strong optical and photonic potential. Good agreement was found between the calculated and experimental UV spectra. The theoretical proton (1H) and carbon (13C) chemical shift values (with respect to TMS) were reported and compared with experimental data, showing a
very good agreement for both 1H and 13C NMR. The calculated vibrational frequencies, obtained using the B3LYP/6-311G(d,p) method in both gas and chloroform solution, were all positive, leading us to conclude that Rubescin E is stable. Approximate descriptions of the vibrational assignments were given in order to bring out the different motions of atoms in the title molecule.

Figure 2: Charge distribution on Rubescin E calculated at the RHF, B3PW91, and B3LYP levels in both gas phase and chloroform solution with the 6-311++G(d,p) basis set.
Figure 3: Molecular orbitals and the HOMO and LUMO energies of Rubescin E in gas phase.
Figure 4: Total density of states (DOS) spectrum of Rubescin E at the RHF, B3PW91, and B3LYP levels in both gas and chloroform phases with the 6-311++G(d,p) basis set.
Figure 5: Theoretical absorption spectra of Rubescin E at the B3PW91 and B3LYP levels in gas phase with the 6-311++G(d,p) basis set.
Figure 6: Comparison of experimental and theoretical 1H chemical shifts of Rubescin E calculated at the RHF, B3PW91, and B3LYP levels using the 6-311++G(d,p) basis set in chloroform.
Table 1: Optimized geometric parameters in gas phase and in chloroform solution of Rubescin E at the RHF, B3LYP, and B3PW91 levels with the 6-311++G(d,p) basis set.
Table 2: Experimental and calculated 3J(H-H) proton-proton coupling constants of Rubescin E in gas phase and in chloroform solution.
Table 3: Global reactivity descriptors of Rubescin E at the RHF, B3LYP, and B3PW91 levels in gas phase and in chloroform solution using the 6-311++G(d,p) basis set.
Table 4: Theoretical absorption wavelengths (λ), excitation energies (E), and oscillator strengths (f) of Rubescin E at the B3PW91 and B3LYP levels in gas phase with the 6-311++G(d,p) basis set.
Table 7: Experimental and calculated 1H NMR chemical shifts (ppm) of Rubescin E at the RHF, B3LYP, and B3PW91 levels in chloroform solution using the 6-311++G(d,p) basis set.
Neutron single-particle strength in silicon isotopes: Constraining the driving forces of shell evolution

Shell evolution is studied in the neutron-rich silicon isotopes 36,38,40Si using neutron single-particle strengths deduced from one-neutron knockout reactions. Configurations involving neutron excitations across the N = 20 and N = 28 shell gaps are quantified experimentally in these rare isotopes. Comparisons with shell model calculations show that the tensor force, understood to drive the collective behavior in 42Si with N = 28, is already important in determining the structure of 40Si with N = 26. New data relating to cross-shell excitations provide the first quantitative support for repulsive contributions to the cross-shell T = 1 interaction arising from three-nucleon forces.

The atomic nucleus is a fermionic many-body quantum system composed of strongly-interacting protons and neutrons. Large stabilizing energy gaps, separating clusters of single-particle states, provide the cornerstone for the nuclear shell model, one of the most powerful tools available for describing the structure of atomic nuclei. In the simplest version of the shell model, empirical shell gaps at the magic nucleon numbers 2, 8, 20, 28, 50, 82, and 126 are reproduced when assuming that the nucleons experience, predominantly, a mean-field potential with an attractive one-body spin-orbit term. In rare isotopes, with imbalanced proton and neutron numbers, significant modifications have been observed. Here, new shell gaps develop and the conventional gaps at the magic numbers can collapse. Understanding this observed evolution is key to a comprehensive description of atomic nuclei across the nuclear chart. Detailed studies of the evolution of shell structure with proton number (Z) or neutron number (N), e.g.
[1], probe the effects of particular components of the complex interactions between nucleons, such as the spin-isospin [2] and tensor [3] two-body terms and three-body force terms [4, 5]. The need to include such terms in the nuclear interaction has been demonstrated by their robust effects, which become amplified at large isospin [3, 5] and without which features such as driplines and shell structure may not be reproduced. Clearly, a full treatment of the nuclear force from its underlying QCD degrees of freedom is very challenging, and experimental data are essential in helping to identify the most important degrees of freedom responsible for driving the evolution of nuclear properties. Here, we present data for the silicon (Z = 14) isotopic chain, a region of the nuclear chart where rapid shell evolution is at play. 34Si is known to exhibit closed-shell behavior, while 42Si shows no indication of an N = 28 shell gap [6]. Hitherto, observations on the neutron-rich silicon isotopes, dominated by measurements of collective observables, have been reproduced by large-scale shell model calculations using phenomenological effective interactions [6-9]. To assess the theoretical description of the evolving shell structure, it is also critical to investigate these nuclei using single-particle observables, such as the energies and single-particle (spectroscopic) strengths of states involving the active orbitals at shell gaps. This Rapid Communication reports a first experimental investigation of observables that reflect single-neutron degrees of freedom in the 36,38,40Si isotopes. Extraction of the presented cross sections from the data, collected in the measurement reported in Ref. [10], required the development of novel analysis strategies. The results go beyond those of Ref. [10] and are interpreted here within a common theoretical framework.
Exclusive one-neutron knockout cross sections, measured using γ-ray tagged neutron removal reactions from 36,38,40 Si projectiles, are used to identify and quantify configurations that involve neutron excitations across the N = 20 and N = 28 shell gaps. Specifically, the partial cross sections to the lowest-lying 7/2 − and 3/2 − states, involving the diminishing N = 28 gap, and 1/2 + and 3/2 + states, involving the N = 20 shell gap, are measured and compared to calculations using shell-model spectroscopic strengths and eikonal reaction theory. The results (i) track the evolution of the neutron f 7/2 and p 3/2 orbitals at the N = 28 shell gap, and (ii) quantify the little-explored neutron excitations, from the d 3/2 and s 1/2 sd-shell orbitals, across the N = 20 gap. The experiment was performed at the Coupled Cyclotron Facility of the National Superconducting Cyclotron Laboratory at Michigan State University. Secondary beams of 36,38,40 Si, produced by fast fragmentation of a 48 Ca primary beam, impinged on a beryllium target with energies of 100, 95 and 85 MeV/u, respectively. The one-neutron knockout residues were detected and identified on an event-by-event basis. Prompt γ rays, emitted in-flight from de-excitation of the knockout residues, were detected with the GRETINA array [11] surrounding the target position, and were Doppler-corrected event-by-event. The level schemes of the knockout residues were constructed based on γγ coincidences, energy sums, and intensity balances. These are summarized in Fig. 1. Spin-parity assignments were made with the aid of the parallel momentum distributions of the residues in comparison with theoretical distributions calculated in an eikonal model according to the formalism of Ref. [12]. Full details of the experiment, data analysis and the spin-parity assignments can be found in Ref. [10]. The knockout cross sections to negative parity states, i.e.
removal from the neutron f 7/2 and p 3/2 orbitals in 36,38,40 Si, map their spectroscopic strengths. The experimental and calculated cross sections are listed in Table I. Details of the shell model calculations used can be found in Ref. [10]. Determining the partial cross sections is challenging in some cases. For example, population of the 35 Si(7/2 − 1 ) ground state is hindered by the presence of a 3/2 + isomer, expected to be strongly populated but which cannot be tagged with prompt in-beam γ spectroscopy. We use instead the 35 Si residue momentum distribution to extract the population fraction.

TABLE I. Experimental (σexp) and calculated (σ th ) one-neutron knockout cross sections to the lowest 7/2 − and 3/2 − states in the mass Ares residues. The σ th use the shell-model spectroscopic factors C 2 S and their center-of-mass correction, (Aproj/Ares) 3 , and the calculated eikonal model single-particle cross sections σsp. All cross sections are in millibarns.

Fig. 2 shows the 35 Si parallel momentum distribution after the subtraction of all events that decay by prompt γ emission. Overlaid is a linear combination of the theoretical distributions for neutron removal from the f 7/2 and d 3/2 orbits together with the resulting χ 2 fit minimization, giving an f 7/2 fraction of 0.45(10). The ground state cross section is estimated from this fraction of the knockout reaction events with no prompt γ decay. The 37 Si(3/2 − 1 ) and 39 Si(7/2 − 1 ) states are nanosecond isomers. As a result, their depopulating transitions have broad, asymmetric peak shapes due to the larger uncertainty in the position and velocity of the decaying fragment and the corresponding degradation of the Doppler reconstruction. This lifetime effect is incorporated into the GEANT4 [13] simulation and a best-fit lifetime is obtained with a maximum likelihood method. An example is shown in Fig. 3.
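The two-component momentum-distribution fit described above can be sketched as a one-parameter χ² minimization over the f 7/2 admixture. Everything numerical below (the Gaussian stand-ins for the eikonal-model distributions, the count level, the grid) is an illustrative assumption, not the measured data; only the procedure follows the text.

```python
import numpy as np

def shape(p, sigma):
    """Normalized stand-in for a theoretical momentum distribution."""
    g = np.exp(-0.5 * (p / sigma) ** 2)
    return g / g.sum()

# Momentum grid (GeV/c) and two hypothetical theoretical shapes:
# l = 3 (f7/2) removal gives a broader distribution than l = 2 (d3/2).
p = np.linspace(-0.4, 0.4, 81)
shape_f72 = shape(p, 0.12)
shape_d32 = shape(p, 0.06)

# Pseudo-data: a known 45% f7/2 admixture with Poisson counting noise.
rng = np.random.default_rng(0)
true_frac = 0.45
data = rng.poisson(5000 * (true_frac * shape_f72
                           + (1 - true_frac) * shape_d32)).astype(float)
errors = np.sqrt(np.maximum(data, 1.0))

# One-parameter chi-square scan over the f7/2 fraction of the mixture.
fracs = np.linspace(0.0, 1.0, 201)
chi2 = [(((data.sum() * (f * shape_f72 + (1 - f) * shape_d32) - data)
          / errors) ** 2).sum() for f in fracs]
best = fracs[int(np.argmin(chi2))]
print(f"best-fit f7/2 fraction: {best:.2f}")
```

With realistic statistics the scan recovers the generating fraction to within the quoted ±0.10 kind of precision; in the analysis itself the two component shapes come from the eikonal reaction model rather than Gaussians.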
The proximity of these peaks to the γ-ray detection threshold results in a dependence of the extracted peak intensity on the assumed lifetime. This dependence is shown in the lower panel of Fig. 3(a), and makes the major contribution to the uncertainty in these peak intensities. The effect of the lifetime on the peak shape depends on the polar angle of the emitted γ ray. We can confirm that the simulation reproduces this dependence by dividing the array into three rings centered near 50, 65 and 90 degrees (labeled front, middle, and backward in Fig. 3), and comparing the fit in each ring. This comparison, in Fig. 3(b), shows satisfactory agreement. The large uncertainty for the 39 Si(3/2 − 1 ) state in Table I is due to several observed transitions which were not placed in the level scheme, introducing ambiguity in the subtraction procedure described above. The quoted uncertainty includes the range of possible level schemes which are consistent with the data. Further, since the second 3/2 − state was not identified, the value shown provides only a lower limit on the bound p 3/2 strength. The stated 37,39 Si(7/2 − ) cross sections assume that population of the predicted 5/2 − 1 states is small compared to other sources of uncertainty (the shell model strengths predict cross sections of order 1 mb). The measured and theoretical cross sections (for the SDPF-MU [8] and SDPF-U [7] shell model effective interactions) are shown in Fig. 4. In the region of 42 Si, the tensor component of the interaction has been proposed as an important driving force for shell evolution [3,6], and so we investigate this with a third set of calculations, denoted SDPF-MU-NT, obtained by removing the tensor part of the cross-shell sd-f p interaction of SDPF-MU. All theoretical cross sections are scaled by an empirical quenching factor R(∆S) obtained from a fit to the knockout reaction systematics [10,14].
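The construction of the theoretical cross sections used in Tables I and II reduces to a short calculation. The sketch below follows the stated prescription (spectroscopic factor C 2 S, center-of-mass correction (Aproj/Ares)^N with N = 3 for fp-shell removal, eikonal single-particle cross section σsp, and an empirical quenching factor R); all numerical inputs are hypothetical placeholders, not values from this measurement.

```python
def sigma_th(c2s, sigma_sp_mb, a_proj, n_osc=3, r_quench=0.9):
    """Theoretical partial knockout cross section in mb:
    sigma_th = R * (A_proj / A_res)**N * C2S * sigma_sp,
    with A_res = A_proj - 1 for one-neutron removal and N the
    oscillator quantum number of the removed orbital (3 for fp shell,
    2 for sd shell). All defaults here are illustrative assumptions."""
    a_res = a_proj - 1
    return r_quench * (a_proj / a_res) ** n_osc * c2s * sigma_sp_mb

# Example: hypothetical f7/2 removal from a mass-40 projectile.
print(f"{sigma_th(c2s=3.5, sigma_sp_mb=14.0, a_proj=40):.1f} mb")  # -> 47.6 mb
```

For sd-shell removal (Table II) one would pass n_osc=2; in the analysis itself R(∆S) is not a constant but is taken from the knockout systematics of Refs. [10,14].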
The agreement between the measured 7/2 − state cross sections (shown in blue) and both the SDPF-MU and SDPF-U calculations is excellent. We see that the effect of the tensor force, as discussed in [3], becomes important [15] already around 40 Si. In contrast, the 3/2 − state cross sections (shown in red) are markedly underpredicted. This finding is consistent with previous measurements using one-neutron knockout, from 30,32 Mg [16] and 33 Mg [17], as well as a (t, p) transfer measurement populating states in 32 Mg [18]. In each of these cases, an excess of p 3/2 strength was seen, relative to shell model predictions, while the f 7/2 strength was generally consistent with the shell model. The fact that this discrepancy is observed for different reaction mechanisms, and only for a particular orbit, suggests that this is a structure effect and not related to any systematic defect of the reaction theory. As can be seen from the dotted line in Fig. 4, the tensor force does not appear to have much effect in 36 Si and 38 Si, and so the p 3/2 discrepancy likely has origins elsewhere.

TABLE II. Experimental (σexp) and calculated (σ th ) one-neutron knockout cross sections to the lowest 3/2 + and 1/2 + states in the mass Ares residues. The σ th use the shell-model spectroscopic factors C 2 S and their center-of-mass correction, (Aproj/Ares) 2 , and the calculated eikonal model single-particle cross sections σsp. All cross sections are in millibarns.

To clarify the N = 20 shell closure we also consider the removal of neutrons from the d 3/2 and s 1/2 sd-shell orbitals, populating positive-parity final states. The cross sections for population of bound 3/2 + and 1/2 + states are listed in Table II. The large uncertainty for the 35 Si(3/2 + 1 ) state yield is due to the same isomer effect as was discussed for the 7/2 − 1 state. The measured and theoretical (SDPF-MU and SDPF-U) cross sections are compared in Table II.
Both model calculations, which include 1p − 1h excitations from the sd-shell in the wave functions of the residual nuclei, over-predict the strength of transitions from these orbits. It is very likely that the theoretical over-prediction of sd-shell strength and the aforementioned under-prediction of f p-shell strength are related, reflecting unaccounted-for excitations across the N = 20 shell gap in the ground states of the projectiles. Indeed, the present calculations for the projectile ground states were performed in a 0ℏω model space in which the neutron sd-shell orbits are fixed and fully occupied. So, it is evident that the assumed occupation of sd-shell orbits is too high. In the Monte Carlo shell model calculations of Ref. [19], which allowed an arbitrary number of neutron particle-hole excitations from the sd-shell into the lower f p shell, the results, in 36,38 Si, were an average excess of approximately 0.3−0.4 neutrons compared to normal filling. This reduced sd strength (and additional f p strength) would bring the shell-model predictions into better agreement with the present data, with the exception of the large 3/2 + strength of SDPF-MU. Finally, the newly-measured energies of the 3/2 + and 1/2 + hole states provide guidance for shell-model effective interactions that include excitations across the N = 20 shell gap. Figure 5 shows the experimental energies of the 3/2 + 1 (1/2 + 1 ) states relative to the 7/2 − 1 states, indicative of the f 7/2 to d 3/2 (s 1/2 ) shell gap. For reference, we also show the shell-model spectroscopic factors for populating these states by one-neutron removal. The experimental data indicate that both gaps shrink as neutrons are added from N = 19 to 25, while SDPF-MU predicts a flat trend and SDPF-U predicts an increase of these gaps. These qualitatively different predictions can largely be attributed to a difference in the cross-shell neutron-neutron (T = 1) interaction. Figure 6 shows selected monopole (i.e.
angle-averaged) terms of the SDPF-U and SDPF-MU interactions. While both interactions have similar sd and f p monopoles and are successful in reproducing the spectroscopy of the region within the 0ℏω model space, the more-attractive SDPF-U cross-shell monopoles over-bind the neutron sd orbits as neutrons are added to the f p shell, leading to the observed trend. This discrepancy highlights a key difference between the two interactions. In SDPF-U, due to insufficient experimental data, the cross-shell part of the interaction was left as essentially the two-body G matrix. On the other hand, the cross-shell component of SDPF-MU was generated from the schematic potential V MU [20], which allowed, by incorporating information from data closer to stability, the approximate inclusion of the repulsive contribution of three-body forces to the effective T = 1 two-body interaction. This same repulsive T = 1 effect has been shown to be a robust consequence of the Fujita-Miyazawa process, which is crucial in reproducing the oxygen dripline [5]. We note that a more recent version of SDPF-U [9], developed to allow neutron excitations across the N = 20 gap, in fact produces significant improvement over the original SDPF-U energies [21]. In conclusion, we have exploited one-neutron knockout reactions to probe the evolution of the f 7/2 and p 3/2 spectroscopic strength in neutron-rich silicon isotopes. State-of-the-art shell-model interactions describe the trends of the data but underestimate the role of the p 3/2 orbital. We confirm that the tensor force is necessary to describe the evolution of the f 7/2 strength, and show that it is already important at N = 26. The observed excess of p 3/2 strength relative to shell-model predictions indicates that the N = 28 shell gap may be reduced even more than present calculations suggest.
Neutron cross-shell excitations across the N = 20 shell gap were identified and quantified for the first time from the observation of positive-parity final states. The shell-model interactions considered (SDPF-U and SDPF-MU) over-predict the measured d 3/2 and s 1/2 neutron removal yields, pointing to the deficiency of the applied model space truncations. We have also identified the energies of neutron-hole states which depend strongly on previously unconstrained neutron-neutron monopole interactions. A comparison of shell-model predictions indicates the importance of three-body forces in the evolution of structure in this region. We thank the staff of the Coupled Cyclotron Facility for the delivery of high-quality beams. We also thank A. Poves for helpful discussions. This material is based upon work supported by the Department of Energy National Nuclear Security Administration under Award Number DE-NA0000979. This work was also supported by the National Science
Cell Proliferation Indices in Regenerating Alitta virens (Annelida, Errantia)

In recent years, interest in the possible molecular regulators of cell proliferation and differentiation in a wide range of regeneration models has grown significantly, but the cell kinetics of this process remain largely a mystery. Here we try to elucidate the cellular aspects of regeneration by EdU incorporation in intact and posteriorly amputated annelid Alitta virens using quantitative analysis. We found that the main mechanism of blastema formation in A. virens is local dedifferentiation; mitotically active cells of intact segments do not significantly contribute to the blastemal cellular sources. Amputation-induced proliferation occurred predominantly within the epidermal and intestinal epithelium, as well as wound-adjacent muscle fibers, where clusters of cells at the same stage of the cell cycle were found. The resulting regenerative bud had zones of high proliferative activity and consisted of a heterogeneous population of cells that differed in their anterior–posterior positions and in their cell cycle parameters. The data presented allowed for the quantification of cell proliferation in the context of annelid regeneration for the first time. Regenerative cells showed an unprecedentedly high cycle rate and an exceptionally large growth fraction, making this regeneration model especially valuable for studying coordinated cell cycle entry in vivo in response to injury.

Introduction

The regeneration of organs and body parts is a remarkable phenomenon involving changes in cell fate and proliferative status. The study of regeneration on organismal models has given a comprehensive understanding of the complexity of this process and has great potential for determining the fundamental mechanisms of cellular plasticity.
Epimorphic regeneration includes several stages, which are wound closure, formation of the wound epithelium, induction of the blastema, and its growth, patterning, and differentiation, followed by functional restoration of the lost structure [1][2][3]. As one of the crucial elements of this process, the regeneration blastema consists of undifferentiated cells undergoing active mitotic divisions, thus providing cellular material for subsequent stages. Even in representatives of the same phylum, for example, in annelids, the cellular sources of regeneration can vary significantly. Depending on the organism species, the source of blastemal cells may be stem cells migrating from distant segments [4][5][6][7] or dedifferentiated cells from the wound-adjacent tissues [1,[8][9][10]. Long-distance cell migrations to the wound site in invertebrates are described mostly for planarians and oligochaete annelids. These stem cell populations are referred to as neoblasts. In planarians, neoblasts are distributed throughout the body's parenchyma and comprise almost 20% of the total number of cells. It is the only proliferating cell population that produces lost body parts after wounding [11]. Neoblast specialization takes place during different phases of the cell cycle and is most likely a labile and transient state [12]. After wounding, neoblasts initiate a missing-tissue response, interpreting positional information. This information, coming from muscles expressing position-control genes, is

Cells 2023, 12, 1354

Animals

Spawning epitoke individuals of A. virens were caught in the summer near the Marine Biological Station of SPbSU in the White Sea. A laboratory culture of embryos was obtained by artificial fertilization [47]. The animals grew for 2-3 months in small aquariums with natural or artificial seawater until they reached 15-20 segments in length.
The posterior thirds of the juveniles' bodies were amputated, and then the animals were left to regenerate for various time periods at +18 °C. At the preferred stages (1-6 days post-amputation, dpa), the specimens were anesthetized with 7.5% MgCl 2 mixed with artificial seawater (1:1) and fixed in 4% paraformaldehyde on 1.75× PBS with 0.1% Tween-20 overnight at +4 °C. The samples were washed two times and stored in 100% MetOH at −20 °C.

EdU Incorporation and Detection

We performed various experiments on 5-ethynyl-2′-deoxyuridine (EdU, a thymidine analog) labeling (Figure 1C) and aimed to estimate some aspects of proliferation and the cell cycle (length of the S-phase (Ts), cell cycle length (Tc), and growth fraction (GF)) in A. virens. We used 1 mL of 5 µM EdU diluted in artificial seawater for the incubation of one worm for 15 min (experiments (1) "pulse", and (3.1), (3.3) "pulse-wait"), 1 h (experiments (3.2), (3.4) "pulse-wait") or up to 48 h (experiments (2.1), (2.2) "cumulative labeling"). In the latter case, the EdU solution was changed every day. Experiment (3.4) was performed on intact non-amputated worms that were incubated in EdU and washed and fixed 1, 2, and 3 days after labeling. All other experiments were carried out on regenerating worms according to the scheme depicted in Figure 1C. EdU labels cells that are in the S-phase at the time of incubation. Upon mitotic division, the label transfers to the daughter cells, allowing for the visualization of the fate and location of the descendant cells as well. After labeling in dark conditions, the specimens were either fixed as described above or washed 5 times in 5 mL of seawater per specimen and kept in the dark until fixation. To detect EdU labeling, we used the "click" reaction [48,49]. The "click" reaction involves fluorescent azide-alkyne cycloaddition catalyzed by Cu(I). Before the "click" reaction, we rinsed samples in 0.1 M TRIS buffer (pH = 8.5).
The reaction mix included 100 mM TRIS (pH = 8.5), 4 mM CuSO 4 , 2 µM sulfo-cyanin-5-azide, 50 mM ascorbic acid, and deionized water. Incubation for 45 min in the reaction mix was followed by washes in TRIS buffer (pH = 7.4) and nuclear DNA staining with DAPI (1 µg/mL). The samples were mounted in 90% glycerol for visualization. The sample size varied from 4 specimens per experiment up to 16 specimens (see Supplementary File S1, the "sample size" tab, for details on each experiment).

Visualization and Cell Counting

Confocal images were obtained using the Leica TCS SPE confocal microscope. We used a 40× lens with a 1.5 µm step between planes. For the statistical analysis, we evaluated the first 45 planes of the regenerative bud in each specimen, which made up almost half of its depth in most cases, and sometimes even more. All specimens were scanned from the ventral side. Optical sections were combined in stacks by ImageJ. Schemes were made in Adobe Illustrator and Inkscape. For experiment types 1 and 2, we calculated the number of labeled cells within the regenerative bud in confocal Z-stacks using Bitplane Imaris 7.5 software. We manually specified the region of interest (regenerative bud) and separated it using the "Surfaces" tool. Then we estimated the relative size of several nuclei using the "Slice" function. An average value of the nuclear diameter was specified using the "Spots" tool. After automated quantification of the objects, we manually adjusted the threshold level so that all nuclei were counted, and the signal/false positive ratio was adequate. The results of the first automatic calculations were verified by manual counting of the nuclei in ImageJ using the Cell Counter plugin. After registration of the EdU+ and DAPI+ nuclei numbers, we calculated the labeling index (LI), which is the ratio of cells in the S-phase to the total number of cells multiplied by 100. Statistical analysis was performed in MS Excel, Past, and R.
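The labeling-index bookkeeping described above reduces to simple per-specimen arithmetic. A minimal sketch, with invented nucleus counts rather than the study's data, computes the LI and its standard error in the same mean ± SEM form reported in the figures:

```python
from statistics import mean, stdev
from math import sqrt

def labeling_index(edu_positive, dapi_positive):
    """LI = (EdU+ nuclei / all DAPI+ nuclei) * 100, for one specimen."""
    return 100.0 * edu_positive / dapi_positive

# Hypothetical (EdU+, DAPI+) counts for four specimens at one time point.
counts = [(52, 230), (61, 251), (48, 240), (55, 222)]
lis = [labeling_index(e, d) for e, d in counts]
li_mean = mean(lis)
li_sem = stdev(lis) / sqrt(len(lis))   # standard error of the mean
print(f"LI = {li_mean:.1f} +/- {li_sem:.1f} %")
```

The same per-specimen LI values are what would feed the Kruskal-Wallis and Mann-Whitney comparisons mentioned below.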
For each quantified sample, we calculated the mean values and standard errors (Supplementary File S1). The obtained values of LI and the total number of registered cells were examined using one-way Kruskal-Wallis tests and Mann-Whitney pairwise post-hoc tests.

Cell Cycle Parameters

The cell cycle parameters in the regenerating tissue were assessed by the cumulative labeling method in experiments (2.1) and (2.2) at the stages of 1 to 3 dpa and 2 to 4 dpa, respectively. We incubated the regenerating worms in EdU solution for up to 48 h and fixed them after 15 min, 5, 10, 24, and 48 h of exposure. After EdU detection, we determined and plotted the LIs of each specimen at a certain time point, and fitted a cumulative curve to the obtained values. To evaluate the cell cycle parameters based on the cumulative curve that reached a plateau, we used a method described by Nowakowski and colleagues [50]. The rising part of the cumulative curve is described by the equation y = a + bx, where "a" is the intercept and "b" is the slope; before the break point the curve follows this linear regression, and at the break point it reaches a plateau. After visualizing the cumulative curves in R by fitting them with the least squares method (with packages "nlraa" and "minpack.lm"), we estimated the approximate cell cycle parameters, such as the growth fraction (GF), the length of the S-phase (Ts), and the length of the cell cycle (Tc). The growth fraction was found from the plateau value of the curve, reached when all dividing cells carried an EdU label and the amount did not increase any more. The break point of the curve corresponded to the time Tc − Ts, so that by extrapolating the regression line to the y-axis, we found the Ts. Knowing the Ts, we could estimate the Tc by adding the Ts to the break point value. All of these parameters are described by equations from the mentioned model [50]: f(t) = GF × (t + Ts)/Tc, for t ≤ Tc − Ts, and f(t) = GF, for t ≥ Tc − Ts.
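The Nowakowski model quoted above can be exercised with a minimal fitting sketch. The sampling times mirror the experimental design (15 min to 48 h), but the curve here is generated from assumed parameters and recovered by a coarse grid search rather than the "nlraa"/"minpack.lm" least-squares fit used in the study:

```python
import numpy as np

def li_model(t, gf, ts, tc):
    """Nowakowski cumulative-labeling model [50]:
    f(t) = GF * (t + Ts) / Tc for t <= Tc - Ts, then the plateau GF."""
    return np.where(t <= tc - ts, gf * (t + ts) / tc, gf)

# Sampling times (h) matching the design; parameters are the (2.1)
# estimates, used here only to generate a noise-free synthetic curve.
t = np.array([0.25, 5.0, 10.0, 24.0, 48.0])
gf_true, ts_true, tc_true = 0.86, 1.3, 33.2
li = li_model(t, gf_true, ts_true, tc_true)

# Coarse grid search recovering GF, Ts, Tc from the sampled curve.
grid = [(gf, ts, tc)
        for gf in np.arange(0.50, 1.00, 0.02)
        for ts in np.arange(0.5, 8.0, 0.2)
        for tc in np.arange(20.0, 40.0, 0.4)]
gf, ts, tc = min(grid, key=lambda p: ((li_model(t, *p) - li) ** 2).sum())
print(f"GF={gf:.2f}  Ts={ts:.1f} h  Tc={tc:.1f} h")
```

Note that at t = 0 the model gives f(0) = GF × Ts/Tc, which is why the y-intercept of the rising line, together with the break point at Tc − Ts, fixes Ts and Tc as described in the text.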
Pulse Labeling

In the type (1) experiments, we identified the zones of proliferative activity in the regenerating juveniles by short EdU labeling (Figure 1D). We also estimated the proportion of simultaneously proliferating cells (the labeling index, LI) in the regenerative bud. At the 1 dpa stage, only sparse individual labeled nuclei were present in the wound epithelium (Figure 2A,A',A i ), and the LI at this stage was unsurprisingly low, 1.8 ± 0.7% (Figure 2iv). EdU-positive nuclei were also found in the ventral nerve cord and in longitudinal muscles; however, the location of these nuclei lacked any anterior-posterior gradient, indicating the absence of an obvious wounding response at this stage. Starting from 2 dpa, and over subsequent regeneration stages, EdU incorporation within the wounded segment became more prominent near the amputation site compared to the anterior part of the same segment. Gut cells there had higher EdU incorporation rates; however, some coelomic and epidermal cells were also EdU-positive. By the 2 dpa stage, most of the nuclei in the regenerative bud were in the S-phase, which made the labeling more extensive (Figure 2B,B'). EdU-positive nuclei were predominantly found in the lateral domains of the epidermis and blastemal cells (Figure 2B i ), which differ from wound epithelium cells by their elongated nucleus shape and size. At this stage, the LI drastically increased and reached 22.8 ± 3.6% (Figure 2iv). By the 3-4 dpa stage, the regenerative bud becomes more pronounced, the pygidium forms posteriorly, and resegmentation events take place [46]. EdU incorporation is more prominent at the segment formation area compared to the pygidium region and cirri, where proliferation is less active (Figure 2C,D). In the regenerative bud, EdU-positive cells were found in the epithelium, newly formed gut, and coelomic sacs, which, by this stage, were reemerging. At 4 dpa, the LI reached a maximum value of 37.3 ± 1.35%.
The number of registered cells by the 4 dpa stage also increased four-fold compared to the 2 dpa stage and reached 1408 ± 124 cells (Figure 2iv). By the 6 dpa stage, the most active proliferation was also observed in the developing segmental tissues; however, the more mature segment in the anterior part of the bud seemed to be less proliferatively active (Figure 2E', red bracket). The LI at this stage decreased to 22.5 ± 0.6%; however, the number of registered cells was the highest (5970 ± 463) (Figure 2iv). Comparison of the proliferation values at successive time points demonstrated that statistically significant increases in the LI (at 2, 4, and 6 dpa) were always followed by a significant increase in the cell number at the next sampled stage (at 2, 3, and 6 dpa). This confirms EdU labeling as a marker of mitotic activity in our model.

Cell Kinetics in Cumulative Labeling

In the type (2) experiments, with constant exposure to EdU, we observed a continuous increase in the LI until the cumulative curve reached a plateau (Figure 3). In experiment (2.1), at the 1 dpa stage, individual labeled nuclei were observed in the wound epithelium. After 5 h of EdU incorporation at the 1 dpa + 5 h stage, proliferating cells were noticed in the epithelium, ventral nerve cord, gut, and in other internal mesodermal tissues in the wounded segment. As for the wounding site, labeling was predominantly noticed in the epithelial cells and in the first blastemal cells (Figure 3A', arrow). Over the next 5 h (1 dpa + 10 h), the labeling was enriched and present in the same domains as previously described (Figure 3B). After 24 h of EdU incorporation, most of the proliferating cells were localized in the epithelium of the regenerative bud (Figure 3C). At this point, the labeling index increased by almost three-fold (from 26.2 ± 5.8% to 64.9 ± 2.8%; Figure 3iv, red circles).
In the old segment, most of the EdU-labeled nuclei were localized near the wounding site, and most of them were in the superficial epithelium and ventral nerve cord. By 48 h of incubation, the number of EdU-positive nuclei in the regenerative bud reached 85.7 ± 2.5%, indicating an unprecedentedly high GF (Figure 3iv). By this time (1 dpa + 48 h), almost all epidermal cells of the regenerative bud were labeled, which was not the case for the blastemal cells. In the neighboring old segments, a proliferation marker was present in the same manner as after 24 h of incubation, although its level was obviously increased (Figure 3D). In experiment (2.2), at 2 dpa, in the specimens after 10 h of EdU incorporation, the labeling in the blastemal cells was quite prominent (Figure 3E) and its overall localization was similar to the 15 min of pulse labeling (Figure 2B). In the adjacent segment, labeled cells were localized predominantly in the ventral epithelium near the wounding site (Figure 3E'). At further stages of regeneration, this tendency remained the same, and most of the EdU-positive cells were found in the epithelium closest to the bud-segment border (Figure 3F,G). During those stages (2 dpa + 24 h/48 h), the EdU label distribution was similar, although in the latter stage (2 dpa + 48 h), the proportion of EdU-positive nuclei was higher. Accordingly, the labeling index notably changed from 61.4 ± 8.4% at 2 dpa + 24 h to 90.6 ± 2.6% at the 2 dpa + 48 h stage (Figure 3iv, blue triangles). EdU-positive nuclei were found in the nervous system both in the wounded (Figure 3F i ,G i , arrows) and unwounded ganglia of the adjacent segments (Figure 3F',G') and in the muscle fibers (Figure 3F i ,G i , arrowhead). The increase in the LI value (the cumulative curve slope) reflects that the rate of entry into the S-phase was higher in experiment (2.2) compared to (2.1).
During the first 24 h of experiment (2.1), the LI drastically increased from 1.8 ± 0.7% (after 15 min of incubation) to 17 ± 1.9% (at 1 dpa + 5 h). The next increase occurred from 10 h (30.1 ± 4.1%) to 24 h (64.9 ± 2.8%), and by 48 h, the LI reached 85.7 ± 2.5%. Using these labeling indices, we plotted a cumulative curve (Figure 3iv). From the graph of (2.1), we estimated the growth fraction, which was 85.7%; the approximate length of the S-phase was 1.3 h, and the overall cell cycle length was approximately 33.2 h, which equals the sum of the break point time and the Ts length (31.9 h + 1.3 h). In experiment (2.2), the LI initial value was 22.3 ± 1.9% after 15 min of incubation, which reached 51 ± 3.1% after 10 h of incubation, and then only increased by approximately 10% over the next 14 h (by 2 dpa + 24 h the LI equaled 61.4 ± 8.4%). After 48 h of EdU incorporation, the LI was 90.6 ± 2.56%. For cumulative labeling after the 2 dpa stage in (2.2), the cell cycle parameters differed from the (2.1) values. These changes were multidirectional: the overall cell cycle length shortened to 25.7 h, which contrasts with the longer Ts of approximately 7.3 h, while the growth fraction decreased to 76% (Figure 3iv).

Pulse-Wait Labeling

We performed label-retaining assays (3.1-3.4) to evaluate the input of the wound-adjacent segments in the regenerative bud formation and the impact of wounding on proliferation (Figure 4). After short EdU incorporation, samples were rinsed and left to regenerate for up to 3 days. A similar experimental scheme was performed on non-regenerating juvenile worms.
Experiment (3.4) demonstrated the baseline proliferation in normal physiological conditions (Figure 4F-H). Incubation in EdU before and immediately after the amputation labeled cells that were mitotically active at the moment of labeling. Later on, during the waiting time, those cells retained the label and may have contributed to blastema formation. In intact worms (3.4), we evaluated the cellular distribution in segments of the posterior third of the body where we normally would have carried out the amputation. The overall number of cells there became visually more prominent at the successive time points (Figure 4F-H). Due to the dilution of the EdU label among the daughter cells, we suggest that differentiated cells undergo mitotic divisions during the normal physiological state, when juveniles are constantly growing and increasing their segment numbers. Proliferating cells were found in the gut and mesodermal tissues, such as muscles and coelomic cells. Within the ectodermal and neural derivatives, cell proliferation seemed to be less prominent, compared to other tissues, but nonetheless detectable. The labeled cells were forming "clusters" already after 1 day of waiting (Figure 4F), indicating that local proliferation was more prominent in specific parts of the same tissue. Incubation in EdU prior to amputation (3.1) gave results comparable with experiment (3.4) regarding the proliferation pattern in the segmental tissues (Figure 4A-C). Individual EdU-positive nuclei and their couples in muscle fibers (Figure 4A', arrow) and in the ventral nerve cord on the wound-adjacent side were observed at 1 dpa (Figure 4A). On the second day of regeneration, diluted labels were found in individual blastemal cells (Figure 4B'). The observed number of EdU-positive nuclei in the regenerative bud was visibly lower compared to the overall 2 dpa proliferation detected by the pulse labeling (Figure 2B).
Those faintly labeled cells, being daughters of cells mitotically active in intact animals, were found predominantly at the gut-epidermal border. A similar labeling pattern in the growing tissues of the regenerative bud was registered at the 3 dpa stage (Figure 4C). As for the wound-adjacent segments, starting at the 1 dpa stage and further on, most proliferating cells were found in the gut epithelium. Mesodermal tissues, the ventral nerve cord, and the superficial epithelium were actively proliferating not only at the amputation site, but throughout the entire segment. The labeling patterns in those segments were similar to those in the non-amputated samples (3.4), both tissue-wise and regarding the dilution of the EdU signal. This might indicate that cells undergoing mitosis before amputation have minimal impact on blastema formation. Thus, amputation might induce the mitotic activity that is necessary for the accumulation of reparation-responsible cells. To check this assumption, we incubated juveniles in EdU immediately after amputation in experiment (3.2). During the first 24 h after amputation, there were only a few proliferating cells in the epithelium, ventral nerve cord, mesodermal tissues, and gut (Figure 4D), which contrasts with the broader labeling of the segmental tissues in experiments (3.1) (Figure 4A) and (3.4) (Figure 4F). However, the first divisions in the (3.2) samples had evidently already occurred, since we observed pairs of cells lying near each other. After the next 24 h of waiting, the segmental tissue labeling was slightly broader (Figure 4E); however, it contrasted even more with experiments (3.1) (Figure 4B) and (3.4) (Figure 4G). In the regenerative bud, we observed diluted EdU labels (Figure 4E), which were present in many more cells than in experiment (3.1) (Figure 4B). Some blastemal cells had much brighter labels than others (Figure 4E, arrow), suggesting a slower mitotic rate.
Comparison of the proliferation profiles in experiments (3.1), (3.2), and (3.4) suggests that the mitotically active cells responsible for body growth were different from the cellular sources of regeneration, which entered the cycle in response to amputation. Additionally, experiment (3.3) aimed to identify the input of blastemal cells labeled at 2 dpa into the new segments. Mitotic cells incorporated the EdU, and the diluted or finely dispersed labels we later observed in nuclei marked their daughter cells. At the 3 dpa stage (2 dpa + 24 h wait), the localization of EdU-positive nuclei (Figure 4I) resembled the pulse experiment results (Figure 2C). The regenerative bud consisted of two visible zones that differed in their proliferation pattern: in the posterior part (pygidium), the EdU labeling was less intense (Figure 4I, white bracket) than in the anterior (segment-producing) part. The less intense pulse labeling in the pygidium at 3 dpa (Figure 2C,C', white bracket) compared to the (3.3) sample (Figure 4I, white bracket) indicates a differential decrease in the rate of cell cycle entry in different parts of the regenerative bud. In the wound-adjacent segment, EdU labels were present predominantly in the gut tissues; however, individual EdU-positive cells were found in the superficial epithelium, mesodermal derivatives, and ventral nerve cord. Waiting for 48 h resulted in even more diluted EdU labeling (Figure 4J), indicating that the blastemal and epithelial cells of the regenerative bud continued to proliferate until 4 dpa, which is consistent with the LI dynamics (Figure 2iv).

Spatial and Temporal Dynamics of EdU Incorporation in Regenerative Bud

Posterior restoration of the A. virens body is accompanied by extensive proliferation of cells in the regenerative bud as well as localized proliferation within the old segments (Figures 1D and 2).
The entire process of regeneration takes from 6 to 9 days (depending on environmental conditions) and proceeds through the standard stages of regeneration [2]. During the first 24 h, wound closure through muscle contraction and formation of the tissue plug takes place, and later, wound epithelium forms at the amputation site. The wound epithelium is mitotically inactive (except for rarely found S-phase nuclei) due to its origin from the intestinal and epidermal epithelial cells closest to the wound, which simply fuse together, forming this transient layer of dedifferentiating cells [1,2,46]. Adjacent tissues of the wounded and unwounded segments did not demonstrate a proliferative response to the amputation at this early stage (Figures 2A and 4A). As the regenerative bud grows and develops, the wound epithelium is replaced with regular epithelium, which becomes much more mitotically active and remains so until the first differentiated structures (pygidium and anal cirri) are restored at the terminal posterior body end. On the second dpa, the regeneration blastema (i.e., the internal mesodermal cells of the early bud) is formed in A. virens. It actively incorporates EdU, as does the epithelium above it (Figure 2B). Closely examining the distribution of EdU-positive nuclei, we found that, already at this stage, the bilateral terminal region of the posterior body end seemed to be more proliferatively active, indicating that even before the terminal structures and the newly formed segment can be visualized, they are distinguishable in terms of the intensity of their cell cycle entry. Interestingly, some molecular markers found in P. dumerilii, such as Pdum-dlx (a marker of appendage formation), Pdum-en, and Pdum-wnt1 (early indicators of segmentation), tend to show similar patterns of expression at this stage of regeneration [9]. At the 3-4 dpa stages in A. virens, the terminal structures acquired lower proliferative activity, whereas cells on the old segment side seemed to incorporate EdU more intensively (Figure 2C,D). At the 3 dpa stage, the pygidium region with pygidial cirri became evident, as well as a new segment with coelomic sacs, though without parapodia [46]. We observed a peak LI of 37% at 4 dpa, which correlated with the active morphogenetic processes at this stage. Another important process taking place at this time was the re-emergence of the posterior growth zone as a presumptive ring of cells, and its functional activation. In our work, we could rarely visualize the growth zone as a row of synchronously dividing cells; however, previous work on A. virens showed that multipotency markers, such as Avi-vasa and Avi-Piwi1, first reappeared in the growth zone at the 3 dpa stage, indicating its activity [42]. In the nereidid Perinereis nuntia, the growth zone cells are characterized by synchronized cell cycle entry and other properties allowing for resegmentation [51]. Hox gene expression patterns in A. virens [43,52] and in P. dumerilii [53] have also indicated active processes of axial restoration at this stage. Thus, the differential proliferation pattern presented here may be established by the preceding expression of these regulatory genes. By the 6 dpa stage, cell proliferation in A. virens became less active and showed an obvious anterior-posterior gradient, with higher EdU incorporation in the posterior segmental tissues adjacent to the growth zone (Figure 2E). From this stage onward, we infer that the reparative process was complete and normal posterior growth began. The overall labeling dynamics are similar to those described for other errant polychaetes: D. bermudensis [44], S. malaquini [10], and P. dumerilii [9].
Contribution of the Wound-Adjacent Tissues to Formation of the Regenerative Bud

Proliferative activity in the wound-adjacent segment is of particular interest, since it reflects the mechanisms of the induction of regeneration sources. Cells of the so-called old (but physiologically growing by nature) tissues can either locally dedifferentiate and provide the source of blastemal cells [9,10,15], or their multipotent precursors can migrate to the wound site from the adjacent segment, as in C. teleta [8], or from other parts of the body, as in some oligochaetes [6,45]. Migrating cells do not necessarily have stem cell properties; several distinct cell types can be identified as they migrate towards the wound site and undergo divisions [7]. We used different pulse-wait combinations in our experiments to identify the input of segmental tissues into blastema formation (Figure 4). By incubating in EdU before (3.1) and immediately after (3.2) amputation, we assessed the proliferative response to wounding and compared it with the proliferation occurring under normal conditions (3.4). Our results show that the blastemal cells were predominantly descendants of cells entering the cell cycle after amputation (Figure 4E). Proliferation present before amputation contributed very little to the newly restored parts of the body (Figure 4B,C). On the contrary, most of the EdU-positive cells in the (3.1) samples remained in the old segment's tissues, demonstrating localization and abundance similar to those registered in the intact animals (3.4) at the same waiting time (Figure 4A vs. Figure 4F, Figure 4B vs. Figure 4G, Figure 4C vs. Figure 4H). However, labeling immediately after amputation (3.2), with the subsequent waiting period, revealed pairs of EdU-positive nuclei in the old segment, which we interpreted as resulting from divisions (Figure 4D).
Compared to the (3.1) samples, there were far fewer label-retaining cells, which indicates an immediate decrease in the proliferation of the segmental tissues in response to amputation. Such drastic differences in the EdU distribution between the (3.1) and (3.2) samples support the hypothesis that A. virens restores its segments through the local dedifferentiation of cells, which become mitotically active in response to wounding [42]. There is evidence for dedifferentiation within certain structures, such as epithelium and muscles, in the reparative process across annelids [1,2]. In A. virens, dedifferentiation seemed to take place in the longitudinal muscles, which we identified by clusters of EdU-positive nuclei located near the amputation site within muscle fibers at the early stages of regeneration (Figure 4A). Muscle dedifferentiation at the early stages of anterior regeneration has also been described in the polychaete O. fusiformis [15]. Tissues of the nervous system also appeared to contribute to A. virens regeneration. Nerves from the severed ventral nerve cord project into and innervate the regenerative bud [46]. Experiments on EdU pulse and pulse-wait labeling at the 1 and 2 dpa stages revealed EdU-positive cells in the ventral nerve cord at the wounding site (Figures 3B and 4A-C). However, the labeling patterns do not imply continuity between the old neural tissues and the ganglion-producing cells in the regenerative bud. Recent studies on nereidid regeneration have described the neural expression of certain Hox genes [43,53] and FGF pathway components [32] at the early stages of regeneration. A neural hormone regulates regeneration in P. dumerilii [54]. Nerves themselves are known to regulate and induce the formation of the regenerative bud, both in annelids [55][56][57][58] and in vertebrates [21,23,59,60]. Thus, the regulation of proliferation by the nervous system in annelid regeneration seems to be an important area for future research.
As for the intestine, which appeared to be the most proliferatively active tissue in the majority of our experiments, its cells were the only source of newly formed gut tissues (Figure 4A-C). The same conclusions hold for P. dumerilii [9], S. malaquini [10], and D. bermudensis [44]. The intestine is likely remodeled by morphallaxis and fused with the epidermal epithelium at the wound site [2]. The wound-adjacent intestinal tissues lacked dividing cells during the first two days of regeneration. At this time in A. virens regeneration, the posterior portion of the intestine launches the expression of foxA [41], a marker gene for the ectoderm-derived gut tissues. Thus, the inferred molecular morphallaxis of the gut [41] may be responsible for the delayed proliferation in this organ.

Cell Cycle and Subpopulations of Proliferating Cells in Blastema

Through the cumulative labeling approach, we evaluated the approximate parameters of the cell cycle at the early stages of A. virens regeneration. The S-phase length varied from 1.3 to 7.3 h depending on the experiment type (Figure 3). Similar Ts durations were described in sponge regeneration, but the overall cell cycle duration there was significantly shorter [35]. The G2, M, and G1 durations could not be strictly evaluated in our experimental setup, but the overall duration of the cell cycle (Tc) was 33.2 h in the type (2.1) experiment and 25.7 h in (2.2). The growth fraction in the regenerative bud never reached 100%, which means that some cells either had completely different cell cycle kinetics (e.g., the synchronously dividing cells of the posterior growth zone [51]) or stopped cycling very early. Comparing experiments (2.1) and (2.2), which examined cell populations at 1-3 dpa and 2-4 dpa, we noticed an increase in the Ts but an overall shortening of the Tc (Figure 3iv).
Changes in the calculated cell cycle parameters might indicate an ongoing transformation of cell states, such as movement from the quiescent state to active mitosis, or changes in the composition of a heterogeneous population. A hypothesis of punctuated cycling was initially proposed for newt limb regeneration as an explanation for the different growth rates of adult and larval newts. The size of the quiescent cell population determines the speed, rate, and success of regeneration in newts [61]. It is noteworthy that the size of the G1-G0 cell population increases if the regeneration blastema in the newt limb is denervated [62], indicating that nerves and wound epithelium act as controlling factors of regeneration by influencing the cell cycle [63]. Despite the significant evolutionary distance between these organismal models, the observations on amphibian regeneration are consistent with our results on A. virens, which prompted us to look for factors that slow entry into the cell cycle (detected as the decrease in LI from 4 to 6 dpa) and reduce the GF as early as 2-4 dpa. We also noticed heterogeneity in the spatial distribution of the proliferating cells in the A. virens regenerative bud. The superficial epithelium and blastema seemed to incorporate EdU at different rates. Specifically, at the end of experiment (2.1), virtually all epithelial cells bore the EdU label (by visual examination), while blastemal cells demonstrated visibly lower mitotic activity (Figure 3). Heterogeneity of the blastemal cells was also confirmed by experiments (3.1) and (3.2), which showed distinct labeling patterns (Figure 4). In addition, the regenerative bud cells in the (3.2) samples had varying intensities of EdU labels (Figure 4E), suggesting different mitotic rates. Anterior-posterior heterogeneity in the proliferative pattern was also detected by pulse labeling after the 2 dpa stage in A. virens.
Altogether, the presented and previously published data allow us to speculate that, from the moment of their formation, the regenerated tissues of A. virens possess polarity and comprise multiple subpopulations with different cell cycle parameters. This probably reflects a fundamental principle of organ regeneration, since the fin blastema in zebrafish consists of two domains: non-dividing msxb-expressing distal cells and actively dividing proximal cells [64]. Careful examination of cell kinetics in diverse regeneration models will clarify this issue.

Conclusions

The problem of proliferation in animal regeneration has not yet been described in great quantitative detail. In this work, we analyzed the spatial-temporal proliferation patterns, cell kinetics, and cell sources of regeneration in A. virens. Our results show that the annelid regenerative bud is a complex, heterogeneous, and highly dynamic structure. Understanding how it originates and functions at the cellular level requires finding out how exactly the molecular factors act and how proliferation control is accomplished. Our research on a promising model is an important step towards a comprehensive study of the regeneration phenomenon. In addition to describing particular quantitative and qualitative characteristics of proliferation, our work raises questions about the general principles of regenerating tissue organization and its regulatory factors. Further progress on these issues will require, first of all, comprehensive studies of non-standard organismal models, which would allow for comparative analysis at the tissue, cellular, and molecular levels.
New Zealand Tobacco Retailers' Understandings of and Attitudes Towards Selling Electronic Nicotine Delivery Systems: A Qualitative Exploration

Introduction: In 2017, the New Zealand Government signalled its intent to legalise the widespread sale of Electronic Nicotine Delivery Systems (ENDS), which many New Zealand retailers have actually sold for several years. Although ENDS uptake may reduce the harm smokers face, it requires them to adopt an entirely new practice; we therefore explored how effectively existing, non-specialist, tobacco retailers could advise and support potential quitters.

Methods: Using in-depth interviews with 18 tobacco retailers (prior to legislative change), we explored knowledge of ENDS, attitudes towards selling ENDS and supporting customers' cessation attempts, perceptions of ENDS' risks and benefits, and views on the proposed legislation.

Results: Participants generally had poor knowledge of ENDS products and provided either no advice or gave incorrect information to customers. They believed that the main benefit consumers would realise from using ENDS rather than tobacco would be cost savings; relatively few saw ENDS as smoking cessation devices. Those who stocked ENDS did so despite reporting very low customer demand, and saw tobacco as more important to their business than ENDS, citing higher repeat business, ancillary sales, and rebates. Participants typically supported liberalising ENDS availability, though several expressed concerns about potential youth uptake.

Conclusions: Tobacco retailers' limited understanding of ENDS, and the higher value they placed on tobacco, suggests they may have little capacity or inclination to support ENDS users to quit smoking. Given tobacco companies incentivise sales of smoked products, retailers have no reason to prioritise selling ENDS over tobacco.
INTRODUCTION

As in many countries, the electronic nicotine delivery systems (ENDS) products sold in New Zealand (NZ) have evolved rapidly in recent years, as has the ENDS supply chain. At the time of the research, NZ law did not allow the sale of nicotine-containing e-cigarettes and e-liquids, although sales via internet-based outlets and at brick-and-mortar tobacconists, specialist 'vape shops', and convenience retailers had occurred for several years. [1] This anomalous situation, together with a recent court decision, led the NZ Associate Minister of Health responsible for tobacco to propose amending existing legislation to allow ENDS sales (Supplementary File 1 outlines the history of NZ ENDS regulation). [2]

Despite these restrictions, ENDS use in NZ has been reasonably high, with around 17% of people aged 15 years and older having ever used ENDS; 6% use them daily and 3% at least weekly. [3] While regular ENDS users tend to be adult quitters, [4] the high prevalence of ever-use among 15-24 year olds (30%) has prompted concern, as has the rapid increase in ever-use among NZ 14-15 year olds, from 7% in 2012 to 33% in 2018. [3,5,6]

As the NZ Government is developing legislation that will allow sales of vaping products containing nicotine wherever tobacco is sold, it is crucial to consider the impact of greater ENDS availability. The Government's proposal to liberalise ENDS availability could theoretically help improve public health and reduce health inequities by encouraging existing smokers to switch completely to ENDS, as these products are likely to be substantially safer than combustible cigarettes. Yet, as the Government itself acknowledges, the extent to which ENDS can help achieve population health gains depends on "the extent to which they can act as a route out of smoking for New Zealand's 550,000 daily smokers". Most NZ ENDS users report concurrent smoked tobacco use (64%), [7] and our earlier
research suggests ENDS users may require better information and support to navigate transitions from dual use to exclusive ENDS use. [8,9] Retailers could therefore play a crucial role in determining whether or not the intended population health gains are realised. To meet smokers' needs, and to help realise population health benefits from harm reduction, such as switching to ENDS, retailers will require sound knowledge of the products they sell and of how switching occurs. Specialist vape shops, typically staffed by people who have themselves transitioned from smoking to using an ENDS device, may provide this advice and guidance. [10,11] However, it is unclear how effectively non-specialist retailers support transitions from smoking to vaping. To inform the proposed legislative change, we examined the following research questions:

RQ1: What knowledge do non-specialist tobacco retailers have of ENDS products, and how effectively could they assist smokers wishing to use ENDS in a cessation attempt?

RQ2: What are non-specialist tobacco retailers' views on the Government's plans to liberalise ENDS sales, and on selling ENDS as a potential alternative to combustible tobacco?
Sample and Procedure

We developed a semi-structured, in-depth interview guide that outlined specific discussion topics but used flexible wording and question sequence to maintain a natural and conversational interview (see Supplementary File 2). [12] Drawing from a national database of 5,500 known tobacco retailers compiled in previous research, [13] we recruited retailers using a purposive sampling strategy [12] stratified by area-level socioeconomic status (SES), urban/rural location, and outlet type. We drew an equal number of retailers from the Otago and Wellington regions to ensure we obtained a varied representation of tobacco retailers in NZ. We recruited 18 participants, after which saturation (defined as no new idea elements in two consecutive interviews) was achieved and data collection stopped (Figure 1 outlines how the final sample was achieved).

LR and LT conducted the interviews, which lasted an average of 43 minutes, between November 2017 and March 2018. The interviewers visited potential participants, told them the study aims, gave them an information sheet, and set an interview time with those who consented and met the eligibility criteria (participants were excluded if limited English language comprehension compromised their ability to provide informed consent, or if they did not wish to be audio-recorded). All participants gave written informed consent before the interview commenced, and were offered a NZ$40 gift voucher to reimburse any expenses they incurred by participating in the study. The study was reviewed and approved by a delegated authority of the University of Otago Human Ethics Committee.
Data Analysis

The interviews were transcribed verbatim and checked by LR and JB; LR drafted a coding structure using the interview guide as an initial framework, which she, JH, LM and JB reviewed and agreed. We analysed the data using qualitative descriptive analysis, an approach designed to elicit information for practical applications. [14] Qualitative description aims to provide "rich, straight descriptions" and thus differs from more interpretive approaches designed to inform theory development. [14] JB, JH and LM coded five transcripts independently, compared and refined initial categories, discussed key findings and agreed on policy implications. JB subsequently coded the remaining transcripts in consultation with LR, LM and JH.

Participant Characteristics

The sample of 18 participants comprised twelve men and six women, aged 25-59. Seven participants were Indian, six were New Zealand European, four were Chinese, and one identified as both New Zealand European and Māori. Nine of the retail outlets sampled were convenience stores, eight were supermarkets, and one was a service station. Table 1 contains details of store and participant characteristics; participants are referred to as P(n), with Y indicating they sell ENDS and N indicating they do not.
Table 1: Participant Characteristics

[Table columns: Name, Outlet type a, NZDep b, Stock ENDS, Role, Time in Role, ENDS User; the table body is not recoverable from the source text.] a Convenience stores are defined as small businesses that sell primarily food, beverages and a limited range of household goods; in NZ, they are not permitted to sell alcohol. b The NZDep2013 scale provides an ordinal score from 1 to 10, where 1 represents the least deprived areas and 10 the most deprived areas. For this study, NZDep2013 is categorised for each outlet as low (deciles 1-3), medium (deciles 4-7) and high (deciles 8-10).

These comments delineated participants as information providers (as opposed to advisors), reflected their widely held view that smoking was a personal "choice", and enabled them to continue stocking and selling tobacco. Even those for whom tobacco sales presented a "dilemma" rationalised smoking as "people's choice" and rejected pleas from customers wanting to quit smoking as outside their role "to smile, take your money".

Unlike tobacco, for which participants reported receiving incentives via rebates and enjoyed higher product turnover, repeat custom and perceived ancillary sales, they had little incentive to sell ENDS. Ironically, the main benefit of ENDS to consumers (reduced cost) was the main disadvantage to retailers. P13(Y) explained: "it's a small shop, so we want our customers to be repeat every time, to come again and buy the smokes and buy some other stuffs... if the customers buy the e-cigarettes, ones like the long-lasting, it's very hard... to get the customers back to the shop… Like I want the customer to come every day. At least four or five times a..."

Although no participants thought ENDS made more than a negligible contribution to their turnover, most viewed proposals to liberalise the sale of ENDS favourably. However, some felt concerned that point-of-sale promotions (banned for smoked tobacco products in 2012) would appeal to young people and could prompt ENDS experimentation.
Our findings have important implications for impending changes to NZ's Smokefree Environments Act 1990, [19] and for international tobacco control advocates and policymakers. First, the NZ Government considers vaping products to have the potential to help achieve NZ's Smokefree 2025 goal, a goal which will only be achieved by greatly accelerating smoking cessation rates. Yet, research examining transitions from smoking to ENDS use suggests some smokers find vaping uptake difficult; [9,20] some remain dual users while others revert to smoking, [8,21] despite most wanting to stop smoking completely. Even 'pod mods' such as JUUL, which are easier to use than tank mods, require fundamental behaviour changes.

Allowing ENDS sales from non-specialist stores, whose staff have weak knowledge of the devices they sell and little commitment to go beyond their commercial remit, seems unlikely to support even those smokers wishing to use less technically complex devices to quit smoking. Switchers need to learn how to use and maintain vaping products, find the 'right' combination of device and e-liquid nicotine concentration(s), and recreate ritualistic practices enacted with smoking. [9,20] Greatly limiting the supply of smoked tobacco and disallowing incentives offered to retailers, alongside cautious increases in access to ENDS, could rebalance the nicotine-product market in favour of less harmful options. Careful monitoring could identify positive effects and unintended outcomes, and enable availability to be fine-tuned. To create a nimble policy framework, policy makers should license ENDS retailers (and all tobacco retailers), require them to demonstrate knowledge of all devices they sell, and make it mandatory to advise users that they should first stop smoking completely and then quit ENDS use. This approach could manage commercial motivations to prolong ENDS use and promote ENDS cessation among those vapers who can quit. Smokefree Enforcement Officers, who in NZ routinely
visit tobacco retailers, educating them on the restrictions relating to the sale of combustible tobacco products and undertaking controlled purchase operations to test legislative compliance, could be directed to monitor and redress non-compliance with ENDS sales by convenience retailers.

Our study has some limitations; we cannot generalise from our small sample of retailers, but the diversity of our sample is likely to have captured the range of views that would have been found in a larger qualitative study sample or a representative survey. The detailed data on participants' beliefs are a key strength of our study, which is the first to explore how ENDS are sold. Future work could review the advice provided by specialist retailers, including vape stores and pharmacies, and variations in advice quality by retailers' own vaping experience. Such studies could inform future regulations and help develop criteria ENDS store owners must meet to obtain a licence. In conclusion, convenience store owners who lack knowledge of the ENDS they sell may undermine rather than support smoking cessation. Policies regulating the availability of ENDS need to recognise the dynamic nature of the ENDS market and the acculturation process smokers undergo as they transition from smoking to exclusive ENDS use. Dramatically reducing the supply of smoked tobacco while restricting ENDS sales to specialist vape stores

Retailers' views on advising consumers varied, and some acknowledged they could not provide appropriate guidance: "The companies do give us a brochure that we can just pass on to them… there probably isn't enough training in store to be able to fully help somebody and make their mind up that this is the way that they should go" (P12,Y). Others showed no interest in learning how to advise people wanting to quit smoking: "I will
take their money, and that's it. But I most certainly ain't gonna stand there and say, 'Well, look,' you know, 'do you know how to use them, and if so... or if not, this is how.' I mean, if they're big enough to buy them, and old enough to buy them, they surely should be able to read the instructions" (R4,N).

Participants used numbers indicating nicotine content to compare ENDS to tobacco and to suggest how customers might wean themselves off nicotine (three indicated the highest nicotine strength in a commonly stocked e-liquid range). P1(Y) explained: "…number three got the same volume of nicotine we've got in the actual cigarette… If people are intend[ing] to drop down the volume of nicotine, they can use the number two or number one and… reduc[e] them day by day... then they come to shisha [a no-nicotine device] and… finish it". Yet although P1's overview was correct, he later described offering contradictory advice, such as starting on zero-nicotine products: "if the customer asks for it, like, 'I want to quit', so we offer them [electronic] shisha… this is what you can try, and buy that. We don't show the vapes, because vape is similar to the smoke… We show them directly shishas, like, which got no nicotine…. you can taste the flavour...
and feel like you're smoking, but you're not getting any harmful chemicals in you." Another retailer explained: "I mean, nicotine does a heck of a lot of damage to you... Over years. You know. I mean, one smoke or two, probably wouldn't do a thing… But if you get addicted to it and smoke like I did... Oh my gosh. It does a heck of a lot of damage… So they [ENDS] would too... I would assume". Similarly, P1(Y) noted: "As the [ENDS supplier] says, they've got the same nicotine. So, same nicotine, same harm". This confusion over nicotine appeared to arise from sales materials and advice, and led to misperceptions about the relative risk both products posed. Retailers relied on customers' comments about ENDS' lower on-going costs and convenience to ascertain why people used ENDS, and only a minority saw ENDS as a potential reduced-risk product. P2(Y) summed up these perceived advantages: "It [ENDS] costs so much less. So it's eight dollars for the same amount that you pay 70 dollars for normal tobacco… they also tell me is… if they feel stressed, they can't wake up at 2 o'clock and go outside the house to smoke, because normal smoke make lot of smoke around. Smell too. They can smoke this one in the bed too, and no, no smell". Retailers also commented on the increased control smokers reported: "You don't have to… finish your smoke [with ENDS] … you wanna smoke put in your pocket and then start again" (P5,Y).

One retailer reflected: "it's sort of dilemma, isn't it? 'Cause it's a part of the business. When you saying, you try to have a responsibility for the public, so it's sort of really a bit ah… my priority is people's own choice. I respect people's choice, that's it". P18(N) went even further: "I've had people coming in and saying, 'If I want to buy cigarettes… don't sell them to me."...
From my point of view, I'm not getting involved. I'm not your mother, I'm not your caregiver. I'm here basically to smile, take your money." Stocking ENDS meant retailers could extend their product range, but they reiterated they would provide information rather than offer advice: "So what is my role? I would just tell them what was available. The choice is theirs… Yeah. I'd tell them that we have the cigarettes. I'd tell them what we had. If they ask for vaping, I'd tell them what we had, and then... the choice is theirs. I'm not there... to make their choice for them" (P17,N). Aside from respecting personal choice, tobacco retailers felt reluctant to promote ENDS as a smoking cessation tool because they lacked specialist knowledge: "From a personal perspective... if you were a customer and decided to try and quit smoking using vaping… I'd probably go to a pharmacist. I don't know why, but, for me, a pharmacist would probably have more information" (P10,N). Others queried whether ENDS helped smokers quit and noted they had seen customers relapse from vaping to smoking: "They're still two very different things, they're not the same, they're kind of similar, but...
I've had a lot of people that have gone back to smoking" (P9,Y). Another suggested ENDS "could be bad for you… it's hard to say, like I think this is one of those things that over time, like when they do, do the research and do all the testing behind it, then you'd be able to tell". One participant had no interest in selling ENDS, which she thought would be incompatible with her store's positioning. Instead, she felt sales should be restricted to specialist stores: "probably just particular outlets I think, there's a couple of specialist tobacconist shops in [area] … I definitely wouldn't be interested in selling them… yeah, I mean we're supposed to be perceived as a grocery store" (P18,N). Several participants found it difficult to reconcile more liberal promotion of ENDS with measures to reduce smoked tobacco use. P8(N) explained: "the only thing I probably would question would be the promotional side of it… what are the limits on the promotion of the product? Is it just point-of-sale?.... I probably don't agree with that because... it's a product that still has nicotine in it and it is still promoting the act of smoking whether it be with or without nicotine". P16(Y) raised more specific questions: "Why... advertise [and] have the [free] samples? That's encouraging people to smoke… I think that's the bad policy... If you sell, just sell. Why you have give a free sample? The people that not smoke, they'll come… to buy
Land cover: to standardise or not to standardise? Comment on 'Evolving standards in land cover characterization' by Herold et al.

A recent article advocated the adoption of a single standard for all land cover classifications. The authors argued that variations in classification were problematic, standards solve problems related to classification heterogeneity and land cover is the fundamental land variable. This letter challenges these arguments: 1. methods exist for integrating disparate data, many based around data semantics; 2. standards are themselves problematic as they are frequently revised (e.g. soils) and, because they always lag behind current activities, cannot represent the depth of knowledge held within a community such as land cover; 3. scientists working in other disciplines may view land use as the elemental variable driving many other processes and they construct land cover in a very different way. This letter argues that as most geographic data and especially land cover is a socially mediated construct (there are no agreed fundamental units), fixing a specific conceptualisation of land cover into the 'aspic' of a formal standard does not represent a scientific advance.

Introduction

In a recent paper in the Journal of Land Use Science, Herold et al. (2006) advocate the adoption of a single standard land cover legend in order to overcome variations in the way that land cover features are recorded. They propose that the UN Land Cover Classification System (LCCS; Jansen and Di Gregorio 2004) be adopted as the standard and their arguments can be summarised thus: 1. variation between different land cover mappings (legends, concepts, semantics, etc.) is problematic to the user and research community; 2. a standard classification system solves this problem and placing it within the ISO suite of standards legitimises its use; and 3. land cover is the fundamental land variable and therefore most urgent for standardisation.
In the following sections we attempt to rebut these arguments. We would like to establish at the outset that we are not opposed to the LCCS system advocated in that paper. It is one among a number of useful classification schemes, albeit one with a large number of proposed classes. We are, however, opposed to standardisation in this area, regarding it as a false paradigm, ignoring as it does the validity of personal opinion, scientific advances and human practice.

Diversity of classifications

Currently, data producers use a classification that is appropriate for their context and related to their specific socio-political and technical setting. This approach allows the data producer to embed subtle distinctions in their classification and to generate a classification that is responsive to their context (i.e. it is useful and not just usable). Changes in policy, sensors, method and environment encourage the use of subtle variations in classifications. Imposition of a standard would either lose that subtlety of conceptualisation through the granularity of its specification or become impossibly detailed, referring to little more than parameters of the original data. The LCCS imposes a view of land cover categorisation which is strictly and precisely hierarchical. It often imposes crisp univariate distinctions and aggressive aggregations of concepts. For instance, when using LCCS to specify the characteristics of woody vegetation, the difference between an 'open' and 'closed' canopy occurs at precisely 65% density, shrubs and trees can only be distinguished by their height, while palms, tree ferns and bamboos are forced to be trees (despite the nonsense this makes of linking land cover to ecosystems). These granularity distinctions result in the loss of descriptive richness.
There are 'good' reasons for using different classifications in different contexts which arise from scientific, technical, organisational, institutional and political influences, and those contexts should not be ignored. In response to these variations, the research community has developed a number of possible solutions to deal with inconsistent semantics and conceptualisations. For land cover these include using expert opinion to compare the global land cover classification schemes of GLC-2000 and MODIS (Fritz and See 2005), modelling the semantic relations between national land cover data in the UK (Comber et al. 2004), comparing the semantics of classification systems of the US National Vegetation Classification Standard and the European CORINE Land Cover System (Ahlqvist 2005), and expanding the metadata to include semantic and conceptual aspects associated with land cover information to facilitate translation (Schuurman and Leszczynski 2006). All these methods allow the fusion of data from diverse semantic backgrounds, exploit the heterogeneous semantics of land cover datasets and conform to the historical working practices of people. They do not impose the fixed constraints of standardisation.

Towards a standard

To describe the principal endeavour, Herold et al. use the terms land characterisation, land cover characterisation, land cover assessment and land observation repeatedly and apparently interchangeably to describe the recording of land cover. There is an implication that there may be some deeper distinction between the terms, but it is never clarified, so the reader wonders whether there is a distinction. This is not appropriate for writing on standards, where the terms should be clearly defined and where the use of synonyms with technical meaning should be clearly indicated. Standards are theoretically useful because they provide a common language, enabling parties to exchange data without misunderstandings.
However, as their specification is a compromise between interested parties and because they lag behind activity, they cannot represent the depth of knowledge held within a community. Standards are further problematic because of their scientific background which permeates the endeavour and which denies the socially constructed nature evident in most geographical information, including land cover (Comber et al. 2005). Herold et al. are promoting the adoption of LCCS as a standard and describe various steps they are taking to help it become a standard. However, it is apparent that developing LCCS as a standard involves a very narrow set of land cover mapping practitioners, principally those involved in various United Nations and European activities. References cited in the paper are limited to FAO/UNEP and European global mapping initiatives such as IGBP-LUCC and GLP involving GLCN, GOFC-GOLD, GTOS, UNEP and ESA. Essentially, Herold et al. are arguing for an application-specific standard. Herold et al. suggest that they are proposing a similar approach to that used in soil science (soil mapping) since the 1960s. However, soil scientists in many different countries have developed alternative classification schemes (taxonomies), just as land cover mappers have, and they continue to do so. Secondly, there are actually at least two international schemes: the US Soil Taxonomy (Soil Survey Staff 1999) and the FAO UNESCO Legend of the Soil Map of the World (FAO/UNESCO 1990). Herold et al. suggest that the latter is the international standard. However, it is actually based on the Soil Taxonomy system of diagnostic properties which predated it. It was first published in 1974 identifying 26 Great Groups of soils, republished and revised in 1988 with 28 Great Groups (some groups being removed and others introduced) and finally adopted as the World Reference Base (cited by Herold et al.) for soils.
Therefore, it is not true to say that a standard has been in use for soils since the 1960s, since it is not a standard (not everyone uses it; Eswaran et al. 2003), nor has it remained unchanged. The fact of the matter is that soil classification is still a national pastime with a heterogeneity of approaches and discordant classifications, with international classifications existing in the background to which any local classification scheme is compared, just as for land cover mapping. Changes in classification schemes reflect the changing conceptualisation of the underlying phenomenon. Such change is inevitable, as scientific understanding is not static. Indeed, at different times the way a phenomenon should be viewed, and the basis of any taxonomy, can change dramatically. Whilst standards are built on a particular paradigm, changes in scientific paradigms are a necessary and desirable part of the advance of science.

Land cover as the basis of land information

Herold et al. assert that 'it is essential to base a common system for land use classification on existing land cover standards to ensure full compatibility between them' and that 'land cover and land use transitions have to be interoperable' (p. 162). This is a conceptually flawed argument. We might agree that at any time and place there is a land cover to some level of observable granularity. However, land use is more dynamic. Any piece of land may have multiple uses associated with it: a woodland can be used simultaneously for recreation, timber production and hunting. Other land uses are alternate (e.g. the field with cows may be the village football pitch at weekends). Other uses take place on more than one type of land cover. Therefore, because there may be more than one activity taking place and any given use may take place on more than one cover, the relationships between land use and land cover may not be one to one, but many to one, one to many or many to many.
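The non-one-to-one relationship between use and cover argued here can be made concrete with a small illustrative data structure; the covers and uses named below are taken from the examples in the text, and the mapping itself is a sketch rather than any real classification:

```python
# Sketch of the argument that land use <-> land cover is not one-to-one:
# one cover can carry several uses, and one use can occur on several covers.
# The entries are illustrative, drawn from the examples in the text.
cover_to_uses = {
    "woodland":  {"recreation", "timber production", "hunting"},
    "grassland": {"grazing", "recreation"},  # the cow field / weekend football pitch
}

# Invert the relation: each use maps back to every cover that supports it.
use_to_covers: dict[str, set[str]] = {}
for cover, uses in cover_to_uses.items():
    for use in uses:
        use_to_covers.setdefault(use, set()).add(cover)

# One use, many covers; one cover, many uses: the relation is many-to-many,
# so a direct "interoperable transition" between the two cannot be assumed.
assert use_to_covers["recreation"] == {"woodland", "grassland"}
assert len(cover_to_uses["woodland"]) == 3
```

Because the inverted mapping is not a function in either direction, standardising one side (cover) cannot by itself fix the semantics of the other (use).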
These multiple relationships, representing different dimensions in land recording, make full and direct compatibility and interoperable transitions between land use and cover problematic. The statement that 'Land cover provides the common ground for different focus areas in land assessment' (p. 162) not only assumes that there is a common understanding of land assessment but also that land cover is the elemental variable. When viewed from a socio-economic perspective, clearly land use is much more important and, in spite of multiple uses of the same land, more appropriate as an elemental variable than land cover. Even within natural resource survey, however, many people might think that soil information is the basis of scientific land assessment, while others believe that geology is the basic unit. The perception of precedence of mappable phenomena depends on personal training, not on any natural order as implied by Herold et al. That some might consider ecosystems as the basic variable is even suggested in the view of Herold et al. that 'an ecosystem reference classification system should be fostered, but has to be linked to land cover as [a or the] common land surface feature to allow compatibility' (p. 162). This mistakenly assumes that land cover is conceptually important to ecosystems, although it is not at all clear how this will facilitate compatibility or even what will be compatible. We suggest that land cover as a concept is actually unknown to many researchers in ecosystems, and to suggest that ecosystems are synonymous with land cover types is fundamentally mistaken. Indeed the whole argument for standardisation of land mapping based on land cover ignores the work on standards for ecosystem recording and description by the Taxonomic Databases Working Group for Biodiversity Information Standards (http://www.tdwg.org/).

Interoperability: research areas and guiding principles

We agree with Herold et al.
that the different conceptualisations in how to represent land cover can be problematic. Our fundamental difference appears to be that we wish to recognise, acknowledge and perhaps even celebrate that diversity. What we are concerned with is that people (data producers and users) recognise that most geographic data, and especially land cover, is a socially mediated construct: there are no agreed fundamental units. We want producers to acknowledge the influence of changing scientific knowledge, available technology and society's needs; we want users to realise that their conceptual model of the world may differ from the data producer's. We do not believe that fixing one conceptualisation from a narrow (albeit highly experienced) subset of expert producers into the 'aspic' of a formal standard represents a scientific advance.
Influence of Social Media towards the Selection of Hollywood Smile among the University Students in Riyadh City

Background: Hollywood smile refers to the aesthetic development of dental appearance inspired by the beauty displayed by movie actors. Therefore, the present study was conducted to determine the extent of social media's effect on the decision making of university students towards selecting a Hollywood smile as the choice of their aesthetic treatment. Materials and Methods: This cross-sectional study was conducted by utilizing a self-designed closed-ended questionnaire among undergraduate students from various public as well as private universities of Riyadh city. The questionnaire was constructed online using Google Forms and began with questions related to demographics, followed by questions such as: Do you notice celebrities' smiles on social media? Have you visited a dentist solely after getting inspired by a celebrity's smile? Responses were on a 5-point Likert scale ranging from highly dissatisfied or strongly disagree to highly satisfied or strongly agree, wherever applicable. Chi-square tests were used to compare differences among groups, with significance set at P < 0.05, using SPSS version 19. Results: The majority of the female participants reported noticing celebrities' smiles on social media. Influence of celebrity smiles on social media was greater among older participants. However, the term "Hollywood smile" was slightly better known among younger participants. Conclusion: The overall effect of social media on the decision to opt for a Hollywood smile was found to be moderate. More studies should be conducted to investigate how much social media is affecting the perceptions of youngsters.

Introduction

Hollywood smile refers to the aesthetic development of the dental appearance, which has now become a dental term.
[6] Social media has been a revelation for the dental business in many countries. It has become one of the most common and cheapest ways to promote one's dental business and related products. In recent times, people's choice of dental treatments has increasingly included aesthetics related to dentofacial structures. Improving one's smile has been the most sought-after aesthetic modality among social media users, which has greatly boosted dental businesses. [7,8] When discussing the improvement of a smile, one cannot ignore the significance of the Hollywood smile, which has now become an important patient demand of dental health care providers. The use of veneers in aesthetic dentistry has evolved into the emergence of this terminology, which is now used by patients as well as dental practitioners on a regular basis. [9] The young generation has taken over social media, with the majority of their time being spent surfing and discovering the latest updates on several issues, including their health. Females tend to be highly attracted to dental aesthetics promoted through social media marketing. Their decision making is highly affected by the repeated advertisements displayed on different social media websites. [10,11] The notion of receiving a Hollywood smile has been inspired by the beauty displayed by movie actors, which has influenced the lives of the young generation. The major source of this excitement has been none other than social media. Constant exposure of personal life and constant sharing of photographs on social media have also played an important role in the public choosing various aesthetic dental treatments, especially the Hollywood smile. [12,13] It is important to know how much social media affects the decision making of these young people in Saudi Arabia regarding their dental aesthetics.
Therefore, the present study was conducted to determine the extent of social media's effect on the decision making of university students towards selecting a Hollywood smile as the choice of their aesthetic treatment.

Materials and Methods

This cross-sectional study was conducted by utilizing a self-designed closed-ended questionnaire. Ethical clearance to proceed with the study was obtained from the institutional ethical committee (IRB approval number RC/IRB/2019/29). The participants were undergraduate students from various public as well as private universities in Riyadh city. All undergraduates from the dental specialty were excluded. A total of 1000 students were asked to fill in the questionnaire using their smartphones. Convenience sampling was used to achieve the desired sample size. The questionnaire was constructed online using Google Forms and began with questions related to demographics, including age, gender, name of the university, field of study, total hours spent on social media in general, and the type of social media used. Further questions included: Do social media advertisements affect your attention? Do you notice celebrities' smiles on social media? Have you visited a dentist solely after getting inspired by a celebrity's smile? Are you familiar with the term "Hollywood smile"? The responses were on a 5-point Likert scale ranging from highly dissatisfied or strongly disagree to highly satisfied or strongly agree, wherever applicable. The survey was designed so that each respondent could attempt it only once, using their email account, when the survey was sent through social network applications. Prior to the main survey, the validity of the questionnaire was tested by sending it to experts in research, including a few faculty members of REU. A pilot study was conducted using 20 online questionnaires filled in randomly by university students to assess reliability by calculating Cronbach's coefficient alpha, which was found to be 0.82.
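The reliability check described above (Cronbach's alpha on a pilot sample) can be sketched in a few lines. The formula is the standard one, but the respondent-by-item matrix below is invented purely for illustration, and NumPy is assumed available:

```python
import numpy as np

def cronbach_alpha(items):
    """Cronbach's alpha for an (n_respondents, n_items) matrix of Likert scores."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]                           # number of items
    item_vars = items.var(axis=0, ddof=1)        # per-item sample variance
    total_var = items.sum(axis=1).var(ddof=1)    # variance of the summed scores
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# Hypothetical pilot data: 5 respondents x 3 Likert items on a 1-5 scale
# (the real pilot used 20 questionnaires and yielded alpha = 0.82).
pilot = [[4, 5, 4],
         [2, 2, 3],
         [5, 5, 5],
         [3, 4, 3],
         [1, 2, 2]]
print(round(cronbach_alpha(pilot), 2))
```

A value above roughly 0.7 is conventionally taken as acceptable internal consistency, which the reported 0.82 comfortably exceeds.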
Statistical Analysis

Collected data were transferred from Google Sheets to SPSS version 19, where descriptive as well as inferential statistics were conducted. Comparisons among groups were made using the chi-square test, with significance set at P < 0.05.

Results

A total of 1000 university students, male and female, filled in the online survey, comprising 44% males and 56% females. The participants were grouped on the basis of their specialty: 46% were health sciences students and 54% were students from other specialties, including engineering, business, management, etc. It was found that 15% of participants used social media for 0-2 h per day and 39% for more than 6 h per day. Among the participants, 4% used Facebook, 24% used Instagram, 38% used Twitter, and 34% used Snapchat. A statistically significant difference was observed between male and female participants, with the majority of the latter reporting noticing celebrities' smiles on social media (P = 0.001). Similar findings were seen when participants were asked about their familiarity with the Hollywood smile, which revealed that females were more knowledgeable than males, and this comparison was statistically significant (P = 0.003) [Table 1, Figures 1-3]. Comparison among the different fields of education revealed that health field students tended to be more influenced by social media websites and to refer themselves to the dentist compared with students from other fields (P = 0.001) [Table 2]. No significant difference was found among the study groups on the basis of the number of hours spent daily on social media when asked about ordering online products (P > 0.05). However, participants spending more time on social media were significantly more aware of the term "Hollywood smile" (P = 0.001) [Table 3].
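The chi-square comparisons used throughout these results can be sketched as follows. The cell counts here are invented for illustration only (the paper reports percentages and p-values, not raw contingency tables), and SciPy is assumed available:

```python
from scipy.stats import chi2_contingency

# Hypothetical 2x2 table: gender vs. "noticed celebrities' smiles on social media".
# Counts are illustrative; they only respect the reported 44%/56% gender split
# of the n = 1000 sample, not the study's actual cross-tabulation.
observed = [[310, 130],   # males:   noticed / did not notice
            [480,  80]]   # females: noticed / did not notice

chi2, p, dof, expected = chi2_contingency(observed)
print(f"chi2 = {chi2:.1f}, dof = {dof}, p = {p:.3g}")
```

With a 2x2 table, `chi2_contingency` applies Yates' continuity correction by default; the test compares the observed counts against the expected counts under independence of gender and response.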
Older age group participants were found to be highly influenced by celebrity smiles on social media and to refer themselves to the dentist to receive aesthetic dental treatment (P = 0.001) [Table 4].

Discussion

This study aimed to assess the effect of social media use on the selection of the Hollywood smile among young college-going people in Saudi Arabia. The responses were compared on the basis of gender, field of study, and the number of hours spent on social media on a daily basis. Our analysis showed that young female participants noticed celebrities' smiles on social media more than males, which affected their decision to improve their dental aesthetics, with the difference being statistically significant; similar findings were recorded by Dunlop et al. [14] Previous literature shows that social media utilization was higher in females than males, owing to their emotional behavior. [15] Comparison on the basis of age group suggested that younger participants were more satisfied with their dental appearance than older individuals. Similar findings were observed by Aldaij et al., [16] when they investigated the Saudi population's satisfaction with their dental aesthetics. Influence of celebrity smiles on social media was greater among older participants. However, contrasting findings emerged regarding the term "Hollywood smile", which was slightly better known among younger participants. Health field students had more knowledge about the Hollywood smile in our study. This accords with the study conducted by Aldaij et al., [17] who also concluded that health-related students show better overall knowledge of, and attitudes towards, their aesthetic dental treatments and needs. Dissatisfaction with one's own smile and aesthetics has a negative impact on one's psychology. It can also affect general health.
The contribution of dental aesthetics to the overall appearance of a person cannot be ignored. Cosmetic enhancement may help to improve a person's financial status, as the probability of gaining employment increases. It may render a person more productive at work and can also contribute to the productivity and development of the nation.

Limitations

Since this was a questionnaire study, the actual influence of social media on respondents may or may not be accurately captured, reflecting the inbuilt shortcomings of such studies. This can be attributed to the fact that in questionnaire-based studies there is a likelihood of social desirability ("faking good") bias. [18]

Conclusions

The overall effect of social media on the decision to opt for a Hollywood smile was found to be moderate. Females were more dissatisfied with their smiles and needed aesthetic enhancement more than males. Internet use was also higher among females than males. In the current era, the internet is used worldwide for numerous areas, including dentistry. Digital media is expanding our knowledge faster than any other source. More studies should be conducted to investigate how much social media is affecting the perceptions of youngsters.

Financial support and sponsorship

Nil.

Conflicts of interest

There are no conflicts of interest.
MiR-146a Inhibits IFN-γ Production via Suppressing TLR4/IRAK-1/NF-κB Expression in Pulmonary Arterial Smooth Muscle Cells

Purpose: MicroRNA-146a (miR-146a) can regulate the proliferation of vascular smooth muscle cells and inhibits airway inflammation, but its role in inflammation of pulmonary arterial smooth muscle cells (PASMCs) has not been reported. In this study, we aimed to explore the effect of miR-146a on regulating inflammatory signaling. Methods: Primary PASMCs were separated from rats. Cells were stimulated with lipopolysaccharide (LPS). miR-146a was transfected into cells with a plasmid. miR-146a expression in PASMCs was assessed by real-time PCR. The protein expression of TLR4, phosphorylated IRAK-1, phosphorylated IKK, phosphorylated IκB and NF-κB (P65) in PASMCs was analyzed using western blotting. The level of IFN-γ was detected using ELISA. Results: The protein expression of TLR4, phosphorylated IRAK-1, phosphorylated IKK, phosphorylated IκB and NF-κB (P65) in PASMCs was increased when induced by LPS, and this increase was reversed by miR-146a. The level of IFN-γ in the supernatant of PASMCs was higher in the LPS-treated group than in controls and was decreased in cells overexpressing miR-146a. Conclusion: miR-146a could attenuate LPS-induced IFN-γ production and activation of TLR4, IRAK-1 and NF-κB in PASMCs, which might provide a novel target for the therapy of pulmonary hypertension.

Introduction

Pulmonary hypertension (PH) is a hemodynamic and pathophysiologic syndrome arising from increased blood pressure within the pulmonary arteries, whose prevalence is approximately 10% in the general population. Its prognosis is poor, with a one-year mortality of approximately 15% [1]. Pulmonary arterial smooth muscle cells (PASMCs) participate in PH through activating inflammatory signaling, such as the NF-κB pathway [2,3]. However, the precise mechanisms of inflammation in PASMCs are not very clear.
Cell Culture and Transfection

Male Wistar rats (8-10 weeks old, weighing 280±20 g) were obtained from the experimental animal center of Guilin Medical University. All experimental procedures were approved by the Animal Care and Use Committee of the Affiliated Hospital of Guilin Medical University. Rats were anaesthetized with 5% isoflurane by inhalation in oxygen and killed by cervical dislocation. Small vessels were separated from the 3rd-level or lower artery branches of the pulmonary lobe segments, minced into small pieces and digested with 0.2% type I collagenase for 20 min at 37 °C in a water bath. Digestion was stopped by adding 10% FBS (GIBCO, MA, USA). The primary PASMCs were cultured in DMEM medium containing 10% FBS at 37 °C in 5% CO2. Seven days later, PASMCs at passages 3-6 were used for the experiments. Cells were cultured in serum-free medium for 30 min prior to transfection. The primary PASMCs were identified using immunohistochemistry with α-SM-actin staining (Figure 1). Slides of cells were fixed with 4% paraformaldehyde for 20 min and incubated in 0.6% H2O2 for 30 min to quench endogenous peroxidase activity. The slides were incubated with a primary mouse anti-rat antibody against α-SM-actin (dilution 1:100, BM0002, BOSTER, Wuhan, China) at 4 °C overnight, and then with a horseradish peroxidase-conjugated goat anti-mouse IgG antibody (BA1001, BOSTER, Wuhan, China) at room temperature for 20 min. After three washes with PBS, 3,3'-diaminobenzidine tetrahydrochloride was applied to the slides as a chromogen for 1-5 min, and the slides were then counterstained with haematoxylin for 5-10 min. Transfection of miR-146a was performed with a plasmid (Genechem, Shanghai, China) and Lipofectamine 2000 (Invitrogen, MA, US) according to the manufacturer's instructions. Six hours after transfection, LPS-treated cells were stimulated with LPS (1 μg/ml) (Sigma, MO, US) for 48 hours.
Enzyme-Linked Immunosorbent Assay Enzyme-linked immunosorbent assay (ELISA) was used to detect the level of IFN-γ in cell culture supernatants according to the protocol of the ELISA kit (Elabscience, Wuhan, China). Each sample was assayed in three wells. Briefly, in 96-well plates, 100 μl sample and 100 μl biotinylated detection antibody (50 μl cells and 50 μl Detection Reagent A) were incubated for 1 h at 37 °C, followed by incubation with 100 μl horseradish peroxidase (HRP)-conjugated working solution for 30 min at 37 °C. Subsequently, plates were incubated with substrate solution as a chromogen for 15 min in the dark. The optical density (OD) was measured at 450 nm using a microplate reader (TECAN, Switzerland). Statistical Analysis All statistical analyses were performed using SPSS 21.0 (IBM SPSS Inc., Chicago, IL, USA). Group data are expressed as mean ± standard deviation (SD). Significant differences between two groups were evaluated using an independent-samples t-test, and multiple groups were compared using one-way analysis of variance (ANOVA) followed by the Student-Newman-Keuls test or the Games-Howell test. p-values < 0.05 were considered statistically significant. miR-146a Inhibits TLR4 Expression in PASMCs When PASMCs were transfected with miR-146a, the expression of miR-146a increased about 6-fold at 24 hours and 18-fold at 48 hours (Figure 1A), demonstrating successful transfection of miR-146a. Moreover, the expression of miR-146a was significantly induced by LPS after 24-hour administration (Figure 1B); this effect was time- and dose-dependent. Furthermore, the protein expression of TLR4 in PASMCs was detected after miR-146a transfection. TLR4 expression was increased in the LPS group compared with controls, and this increase was reversed by miR-146a transfection (Figure 2). Thus, miR-146a can inhibit TLR4 expression in PASMCs.
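The comparisons named in the Statistical Analysis section (independent-samples t-test for two groups, one-way ANOVA for multiple groups) can be sketched in Python with scipy. The group values below are hypothetical stand-ins, not the study's measurements, and the SNK/Games-Howell post-hoc tests are not in scipy and would need a dedicated package.

```python
import numpy as np
from scipy import stats

# Hypothetical normalized western-blot band intensities (not study data)
rng = np.random.default_rng(0)
control = rng.normal(1.0, 0.10, 6)
lps = rng.normal(1.8, 0.20, 6)
lps_mir146a = rng.normal(1.2, 0.15, 6)

# Two-group comparison: independent-samples t-test
t_stat, p_t = stats.ttest_ind(control, lps)

# Multi-group comparison: one-way ANOVA (post-hoc SNK or Games-Howell
# would follow in a dedicated package such as statsmodels or pingouin)
f_stat, p_f = stats.f_oneway(control, lps, lps_mir146a)

print(p_t < 0.05, p_f < 0.05)
```

Both tests come out significant here because the hypothetical group means are well separated relative to their spread.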
miR-146a Inhibits IRAK-1 Activation in PASMCs The activation of IRAK-1 in PASMCs was detected by western blotting. The protein expression of phosphorylated-IRAK-1 (Figure 3) was increased when cells were treated with LPS, but was reduced in cells overexpressing miR-146a. These findings suggest that miR-146a can inhibit IRAK-1 activation in PASMCs. miR-146a Inhibits the Secretion of IFN-γ in PASMCs The level of IFN-γ in the supernatant of PASMC culture medium was assessed by ELISA. Figure 7 illustrates that the level of IFN-γ was higher in the LPS group than in controls. In contrast, it was decreased in cells overexpressing miR-146a. These findings indicate that miR-146a can inhibit IFN-γ secretion in PASMCs. Discussion Our study shows that miR-146a attenuates LPS-induced IFN-γ production, TLR4 expression, and activation of IRAK-1 and NF-κB in PASMCs. The present study confirmed our previous finding that LPS induces IFN-γ production [9], and further found that miR-146a significantly inhibits LPS-induced IFN-γ production. In vascular smooth muscle cells, IFN-γ stimulates NF-κB activation, leading to inflammation [10]. These findings suggest that in vascular smooth muscle cells, IFN-γ may be not only an effector of LPS stimulation, but also a stimulator in the process of inflammation, and may play a key role in a positive feedback loop of inflammation. Thus, disrupting this feedback loop is a meaningful strategy for reducing inflammation in PH treatment. miR-146a may be a potential target, since it reduced LPS-induced IFN-γ production in PASMCs in our study. TLR4 is a crucial signaling molecule in promoting inflammation of vascular smooth muscle cells [11][12][13][14]. Our study found that TLR4 expression in PASMCs was increased in the LPS group, and this increase was reversed by miR-146a transfection. miR-146a can regulate TLRs and downstream signaling through TNF receptor-associated factor 6 and IL-1 receptor-associated kinase [15].
Thus, our findings suggest that miR-146a suppresses LPS-induced TLR4 expression in PASMCs. Furthermore, TLR4 activates NF-κB signaling in LPS-induced inflammation of vascular smooth muscle cells from thoracic aortas [16,17]. Similarly, IRAK-1 is also downstream of TLR4 in vascular smooth muscle cells from thoracic aortas [18]. In pulmonary vascular smooth muscle cells, TLR4 can activate IRAK-1/NF-κB signaling [9]. Therefore, we explored the role of miR-146a in the activation of TLR4/IRAK-1/NF-κB signaling in PASMCs in the present study. This study showed that when miR-146a was overexpressed, the LPS-induced activation of IRAK-1 and NF-κB signaling in PASMCs was inhibited. Since TLR4/IRAK-1/NF-κB signaling in PASMCs can be activated by LPS and then lead to IFN-γ production [9], we hypothesized that miR-146a attenuates the activation of TLR4/IRAK-1/NF-κB signaling, resulting in decreased production of IFN-γ. Therefore, miR-146a may be a potential therapeutic target for inflammation of the pulmonary artery, which may provide novel avenues in the therapy of PH. Conclusion In conclusion, miR-146a attenuates LPS-induced IFN-γ production via inhibiting the TLR4/IRAK-1/NF-κB pathway in pulmonary arterial smooth muscle cells, which might provide a novel target for the therapy of PH.
Impact of Resilience on Nursing Students' Perceptions of Stress, Anxiety, and Fear Associated with COVID-19 Pandemic Background: The COVID-19 pandemic negatively affected the mental well-being of nursing students. However, limited research is currently available exploring mental health issues of nursing students. Aim: To investigate the impact of resilience on stress, anxiety, and fear of COVID-19 among nursing students. Methods: A cross-sectional research design was adopted for this study. A total of 268 nursing students from three universities in South India responded to an online survey. Data were collected using self-reported questionnaires in June 2021. Results: The findings revealed that most of the students had a normal level of resilience (3.06 ± 0.39) and low levels of stress (17.88 ± 5.09). The mean scores on fear of COVID-19 (18.31 ± 5.68) and the COVID-19 Anxiety Syndrome Scale (C-19ASS) (21.67 ± 7.42) suggest that around half of the participants had a high level of fear and anxiety. The resilience of the participants was negatively correlated with fear (r = −0.260, p < .001) and perceived stress (r = −0.307, p < .001). Similarly, fear was positively correlated with anxiety (r = 0.211, p < .001) and perceived stress (r = 0.418, p < .001). Conclusion: Our findings showed that nearly 50% of the nursing students had a high level of COVID-19-associated fear and anxiety. Therefore, we suggest that innovative strategies are needed to improve students' resilience and mental health during highly stressful situations such as the COVID-19 pandemic. Introduction The COVID-19 pandemic has affected all aspects of nursing education and healthcare globally. Research has shown that frontline healthcare workers (HCWs) are not immune from the mental health consequences of the COVID-19 pandemic.
[1] In addition, review studies reported a higher prevalence of anxiety, burnout, depression, PTSD, and psychological distress among HCWs compared to the general population. [2,3] Meanwhile, the COVID-19 pandemic has been stressful for nursing students because of disruptions to nursing education, such as a sudden switch from offline to online classes and missed clinical opportunities. These stressful situations may have a detrimental effect on the mental well-being of nursing students due to anxiety, fear, and poor knowledge about COVID-19. [4,5] Pandemic-related mental health issues have become critical in the new era. [6] A recent systematic review and meta-analysis found that during the COVID-19 pandemic, student nurses were more likely to suffer from depression (52%), stress (30%), fear (41%), anxiety (32%), and sleep disturbances (27%). [7] Also, fear is described as an unpleasant mental state produced by the perception of danger in situations such as the COVID-19 pandemic. [8] In a study conducted among nursing students during the SARS outbreak in Hong Kong, nursing students perceived themselves to be at high risk of infection. [9] Therefore, psychological resilience is critical for recovering from such distress and for acquiring internal control, empathy, positive self-concept, organization, and optimism in everyday challenges. [10] Prior research also demonstrated a protective role of resilience against mental health problems such as depression and stress among nursing students. [10,11] Resilience is defined as the ability to overcome adversity and cope effectively with problems faced, which also includes how one learns to develop stronger flexibility from situations encountered. [12−17] Few studies focused on mental health issues such as stress, anxiety, depression, and sleep disturbances.
[18,19] However, limited research exists on the impact of resilience on nursing students' perceived stress, fear, and anxiety associated with the COVID-19 pandemic. While the COVID-19 pandemic made a significant impact on nursing students, most of the studies on COVID-19 in India have focused on examining knowledge and academic concerns [20−22] and on nursing education. [23] Although a few studies [24,25] have examined the quality of life of nursing students in India, little is known about how resilience affects stress, fear, and anxiety during the COVID-19 pandemic. The purpose of this study was to investigate the impact of resilience on stress, anxiety, and fear of COVID-19 among nursing students. Study Design and Population This was a descriptive online cross-sectional survey conducted among conveniently selected nursing students from nursing colleges under three universities in South India. Nursing students who were enrolled in a BSc nursing program were included in this online survey. Nursing students who were unable to access the internet or a smartphone, or who were unfamiliar with completing Google Forms, were excluded from this survey. The data were collected over a period of three weeks, from 4th to 24th October 2021. Based on a G-Power analysis using the mean scores from the pilot study, we estimated the sample size (n = 260) for the present study. Data Collection Tools The online questionnaire included: 1. Socio-demographic Profile. This part of the questionnaire included items to collect background information on the participants such as age, gender, year of education, and name of the university. 2.
The Brief Resilience Scale (BRS) was adapted to measure the perceived resilience of the participants. The BRS consists of six items with a 5-point Likert response scale, ranging from 1 = strongly disagree to 5 = strongly agree. Three items are positively phrased (1, 3, 5) and the other three are negatively phrased (2, 4, 6). The scale has good internal consistency, with a Cronbach's alpha value ranging from 0.80 to 0.91. The possible total score on the BRS ranges from 6 to 30; the sum is divided by the total number of items to obtain the mean score. The level of resilience was classified as low resilience (1.00−2.99), normal resilience (3.00−4.30), and high resilience (4.31−5.00) based on the mean score. [26] 3. The COVID-19 Pandemic-Related Stress Scale (PSS-10-C) was used to assess the nursing students' perceptions of stress during the COVID-19 pandemic. This scale measures the frequency of an individual's feelings and thoughts regarding events perceived as stressful within the past month. The scale consists of 10 items in two domains: "Distress" (items 1, 2, 3, 9, and 10) and "Coping" (items 4, 5, 6, 7, and 8). An example item in the Distress domain is "I have felt affected as if something serious will happen unexpectedly with the epidemic" and in the Coping domain "I have been confident about my ability to handle my personal epidemic-related problems". This is a 5-point Likert scale ranging from "Never" (0) to "Very Often" (4) with four negatively worded items (items 4, 5, 7, and 8). The scale showed high internal consistency, with Cronbach's α = 0.85 (Distress α = 0.83 and Coping α = 0.77). The final score is obtained by summing all item scores, ranging from 0 to 40, with higher scores indicating higher perceived stress; scores below 25 were categorized as low perceived stress. [27] 4. The Fear of COVID-19 Scale (FCV-19S) was used to assess the fear of COVID-19 among the nursing students. This scale consists of seven items (e.g., "I am most
afraid of the coronavirus") with a 5-point Likert response scale, ranging from 1 = strongly disagree to 5 = strongly agree, with a total score ranging from 7 to 35. Higher scores represent greater fear of COVID-19. In the present study, the average item score was used; it was calculated by dividing the total score by the number of items. This scale has good psychometric properties (internal consistency, α = 0.82, and test-retest reliability, ICC = 0.72). [28] 5. The COVID-19 Anxiety Syndrome Scale (C-19ASS) was used to identify the presence of anxiety syndrome features associated with the COVID-19 pandemic. This scale includes nine items in two domains: perseveration (6 items: 2, 4, 6, 7, 8, 9) and avoidance (3 items: 1, 3, 5). The participants were requested to respond on a 5-point Likert-type scale to indicate their level of agreement ("1. Not at all", "2. Rarely, less than a day or two", "3. Several days", "4. More than seven days", and "5. Nearly every day") as it applied to them over the last two weeks. The score ranges from 9 to 45, with higher scores representing greater anxiety associated with COVID-19. The scale demonstrated acceptable levels of reliability for both domains (Perseveration: α = 0.86 and Avoidance: α = 0.77). [29] The English version of the questionnaire was piloted among 30 nursing students and found to be feasible.
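The BRS scoring rule described above (reverse-score the negatively phrased items, average the six responses, classify with the 2.99 and 4.30 cut-offs) can be sketched as follows. The responses are hypothetical, and the negative-item keying (items 2, 4, 6) follows the standard BRS.

```python
# Sketch of BRS scoring as described above (hypothetical responses).
def brs_score(responses, negative_items=(2, 4, 6)):
    """responses: dict mapping item number (1-6) to Likert value (1-5)."""
    # Reverse-score negatively phrased items, then average all six items
    adjusted = [6 - value if item in negative_items else value
                for item, value in responses.items()]
    mean = sum(adjusted) / len(adjusted)
    if mean <= 2.99:
        level = "low"
    elif mean <= 4.30:
        level = "normal"
    else:
        level = "high"
    return mean, level

mean, level = brs_score({1: 4, 2: 2, 3: 4, 4: 3, 5: 4, 6: 2})
print(round(mean, 2), level)  # 3.83 normal
```

The example respondent agrees with the positive items and disagrees with the negative ones, which after reverse-scoring lands in the "normal" band (3.00 to 4.30) reported for most participants.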
Data Collection Procedure We sent invitations along with an information sheet and a link to the questionnaire via WhatsApp to the student groups after obtaining written permission from the nursing colleges. The majority of students in each nursing college have WhatsApp groups to stay updated with any new information. The introductory part of the Google Form included brief information on the background, aim, objectives, procedures, voluntary nature of participation, and declarations of anonymity and confidentiality. The participants were told that the survey would take approximately 20 minutes to complete. Those who provided consent to participate in the study were asked to continue to fill out the form. Participants who consented to participate had to click on a proceed button to indicate they had read and agreed to the study's consent form. In order to obtain sufficient responses, the questionnaire was re-posted weekly for two months between June 2021 and July 2021. Data entries with the same electronic ID were deleted in order to prevent duplication (the same student giving more than one answer). Ethical Consideration The research proposal was approved by the Institute Ethics Committee (No. NIMHANS/30th IEC(BEH.SC.DIV)/2021). The researchers obtained permission from the original authors of the questionnaires to use them in this study. The researchers also obtained formal permission from the concerned authorities for data collection. The students were informed that participation or non-participation in this survey would not affect their academics in any way. Online consent was taken from the participants. The participants were requested to click on the Google link consent page after reading and marking their agreement to complete the questionnaire.
Statistical Analysis The data were analyzed using appropriate statistical software (SPSS version 21) and the results are presented in tabular form. Descriptive statistics such as frequency, percentage, mean, and standard deviation were performed. The data in this study were normally distributed (Shapiro-Wilk test). Inferential statistics (independent t-test, one-way analysis of variance, and Pearson correlation analysis) were used to examine the relationship between socio-demographic variables and participants' resilience, stress, fear, and anxiety associated with the COVID-19 pandemic. The level of significance was fixed at 0.05. Results In the present study, there were 268 participants, 88.8% of whom were female. The mean age of the participants was 20 years (SD, 1.25). More than half (51.5%) of the participants were aged 20 years or younger. Most of the students (67.2%) were from "A" university (Table 1). While a majority (65.7%) of the students demonstrated a normal level of resilience, 32.8% of the students had a low level of resilience. Similarly, the total mean score on the BRS (18.37 ± 2.34) suggests that a majority of the participants held a normal level of resilience (3.00 to 4.30). The mean score on the FCV-19S was 18.31 (SD, 5.68), suggesting that 52% of the participants showed a high level of fear. Regarding participants' stress levels, the mean score on the PSS-10-C was 17.88 (SD, 5.09) and 6.7% of the participants perceived their level of stress as high (>25). According to the mean scores on the PSS-10-C subscales "Distress" (9.05 ± 3.83) and "Coping" (8.83 ± 4.17), 45% of the participants were distressed and 44% of them were able to cope with the COVID-19 pandemic effectively. The mean scores on the C-19ASS subscales "perseveration" (13.82 ± 5.28) and "avoidance" (7.85 ± 3.06) suggest that 46% of the participants had perseverative thinking and more than half of them possessed avoidance behaviors (52%). The overall mean score on the C-19ASS was 21.67 (7.42) and, based on the
median (21), 48.1% of the participants were anxious about acquiring COVID-19 infection (Table 2). Table 3 shows the correlations between the scores of the employed scales. The resilience of the participants was negatively correlated with fear (r = −0.260, p < .001) and perceived stress (r = −0.307, p < .001). Similarly, there were positive correlations between fear and anxiety (r = 0.211, p < .001) and between fear and perceived stress (r = 0.418, p < .001). Although anxiety was negatively correlated with perceived stress, the correlation was not significant. Nursing students above 20 years were significantly more distressed (t = −2.455, p < .015) and had greater fear of COVID-19 (t = −2.918, p < .04) than others. A one-way ANOVA test revealed a significant association between the students' resilience and their year of education, with fourth-year students scoring higher on the resilience scale (F = 2.793, p < .041). Similarly, significant differences were found between the universities on scores related to fear of COVID-19 (F = 13.28, p < .001) and anxiety (F = 4.424, p < .013) (Table 4). Discussion The COVID-19 pandemic had an impact on everyone's daily lives, with negative consequences for the mental well-being of nursing undergraduates. The present study aimed to examine the influence of resilience on perceived stress, anxiety, and fear of COVID-19 among nursing students. The findings suggest that a majority of the participants possessed a normal level of resilience. More than half of the participants showed a high level of fear, and 6.7% of them perceived that their level of stress was high. A negative correlation was found between resilience and fear and anxiety, while a positive correlation was found between perceived stress and anxiety in relation to the fear of COVID-19 (Table 5). There is mounting evidence on the psychological impact of the pandemic among the younger population.
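The correlations reported above are Pearson coefficients. As a minimal sketch, the snippet below computes one with scipy on hypothetical resilience and stress scores constructed to correlate negatively; the numbers are illustrative, not the study data.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
n = 268  # same sample size as the study; the values themselves are invented
resilience = rng.normal(3.06, 0.39, n)
# Construct stress scores with a built-in negative dependence on resilience
stress = 25 - 4.0 * resilience + rng.normal(0, 4.0, n)

r, p = stats.pearsonr(resilience, stress)
print(r < 0, p < 0.05)  # a negative, significant correlation
```

With this construction the coefficient comes out moderately negative, comparable in direction to the resilience-stress correlation of r = −0.307 in the study.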
[30,31] In our study, the mean score on the FCV-19S was 18.31 (SD, 5.68), which suggests a high level of fear among 52% of the participants. This finding corresponds to previous research. [32,33] However, the mean score was lower than in earlier studies conducted among nursing students (25.71 ± 6.90) [5] and nurses (23.64 ± 6.85). [34] Published research reports that mental health issues can be mitigated through resilience, adaptive coping strategies, and the presence of social support. Similar to the findings of earlier research, [35−38] the resilience of the participants in this study was negatively correlated with fear (r = −0.260, p < .001) and stress (r = −0.307, p < .001). These findings are in line with a recent study conducted among frontline nurses, which showed a significant negative correlation between burnout and resilience, suggesting the protective role of resilience in alleviating burnout during this pandemic. [39] Also in our study, the resilience level differed significantly by year of study, as fourth-year students had a higher resilience level. It could be argued that senior students had more exposure to clinical experiences, which allowed them to be more adaptable to life-altering situations. [5,40,41] These findings suggest the necessity of strengthening the resilience level among nursing students. Hence, nurse educators must conduct resilience-training programs to improve students' ability to respond to stressful situations like the COVID-19 pandemic. [42,43] Also, a few studies suggest that developing an educational culture of trustworthiness may facilitate the development of resilience in nursing students. [44−47] Therefore, nursing educators must explore the supportive systems of the students and assist them in strengthening their resilience by being role models and forming strong, caring, and supportive relationships.
[48] In the present study, the total mean score on the PSS was 17.88 (SD, 5.09). This score was lower than the finding of an earlier study (31.69 ± 6.91), which indicated a moderate level of stress among nursing students. [14] Healthcare providers constantly report anxiety associated with COVID-19. In our study, nearly half of the participants were anxious about acquiring coronavirus infection. Similarly, 46% of the participants had perseverative thinking and more than half of them possessed avoidance behaviors (52%). Our findings are compatible with results in the literature that report a high level of COVID-19-related anxiety among nursing students. [49,50] Limitations The present study has certain limitations, such as the cross-sectional survey design, an online survey using self-reported questionnaires, and a convenience sample from three universities. The study only investigated the correlation, not the causal impact, of resilience on stress, fear, and anxiety associated with COVID-19. Additionally, self-reported questionnaires may result in response bias. The present study nevertheless contributes to a better understanding of mental health issues among nursing students during the COVID-19 pandemic. The study findings may help nurse educators develop coping skills and resilience through innovative and theory-based interventions among nursing students. Conclusion Our study found that the nursing students had a normal level of resilience. While students perceived that their level of stress was low, more than half of the students had a high level of fear and anxiety about acquiring coronavirus infection. Nonetheless, students' resilience level was inversely correlated with fear and stress associated with the COVID-19 pandemic. Hence, nurse educators need to develop and implement innovative strategies to promote students' resilience and improve their mental well-being during highly stressful situations. Table 1 .
Socio-demographic Details of the Participants. Table 2. Frequency and Percentage of Nursing Students' Resilience, Fear, Stress, and Anxiety Associated with COVID-19. Table 4. Association of Stress, Fear, Resilience, and Anxiety with Socio-demographic Variables. Table 5. Correlation Between Resilience and Fear, Stress, and Anxiety Associated with COVID-19 Pandemic.
CD19+CD23+ B cells, CD4+CD25+ T cells, E-selectin and interleukin-12 levels in children with steroid sensitive nephrotic syndrome Background and methods Soluble lymphocyte subsets (sCD19+CD23+ B cells and sCD4+CD25+ T cells), a soluble adhesion molecule (sE-selectin) and interleukin-12 (sIL-12) were assayed to evaluate the pathogenesis of steroid sensitive nephrotic syndrome in 48 patients diagnosed with steroid sensitive nephrotic syndrome (SSNS) in active (AS) and remission stages (RS). Results The ratios of soluble CD19 and sCD19+CD23 were increased in patients with AS with respect to patients with RS and controls (p < 0.05). Increased sCD19+CD23 ratios were preserved in the patients with RS when compared with the controls (p < 0.05). The ratios of sCD4+CD25 lymphocyte subsets were not significantly different among the groups. Similarly, serum sIL-12 levels were not significantly different between AS and RS. Serum sE-selectin levels were higher in the patients with AS relative to the controls (p < 0.01) and RS (p < 0.05). No significant correlations were noted between sE-selectin and lymphocyte subset ratios, serum sIL-12 or immunoglobulin levels. There was a positive correlation between sE-selectin and triglyceride (r = 0.757, p < 0.0001) and cholesterol (r = 0.824, p < 0.0001) levels in patients with AS. Conclusion The present results indicate that patients with SSNS appear to have abnormalities in sCD23+CD19+ cells, a defect in T regulatory cell activity, and endothelial cell injury as indicated by the presence of high sE-selectin. These abnormalities might play a role in the pathogenesis of nephrotic syndrome. sIL-12 seems to have no role in the pathogenesis of nephrotic syndrome, reflecting a normal Th1 response. Background Idiopathic nephrotic syndrome (INS) is the most prevalent kidney disease in children.
Persistent immunogenic stimuli (such as viral infections, immunizations or allergens) can trigger nephrotic relapses in most of these patients. A primary immune disturbance is thought to be responsible for the pathogenesis of nephrotic syndrome in childhood. Various studies have attempted to identify potential abnormalities in lymphocyte subsets, reporting that during relapses the subsets of CD4+ and CD8+ T cells expanded and the levels of their cytokines (interleukin-2, IL-4 and interferon-γ) increased in patients with nephrotic syndrome, but reports regarding these measurements are conflicting [1][2][3][4]. Although steroid sensitive idiopathic nephrotic syndrome is a T lymphocyte mediated disorder, the pathogenetic role of B lymphocytes, the effect of cytokines and vascular endothelial dysfunction have not been well established in nephrotic syndrome. Therefore, in the present study we aimed to investigate the serum levels of soluble lymphocyte subsets (sCD19+CD23+ B cells and sCD4+CD25+ T cells), a soluble adhesion molecule (sE-selectin) and interleukin-12 in patients with steroid sensitive nephrotic syndrome (SSNS). Patients and control subjects We included 48 patients diagnosed with SSNS (32 boys, 16 girls; age range 30-202 months) in the present study. The control group contained 19 healthy individuals (12 boys, 7 girls; age range 27-190 months). The patients were divided into two groups: 28 (58.3%) patients (20 boys, 8 girls) in the active stage (AS) at the time of diagnosis were grouped as Group 1, and 20 (41.7%) patients (12 boys, 8 girls) in the remission stage (RS) were grouped as Group 2. Blood samples were collected before steroid treatment in Group 1. Patients who did not respond to steroid treatment were excluded from the study. The patients in Group 2 were selected from among the steroid sensitive nephrotic patients in the remission stage. The mean duration of treatment with steroids was 28 weeks in Group 2.
Patients showing complications of nephrotic syndrome including infection, thromboembolism or osteoporosis, or receiving blood transfusions, immunosuppressive agents such as cyclosporin and cyclophosphamide, angiotensin-converting enzyme inhibitors, non-steroidal anti-inflammatory drugs or anti-histamines were excluded from the present study. The active stage was defined as increased urinary protein excretion (>40 mg/m2/h on a timed sample or >3+ by dipstick for 3 consecutive days, spot albumin to creatinine ratio >2 mg/mg) and hypoalbuminaemia (<2.5 g/dl). The remission stage was defined as urinary protein excretion <4 mg/m2/h, or nil or trace by dipstick on a spot sample for 3 consecutive days. Study protocol The serum levels of E-selectin and IL-12 p40 were measured in the patients with AS before steroid treatment, in the patients with RS and in the controls using commercially available kits (BioSource International, Inc., Camarillo, California 93012, USA). Assays were performed using solid phase sandwich ELISA. The blood samples for sE-selectin and sIL-12 were kept at −70 °C until the time of assay. Hemoglobin, erythrocyte count, platelet count, fibrinogen, total protein, cholesterol, triglycerides and albumin concentration were measured using standard laboratory methods. Statistical analysis Data were analyzed using the SPSS for Windows package. All ranges quoted represent the standard error or deviation. The Mann-Whitney U-test, χ2 test and Spearman's test were used for analysis. A p value <0.05 was considered statistically significant. Ethics The current study was approved by the Research Ethics Committee of Eskişehir Osmangazi Medical Faculty, Eskişehir Osmangazi University. Informed consent was obtained from the parents or guardians of the patients and control subjects. Results Overall, IgG levels decreased and IgM levels increased in patients with AS with respect to the controls and the patients with RS.
Serum IgE levels were also increased in patients with AS with respect to the patients with RS and the controls. Increased levels of IgE were sustained in the patients with RS when compared with the controls. The immunoglobulin levels are illustrated in Table 1. Subset ratios of soluble CD3 and CD8 lymphocytes were similar in all study groups. The subset ratio of soluble CD4 lymphocytes decreased in patients with AS with regard to the patients with RS. Moreover, the ratios of soluble CD19 and sCD19+CD23 were increased in patients with AS in comparison to those with RS and controls (Table 2). Increased ratios of sCD19+CD23 were present in patients with RS when compared with the controls. In addition, sCD4+CD25 lymphocyte subset ratios were not notably different between the groups (Table 2). The ratios of lymphocyte subsets are summarized in Table 2. In addition, serum sIL-12 levels were not considerably different between the two groups (Table 3, Figure 1). Serum sE-selectin levels were higher in patients with AS than in controls and patients with RS (Table 3, Figure 1). No significant correlations were noted between sE-selectin and lymphocyte subset ratios, serum sIL-12 or immunoglobulin levels. There was a positive correlation between sE-selectin and triglyceride (r = 0.757, p < 0.0001) and cholesterol (r = 0.824, p < 0.0001) levels in Group 1 (Figure 2A and B). Discussion The pathogenesis of INS is currently considered immune mediated, with particular T lymphocyte involvement. Impairment in immunoglobulin isotype switching, which is strongly T lymphocyte-dependent, has been shown [5]. Similar to earlier studies, we measured markedly decreased IgG levels and considerably increased IgM levels during AS. However, in the present study these alterations did not persist statistically in the patients with RS. According to our results, the low IgG levels might be related to B cell disorders.
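The methods above name the Mann-Whitney U test for group comparisons and Spearman's test for correlations such as sE-selectin versus triglycerides. The sketch below runs both with scipy under stated assumptions: the group sizes mirror the study (19 controls, 28 active-stage patients) but all values are hypothetical.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
# Hypothetical sE-selectin levels (ng/ml): controls vs. active-stage patients
controls = rng.normal(40, 8, 19)
active = rng.normal(70, 12, 28)

# Nonparametric two-group comparison, as named in the methods
u_stat, p_u = stats.mannwhitneyu(active, controls, alternative="two-sided")

# Hypothetical paired triglyceride/sE-selectin values built to correlate
triglycerides = rng.normal(300, 80, 28)
e_selectin = 0.2 * triglycerides + rng.normal(0, 10, 28)
rho, p_rho = stats.spearmanr(e_selectin, triglycerides)

print(p_u < 0.05, rho > 0)
```

Because the invented active-stage values sit well above the control range, the U test is significant, and the constructed dependence yields a strongly positive Spearman rho, matching the direction of the reported sE-selectin/triglyceride correlation.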
We found low IgG levels together with high CD19 and sCD19 + CD23+ B cell ratios, which reflect an increase in the number or activity of B cells (Table 2). These findings suggest a defect in IgG production despite an increased number of B cells. The immune system deficiency in INS patients seems to be associated with an excessive Th2 lymphocyte response. According to our findings, the dominant T cells appear to be CD4+ T lymphocytes (Table 2). By contrast, Lama et al. reported an imbalance of the CD4+/CD8+ T lymphocyte distribution in favor of CD8+ T lymphocytes in relapse and remission in INS [6,7]. The similar ratios of CD8+ T cells in AS, RS and controls in the present study also suggest that CD8+ T lymphocytes are not the dominant lymphocytes in patients with nephrotic syndrome. Our observation is further supported by another study showing activation of the NFκB and c-maf transcription factors in CD4+ T lymphocytes during relapse [8]. Despite the increased CD4+ T cells, sCD4 + CD25+ T cells were not increased in our patients with AS. The absence of increased sCD4 + CD25+ T cells might result in an increased Th2 response. Indeed, the increased IgE levels and the similar sIL-12 levels in AS, RS and the controls indicate a Th2 cell response in our patients with AS. Taken together, the current study suggests that an impairment of T regulatory cells (sCD4 + CD25+ T cells) is present in nephrotic patients. B lymphocyte anomalies have not been well studied so far in children with nephrotic syndrome. Our study suggests that the lymphocyte impairments in nephrotic syndrome are not limited to T cells. We found greater B lymphocyte expansion in AS and RS than in the controls, as indicated by increased sCD19 + CD23+ B lymphocytes and decreased IgG levels (Table 2). This finding prompted us to think that reduction of B lymphocytes could lower relapse rates in patients with nephrotic syndrome.
The therapeutic effects of rituximab, a chimeric anti-CD20 monoclonal antibody that suppresses B cells in the peripheral circulation, in nephrotic patients support our findings: (i) relapse rates following rituximab treatment may depend on the recovery of B cells during the long-term course [9-11]; (ii) moreover, rituximab might increase Treg frequency and number in patients with INS [12]. We did not find an increased ratio of sCD4 + CD25+ T cells in patients with AS. Therefore, the increase of sCD4 + CD25+ Treg cells with rituximab supports our finding that Treg levels, including sCD4 + CD25+ T cells, are insufficient in patients with nephrotic syndrome. On the other hand, sCD19 + CD23+ B cells are related to allergic disorders [13]. sCD23 activation mediates IgE regulation, differentiation of B cells, activation of monocytes, and antigen presentation [14]. We found increased ratios of sCD19 + CD23+ B cells and serum IgE levels in AS and RS. These findings suggest that atopy and high IgE levels might not be related only to the Th2 response in our patients. They also suggest that sCD19 + CD23+ B cells and the associated increase in serum IgE could be related to relapse of nephrotic syndrome, given the persistently high ratio of sCD19 + CD23+ B cells and IgE levels in patients in remission. sIL-12 is known to be a master regulator of the Th1 response and cell-mediated immunity, and it can also up-regulate the production of vascular permeability factor in INS [15]. Therefore, sIL-12 has been implicated in the pathogenesis of INS [15,16]. Despite the in vitro data obtained from culture supernatants, there is, as far as we are aware, not enough information on serum levels of sIL-12. We found that serum sIL-12 levels were statistically similar in AS, RS and the controls. There was also no correlation between sIL-12 and the lymphocyte subpopulations or serum IgE levels in our study.
According to these findings, sIL-12 seems to have no role in the pathogenesis of nephrotic syndrome in our patients. Macrophages and monocytes produce sIL-12 as an early response to antigenic stimuli; therefore, we think that in vitro sIL-12 production by lipopolysaccharide-stimulated peripheral blood mononuclear cells of INS patients may not be specific for nephrotic syndrome [17]. Indeed, GATA-3 (Th2-specific transcription factor)-related Th2 cytokines have been shown to negatively influence the production of sIL-12 in patients with INS [18,19]. Adhesion molecules mediate the initial rolling of inflammatory cells along endothelial cells and platelets in response to pathological processes. However, the role of adhesion molecules in the pathophysiology of nephrotic syndrome in children is not well known. Unlike other adhesion molecules, E-selectin is synthesized only by endothelial cells activated by interleukin-1 or tumor necrosis factor-α [20,21]. Thus, sE-selectin could be a candidate marker for the detection of endothelial injury in nephrotic syndrome. We found that sE-selectin levels were increased in patients with AS and returned to normal after treatment. To the best of our knowledge, these are the first reported findings on sE-selectin in children with nephrotic syndrome. We think that the endothelial injury is not related to the Th2 response or B lymphocytes, given the lack of any relationship between sE-selectin and lymphocyte subsets in our nephrotic patients with AS. These findings also cannot be explained by IL-12 levels, which were not increased in our patients with AS, even though IL-12 has been demonstrated to increase sE-selectin ligands on T lymphocytes [22]. The possible reason for the increased E-selectin level might be the presence of hyperlipidemia, which is reported to enhance the secretion of IL-6 and TNF-alpha [23]. On the other hand, hypercholesterolemia has been reported to increase superoxide anion production in endothelial cells [24].
In fact, we found that cholesterol and triglyceride levels were positively correlated with E-selectin levels in our patients. Hyperlipidemia thus seems to be associated with endothelial damage. Glucocorticoids are of proven benefit in the treatment of proteinuria in patients with SSNS. We found that sE-selectin levels decreased but high ratios of sCD19 + CD23+ B cells persisted with steroid therapy in patients with RS. The present findings suggest that steroid therapy can improve endothelial cell function but appears to fail to regulate B cell expansion in patients with RS. We think that the expansion of sCD19 + CD23+ B cells might contribute to the continuation of the immune response in patients with RS. The implication of B cells in SSNS remains to be investigated in detail in future studies.

Conclusions

In summary, the present study suggests that patients with SSNS have abnormalities in their sCD23 + CD19+ B cells and show endothelial injury with high E-selectin levels. Furthermore, sIL-12 seems to have no role in the pathogenesis of SSNS, reflecting a normal Th1 response. The present findings also imply that patients with SSNS have a T regulatory cell defect, as indicated by a normal sCD4 + CD25+ T cell ratio despite the increased immune response, including high levels of IgM, IgE and CD19 and expanded sCD23 + CD19+ B cells.
Aging and death-associated changes in serum albumin variability over the course of chronic hemodialysis treatment

Background

Several epidemiological studies have demonstrated associations between variability in a number of biological parameters and adverse outcomes. As the variability may reflect impaired homeostatic regulation, we assessed albumin variability over time in chronic hemodialysis (HD) patients.

Methods

Data from 1346 subjects who received chronic HD treatment from May 2001 to February 2015 were analyzed according to three phases of HD treatment: post-HD initiation, during maintenance HD treatment, and before death. The serum albumin values were grouped according to the time interval from HD initiation or death, and the yearly trends for both the albumin levels and the intra-individual albumin variability (quantified by the residual coefficient of variation: Alb-rCV) were examined. The HD initiation and death-associated changes were also analyzed using generalized additive mixed models. Furthermore, the long-term trend throughout the maintenance treatment period was evaluated separately using linear regression models.

Results

Albumin levels and variability showed distinctive changes during each of the 3 periods. After HD initiation, albumin variability decreased and reached a nadir within a year. During the subsequent maintenance treatment period (interquartile range = 5.2–11.0 years), the log Alb-rCV showed a significant upward trend (mean slope: 0.011 ± 0.035 /year), and its overall mean was -1.49 ± 0.08 (equivalent to an Alb-rCV of 3.22%). During the 1–2 years before death, this upward trend clearly accelerated, and the mean log Alb-rCV in the last year of life was -1.36 ± 0.17. The albumin levels and variability were negatively correlated with each other and exhibited exactly opposite movements throughout the course of chronic HD treatment.
Different from the albumin levels, albumin variability was not dependent on chronological age but was independently associated with an individual’s aging and death process.

Conclusion

The observed upward trend in albumin variability seems to be consistent with a presumed aging-related decline in homeostatic capacity.

Introduction

Several observational studies on blood pressure, blood glucose, and blood hemoglobin have shown associations between a high variability in these parameters and an adverse outcome [1-6]. We recently examined the variability of these and other parameters in routine blood examinations of hemodialysis (HD) patients and found that such associations are not limited to only a few parameters. Variability in urea nitrogen, sodium, hemoglobin, creatinine, albumin, potassium, phosphate and other parameters was often, but differently, associated with a poor survival outcome, impaired mobility, and other markers of a poor prognosis, including hypoalbuminemia and hyponatremia [7]. Some studies have shown an elevated intra-individual variability in these laboratory parameters in patients with certain chronic diseases [8-11]. In addition, for patients with chronic kidney disease (CKD), the variability of hemoglobin and blood pressure has been reported to increase according to the CKD stage [12-14]. These observations in general indicate that variability is related to an unhealthy status. In this regard, it should be noted that frailty, an aging-related and unhealthy condition, is often described as "a syndrome associated with a limited capacity to maintain homeostasis" [15,16]. Similar to the extent of variability, the prevalence of frailty is known to be high in populations with chronic diseases, particularly advanced CKD [17,18]. Considering these facts, a diminished homeostatic control is likely reflected by an elevated variability of some biological parameters.
If our assumption is correct, the magnitude of the variability should increase with aging or a deterioration in health conditions [19]. In this study, we examined the longitudinal changes in serum albumin variability during the course of maintenance HD treatment.

Study cohort

A total of 1346 patients (31.1% female, 42.1% diabetic) received chronic HD treatment for more than 6 months between May 2001 and January 2015 (study period) at 4 outpatient HD facilities in Saitama-City, Japan. Most of them underwent long-term HD treatment, and the median final HD duration within the study period was 7.6 years (interquartile range = 4.0-13.7 years). We retrospectively analyzed the serum albumin dynamics in appropriately selected cohort subsets during 3 phases of maintenance HD treatment: i) the post-HD initiation period, ii) during maintenance HD treatment, and iii) before death (Fig 1).

Post-HD initiation subcohort. Of our cohort, 520 patients had started regular HD treatment at one of the study facilities within 3 months of the first dialysis treatment session occurring during the study period. The mean (± SD) patient age at HD initiation was 62.6 ± 14.3 years; 29.6% of the patients were female, and 48.8% were diabetic.

Before death subcohort. A total of 325 patients died while being treated at the study facilities during the study period or within 3 months of their last HD treatment at the study facilities during the study period. The mean (± SD) patient ages at HD initiation and at death were 63.6 ± 13.8 and 73.7 ± 10.5 years, respectively. The mean HD duration at the time of death was 10.1 ± 7.4 years.

During maintenance HD treatment subcohort. Based on our analyses of the post-HD initiation and before death subcohorts, we defined the maintenance period of HD treatment for each patient as beginning 1 year after HD initiation and ending 2 years before the censored time (transfer, death, or the end of the study period).
For each patient, only calendar years that contained 6 or more serum albumin determinations performed during more than 6 months of HD treatment within the year were regarded as containing sufficient data, and the data for these calendar years were included in the study. To determine the long-term trends in albumin levels and variability in a reliable manner, the subjects were limited to 571 patients who had data for more than 4 eligible calendar years during the maintenance period of HD treatment. Consequently, most of the subjects were long-term survivors, and the mean length of the maintenance period was 7.8 ± 2.8 years. The HD duration at the middle point of the maintenance period was 8.9 ± 6.6 years (Fig 1). This retrospective observational study was approved by the institutional ethics committee of Hakuyukai Medical Corporation (approval number: 27-001) and was conducted in accordance with the principles of the Declaration of Helsinki. Informed consent was obtained from all the patients who underwent HD treatment in the facilities during 2015.

Data collection and calculations

All the subjects underwent a regular blood examination twice a month. During these regular examinations, the serum albumin level was measured 6-8 times per year until the end of 2006 and 24 times per year thereafter. The blood samples from the study facilities were analyzed at a single external laboratory, and the serum albumin level was measured using the bromocresol green method. The laboratory and demographic data were retrieved from electronic databases. As serum albumin levels and other blood parameters exhibit seasonal changes [20], the albumin levels and their variability were estimated, in principle, on a yearly basis for each subject. A period of 1 year was determined by either the calendar time or the interval from the date of HD initiation or death.
If the number of albumin determinations per year was less than 6 for an individual patient, the data for that year were excluded from the yearly analysis. The albumin level was represented by its yearly mean value, abbreviated as Alb-M. Albumin variability was defined by the coefficient of variation (CV = standard deviation/mean) or the derived coefficient, the residual CV (= residual standard deviation/mean) [5,21] (Fig 2). As the serum albumin level showed a significantly increasing (or decreasing) trend during the periods following HD initiation or prior to death, the CV could overestimate the variability during these periods. Therefore, we eventually used the residual CV as the index of variability in this longitudinal study. The residual CV of serum albumin (Alb-rCV) was log10-transformed to normalize its distribution for the statistical analysis.

Statistical analysis

All the analyses were performed in R 3.1.2. (R Core Team, 2014) using the gplots, mgcv, and gamm4 packages. The changes in the albumin levels (Alb-M) and the variability (log Alb-rCV) during the post-HD initiation and before death periods were statistically assessed on a yearly basis for patients who had received continuous HD treatment for more than 3 years during either period. Differences between years were detected at a significance level of 0.05 by pairwise comparisons using paired t-tests with Holm's adjustment. In addition to the yearly analysis, shorter-term changes in albumin levels were assessed using generalized additive mixed models [22]. We fitted all the measured albumin values with a random-intercept model including the time interval as a fixed effect and the patient identifier as a random effect. In a similar fashion, temporal changes in albumin variability were assessed by fitting moving CV values (instead of log residual CV values) with the same model.
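The residual CV defined above amounts to detrending each within-period series with a linear fit and dividing the residual standard deviation by the mean. A minimal sketch (illustrative only, not the authors' R code):

```python
import numpy as np

def residual_cv(days, albumin):
    """Residual coefficient of variation: the standard deviation of the
    residuals around a within-period linear trend, divided by the mean.
    Using residuals instead of raw values avoids counting a steady rise
    or decline (e.g. after HD initiation) as 'variability'."""
    days = np.asarray(days, dtype=float)
    albumin = np.asarray(albumin, dtype=float)
    slope, intercept = np.polyfit(days, albumin, 1)   # linear trend
    residuals = albumin - (slope * days + intercept)
    return residuals.std(ddof=1) / albumin.mean()

# A declining series with small fluctuations: the plain CV is inflated
# by the trend, while the residual CV reflects only the fluctuations.
days = np.arange(0, 360, 15)
alb = 3.8 - 0.001 * days + np.array([0.02, -0.03, 0.01, 0.0, -0.02,
      0.03, -0.01, 0.02, -0.02, 0.01, 0.0, -0.01, 0.02, -0.03, 0.01,
      0.0, -0.02, 0.03, -0.01, 0.02, -0.02, 0.01, 0.0, -0.01])
plain_cv = alb.std(ddof=1) / alb.mean()
print(f"CV = {plain_cv:.4f}, residual CV = {residual_cv(days, alb):.4f}")
```

On trending data the residual CV comes out smaller than the plain CV, which is exactly why the residual form was preferred for the periods around HD initiation and death.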
The moving CV (= moving standard deviation / moving average) was calculated for a moving window containing 3 consecutive albumin values and was treated as a variability index on the middle day. While a moving CV mirrors temporal changes better than a yearly calculated Alb-rCV, it can lead to an overestimation of the variability if the source data have a continuous trend, particularly when they are sampled at a long interval. Therefore, only moving CV values for the year 2007 or thereafter were used in the model. In this setting, a constant decrease in albumin values from 3.50 to 3.24 g/dL over one year was estimated to increase the CV by 0.26%. An individual's general trend for Alb-M (or log Alb-rCV) during the maintenance period of HD treatment was estimated using a linear regression model fitted to the yearly calculated Alb-M (or log Alb-rCV) values. As the ends of the maintenance period usually did not coincide with those of the calendar years, the first or last calendar year of the maintenance period contained fewer albumin data points. Considering the lower reliability of the Alb-M and log Alb-rCV values in these boundary years, both values were weighted by the duration of HD treatment within the year; if the duration was shorter than 6 months, the values were discarded. The slopes of the obtained regression lines were then subjected to a one-sample t-test to determine whether their mean value was significantly different from zero.

Serum albumin dynamics before death

As previously reported, the albumin levels of individual patients often showed a downward trend before death. At the same time, we noticed an increase in fluctuations toward death in several subjects. A representative case is shown in Fig 2. The levels of the annual mean albumin (Alb-M) and the log Alb-rCV in the years preceding death are shown in Fig 3. Alb-M demonstrated a visible downward trend, and this trend steepened as death neared.
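The moving CV described in the Methods (SD/mean over a sliding window of 3 consecutive albumin values, indexed to the middle sample) can be sketched as follows; the series is a synthetic example, not patient data:

```python
import numpy as np

def moving_cv(values, window=3):
    """Moving CV over a sliding window of consecutive albumin values
    (sample SD / mean per window). Each value is conceptually indexed
    to the middle sample of its window. Illustrative sketch only."""
    values = np.asarray(values, dtype=float)
    out = []
    for i in range(len(values) - window + 1):
        w = values[i:i + window]
        out.append(w.std(ddof=1) / w.mean())
    return np.array(out)

# Synthetic twice-monthly albumin values (g/dL)
alb = [3.5, 3.4, 3.6, 3.5, 3.3, 3.6, 3.4]
cvs = moving_cv(alb)
print(np.round(cvs, 4))  # one CV per 3-sample window
```

Note that a steady trend inside a window inflates the moving CV, which is why the paper restricts moving CV values to the densely sampled period (2007 onward) and quantifies the residual bias.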
On the other hand, the albumin variability expressed as log Alb-rCV showed a contrasting upward trend. When the subjects were categorized according to their final HD duration (= survival time) into 3 groups (less than 4 years, 4 to 8 years, and more than 8 years), the Alb-M and log Alb-rCV levels in the year before death were almost the same in all the groups (Fig 4). When the subjects were divided into 2 groups according to age, the two groups showed clearly different Alb-M levels in the years before death. However, the log Alb-rCV levels were comparable in both groups (Fig 5). These yearly trends in Alb-M and log Alb-rCV, however, might be affected by changes in the patient population as a result of transfers to other care facilities or HD initiation. Therefore, 220 patients who continued to receive treatment at the study facilities for the entire 3 years before their death were selected, and their yearly Alb-M and log Alb-rCV levels were compared (Fig 6). The decrease in Alb-M was statistically significant between the third year before death (YBD) and the second YBD as well as between the second YBD and the first YBD. The increase in log Alb-rCV was significant between the second YBD and the first YBD.

Serum albumin dynamics after HD initiation

A similar analysis was performed for HD patients immediately after the initiation of HD. As shown in Fig 7, the average Alb-M initially increased and then decreased following the start of HD treatment. In contrast, the log Alb-rCV values showed the opposite movement. An analysis of 326 patients who survived and received HD treatment at one of the study facilities for the initial 3 years of their treatment showed a significant increase in Alb-M and a significant decrease in log Alb-rCV between the first and second years after HD initiation (Fig 8).
Analysis with generalized additive mixed models

While the analysis using yearly aggregated data provided solid corroboration for the occurrence of significant changes in the years before death and after HD initiation, it blunted shorter temporal changes. To identify changes on a shorter time scale, we supplemented the analysis with generalized additive mixed models (see Methods). These models clarified both the HD initiation- and death-associated albumin dynamics.

[Figure caption: HD initiation- and death-associated albumin dynamics estimated using generalized additive mixed models. Upper panels: All the albumin measurements for the post-HD initiation subcohort or the before death subcohort were fitted using a generalized mixed model with the time interval as a fixed variable and the subject ID as a random variable. The dotted lines represent the 95% confidence interval of the fixed term. Lower panels: The moving CV values were calculated using all the albumin measurements that had been recorded in January 2007 or thereafter; these values were fitted using a generalized mixed model in a similar manner. Note that the moving CV values were not log-transformed. The number of analyzed subjects is labeled in each panel.]

Changes during the maintenance period of HD treatment

Finally, we examined the albumin dynamics during the maintenance period of HD treatment, since the influences of HD initiation and of death were likely to be minimal during this period. Based on the results provided above, we defined the maintenance period for each patient by excluding from the entire study period the first year of data following HD initiation and the 2 years of data preceding the censored time, including some margins. Based on the Alb-M values calculated for every calendar year within the maintenance period, a linear regression model was applied to the remaining data for each patient.
To identify intra-individual longitudinal changes in Alb-M and the inter-individual association with age simultaneously, the regression lines were aligned according to each subject's age (Fig 10, left panel). As shown in the right panel, the mean regression slope for Alb-M was relatively small (-0.011 ± 0.04 g/dL/year) but significantly less than zero. If we regard the Alb-M value at the midpoint of the regression line as the overall albumin level for each patient, an association with patient age was apparent. The change in log Alb-rCV was analyzed in a similar fashion (Fig 11). The mean slope of the regression lines for log Alb-rCV was significantly higher than zero (0.011 ± 0.04 /year), equivalent to an increase in the rCV value from 3.1% to 3.7% over 10 years. The Alb-rCV tended to increase with time within individuals, though the midpoint Alb-rCV values (overall variability levels) were not correlated with patient age.

Mutual associations among parameters of albumin dynamics and demographics

The correlations among the overall levels of Alb-M and log Alb-rCV, their slopes, and demographic parameters during the maintenance period were examined (Table 1). Alb-M was negatively correlated with log Alb-rCV and with its slope (upward trend). Subjects with a decreasing trend in Alb-M tended to have an increasing trend in Alb-rCV. Concerning demographic factors, the Alb-M level was associated with a younger age, male sex, and a non-diabetic status, whereas the log Alb-rCV level was poorly associated with these factors. The slope of log Alb-rCV was nevertheless weakly associated with age. The length of the maintenance period was associated with a higher Alb-M level, a higher Alb-M slope (i.e., a slower Alb-M decline), and a lower Alb-rCV level.

Albumin levels

Serum albumin is known to be a strong predictor of outcomes in various conditions and has been proposed as a marker of illness [23-27].
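The maintenance-period trend analysis described above (one linear regression per patient, then a one-sample t-test on the collected slopes) can be sketched as follows, using synthetic per-patient series rather than the study data:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

# Synthetic yearly log Alb-rCV series for many patients, each with a
# small true upward slope plus noise (placeholder values, not study data).
true_slope = 0.011
slopes = []
for _ in range(200):
    years = np.arange(6)
    y = -1.5 + true_slope * years + rng.normal(0, 0.05, 6)
    slope, _intercept = np.polyfit(years, y, 1)   # per-patient regression
    slopes.append(slope)

# One-sample t-test: is the mean of the per-patient slopes different
# from zero, i.e. is there a systematic trend across the cohort?
t_stat, p_val = stats.ttest_1samp(slopes, popmean=0.0)
print(f"mean slope = {np.mean(slopes):.4f}, p = {p_val:.2e}")
```

Fitting each patient separately and then testing the slope distribution keeps the within-patient trend estimate independent of between-patient level differences, which is the point of the two-stage design.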
Several studies have examined longitudinal changes in its levels in chronic HD patients. A small and transient increase in albumin levels following the initiation of HD treatment [28-30] and a more distinctive and accelerating decrease before death [29,31,32] have been reported. Although these results could be affected by the existence of oxidized albumin [33], such peculiar changes in albumin levels during the initial and terminal phases of HD treatment are consistent with our observations and seem to be universal phenomena. On the other hand, albumin dynamics between these two phases are not yet well understood. Rocco et al. reported a mean albumin decline of 0.10 g/dL over 3 years among HD patients who survived for more than 3 years [34], and den Hoedt et al. showed an annual albumin decline of 0.08 g/dL during a 6-year follow-up period [35]. These values, however, might be influenced by changes associated with the initiation of HD and with death. By using data from subjects who had received HD treatment for relatively long periods of time, we were able to isolate a maintenance period (interquartile range: 5.2-11.0 years) within the entire study period. The mean rate of albumin decline during this period was small (-0.011 g/dL/year) but significantly lower than zero. This finding indicates that serum albumin levels tend to decrease with time even among patients who have been stably receiving HD treatment for long periods.

Albumin variability

The intra-individual variability of various laboratory parameters has often been estimated using the CV and has also been called "biological variation" (BV). In a review article on BV, the subjects' age and sex and the sampling interval were reported to have little influence on BV estimates [36]. The BV levels in patients with chronic diseases have been compared with those of healthy people in several studies, with various results [8-11,37].
Among them, Holzel reported high BV levels for several parameters in patients with chronic renal failure, chronic liver disease, or insulin-dependent DM, compared with healthy individuals. In addition, Fraser et al. compared the BV levels in healthy elderly people with those in young people for various parameters and showed divergent results depending on the selected parameter [38]. These results, however, seem to be inconclusive, since most studies on BV were cross-sectional and based on a limited number of samples from relatively small cohorts. In this study, we presented the longitudinal changes in albumin variability for the first time. Alb-rCV temporarily decreased following the start of HD treatment. However, the movement was soon reversed, and the upward trend began to accelerate at one year or more before death. The Alb-rCV apparently moved in a manner opposite to that of the albumin levels throughout the course of HD treatment. During the maintenance period, the slopes as well as the levels of log Alb-rCV were negatively associated with the Alb-M levels. These results suggest that both a high albumin variability and a low albumin level develop in parallel with deteriorating health conditions. Although the movements of Alb-M and Alb-rCV were closely related, as stated above, differences became evident if the relationships between the overall levels of these parameters and demographic factors during the maintenance period were examined (See Table 1, and Figs 10 and 11). While Alb-M was strongly associated with age and moderately associated with gender and diabetic status, log Alb-rCV was poorly associated with these factors. These relationships are consistent with our previous cross-sectional analysis [7]. A similar situation for age was also observed in the years prior to death (Fig 5). Thus, although albumin variability was poorly correlated with age in inter-individual comparisons, it still tended to increase with age on an individual basis. 
In other words, albumin variability is associated with an individual's "aging" rather than with chronological age. Similar to albumin variability, frailty is not determined simply by chronological age, but mostly progresses with aging in individuals.

Homeostasis and albumin dynamics

Homeostasis is thought to be maintained by multiple overlapping regulatory networks in the body, and many investigators have recognized the dysregulation of homeostasis as a fundamental component of aging and frailty [15,16,39-43]. Fried and her collaborators have tested this concept in several papers. For example, Kalyani et al. performed oral glucose tolerance tests in elderly women and reported an abnormal (diabetic) response in a frail group [44]. Cohen et al. estimated an individual's deviation from a normal status using the Mahalanobis distance over multiple laboratory parameters and showed associations with mortality, frailty and chronic diseases [45]. Furthermore, Cohen and others demonstrated that physiological dysregulation as assessed using the Mahalanobis distance increases with age [46,47]. Given that engineering control systems can mimic physiological regulatory mechanisms [48], dysregulation can emerge in at least 3 forms: i) an aberrant value, ii) an inadequate response to disturbance, and iii) a wider variability. From this point of view, the studies above addressed the first two forms. We believe that the third form, variability, could be used as a measure of physiological dysregulation in population-level analyses [49]. This idea is consistent with the following findings: (a) albumin variability increases with age on an individual basis, (b) this trend accelerates before death, (c) the variability of several laboratory parameters predicts mortality, and (d) a high variability is often associated with several frailty-related adverse factors [7].
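The Mahalanobis-distance measure of dysregulation cited above (Cohen et al.) can be illustrated in a few lines; the "reference population" below is synthetic, so the numbers are purely illustrative:

```python
import numpy as np

def mahalanobis(x, reference):
    """Mahalanobis distance of an observation x from the centroid of a
    reference population -- the kind of scalar 'deviation from normal'
    across multiple laboratory parameters used by Cohen et al.
    Illustrative sketch with synthetic data."""
    mu = reference.mean(axis=0)
    cov_inv = np.linalg.inv(np.cov(reference, rowvar=False))
    d = x - mu
    return float(np.sqrt(d @ cov_inv @ d))

rng = np.random.default_rng(2)
# Synthetic "healthy" reference values for 3 lab parameters
# (e.g. albumin g/dL, sodium mEq/L, hemoglobin g/dL)
ref = rng.normal([4.0, 140.0, 13.5], [0.3, 3.0, 1.2], size=(500, 3))
typical = np.array([4.0, 140.0, 13.5])
deviant = np.array([2.8, 130.0, 9.0])   # far from the reference centroid
print(mahalanobis(typical, ref), mahalanobis(deviant, ref))
```

Because the covariance of the reference population rescales every axis, the distance is unit-free, which is what makes it usable across heterogeneous laboratory parameters.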
In our preceding study, we observed that the variabilities of different laboratory parameters were correlated with each other across almost all the compared combinations of parameters [7]. The hypothesis mentioned above could provide a possible interpretation for this enigmatic finding as well. To maintain constant values for each parameter, multiple regulatory mechanisms must work together inside the body, and it is highly likely that some of these regulatory mechanisms are shared by different parameters. A functional loss affecting shared mechanisms might cause higher variability in all the involved parameters, and this feature might manifest as correlations among the variabilities of these parameters. Greater variability can be caused not only by a loss of homeostatic capacity, but also by enhanced disturbances (i.e., internal and external stressors). Potential stressors presumably include physical activity, eating, climate, acute illness and medication, and evaluating the magnitudes of these stressors is obviously difficult. From a logical perspective, the CV (or rCV) should be used as a measure of homeostatic dysregulation only in population-based analyses with sufficient numbers of subjects. In addition, since this study was based solely on data from chronic hemodialysis patients, whether the results are applicable to the general population remains unknown. Despite these limitations, we think that the CV or other estimators of variability could be valuable tools for probing disordered homeostatic control in various pathological conditions. In the present study, we focused on time-dependent changes in albumin dynamics. For the maintenance of life, however, homeostatic regulation of a broader set of body constituents is necessary. Therefore, to understand the significance of variability precisely, the present analysis should be expanded to include other parameters.
Conclusions
Except for the period after the initiation of HD, serum albumin variability exhibited an upward trend in chronic HD patients, and this trend accelerated as death approached. These aging- and death-associated changes in albumin variability seemed to parallel the presumed decline in homeostatic capacity and healthy conditions. Further analysis of the variability of albumin and other parameters may contribute to the advancement of knowledge regarding the pathophysiology of aging and diseases.
Impact of 18F-FDG PET/CT on target volume delineation in recurrent or residual gynaecologic carcinoma

Background: To evaluate the impact of 18F-FDG PET/CT on target volume delineation in gynaecological cancer. Methods: 18F-FDG PET/CT-based RT treatment planning was performed in 10 patients with locally recurrent (n = 5) or post-surgical residual gynaecological cancer (n = 5). The gross tumor volume (GTV) was defined by 4 experienced radiation oncologists, first using contrast-enhanced CT (GTVCT) and secondly using the fused 18F-FDG PET/CT datasets (GTVPET/CT). In addition, the GTV was delineated using the signal-to-background ratio (SBR)-based adaptive thresholding technique (GTVSBR). Overlap analyses were conducted to assess geographic mismatches between the GTVs delineated using the different techniques. Inter- and intra-observer variability were also assessed. Results: The mean GTVCT (43.65 cm3) was larger than the mean GTVPET/CT (33.06 cm3), p = 0.02. In 6 patients, GTVPET/CT added substantial tumor extension outside the GTVCT, even though 90.4% of the GTVPET/CT was included in the GTVCT and 30.2% of the GTVCT was found outside the GTVPET/CT. The inter- and intra-observer variability was not significantly reduced with the inclusion of 18F-FDG PET imaging (p = 0.23 and p = 0.18, respectively). The GTVSBR was smaller than GTVCT (p ≤ 0.005) and GTVPET/CT (p ≤ 0.005). Conclusions: The use of 18F-FDG PET/CT images for target volume delineation of recurrent or post-surgical residual gynaecological cancer alters the GTV in the majority of patients compared to standard CT definition. The use of SBR-based auto-delineation yielded significantly smaller GTVs. PET/CT-based target volume delineation may improve the accuracy of RT treatment planning in gynaecologic cancer.

Background
Ultrasound, computed tomography (CT) and magnetic resonance imaging (MRI) are widely recommended in the diagnosis of gynaecologic cancer.
These conventional imaging modalities present a high sensitivity, specificity and accuracy in the primary staging of the disease. However, the accuracy and specificity of these techniques for the detection of pelvic tumor recurrences or post-surgical residual disease remain low, owing to limitations in distinguishing disease from post-surgical changes [1,2]. CT and MRI may be used for target volume delineation in RT treatment planning of gynaecologic carcinomas. However, a reliable definition of tumor extension is difficult to obtain with either modality, especially after surgery. Recently, 18-fluorodeoxyglucose (18F-FDG) positron emission tomography-computed tomography (PET/CT) has been recognized as a valuable tool for the diagnosis of primary and recurrent gynaecological cancer, enabling the optimization of RT treatment planning [3,4]. The objective of this study is to assess the role of 18F-FDG PET/CT-based target volume delineation in recurrent or post-surgical residual gynaecologic cancer. We compared the gross tumor volumes (GTVs) defined manually by four experienced radiation oncologists using contrast-enhanced CT and fused 18F-FDG PET/CT images, as well as the biological target volumes (BTVs) defined with a semi-automated PET/CT delineation technique. In addition, we evaluated the inter- and intra-observer variability in GTV delineation using the above-mentioned methods.

Patients
This prospective study was approved by the institutional ethical committee. A signed informed consent was obtained from all patients participating in the study protocol. Between September 2006 and December 2008, 10 patients with a histologically proven locally recurrent (n = 5) or post-surgical residual (n = 5) gynaecological cancer were included. Patients did not show any evidence of lymph node or distant metastases. Local recurrences were observed at a median of 34 months (range, 9-62 months) after surgery in 4 patients and following post-surgical radio-chemotherapy in 1 patient.
The median age was 64 years (range, 40-81 years). The clinical characteristics and referral patterns of the patient population are summarized in Table 1.

18F-FDG PET/CT
All 10 patients underwent a diagnostic whole-body 18F-FDG PET/CT scan performed under treatment planning conditions on a Biograph 16 PET/CT scanner (Siemens Healthcare, Erlangen, Germany). Patients fasted at least 6 hours prior to the start of the examination. A forced-diuresis protocol was used in all patients for a better differentiation between the tumor and the bladder. Thirty minutes after the 18F-FDG injection, each patient received 0.5 mg of furosemide per kilogram of body weight (maximum, 40 mg) followed by infusion of 500 mL of physiologic saline through an intravenous line. One hour after 18F-FDG injection and directly after voiding of the bladder, patients were placed in scanning position. First, a topogram was obtained from the skull to the upper region of the legs. Secondly, 18F-FDG PET data were acquired in 3- to 4-minute bed positions (a total of 6 to 7 bed positions) following a low-dose CT scan used for attenuation correction. A diagnostic-quality contrast-enhanced CT scan was then performed. 18F-FDG PET, CT and fused 18F-FDG PET/CT images were displayed for review in axial, coronal, and sagittal planes. All studies were interpreted and reviewed with knowledge of the patient's clinical history and the results of previous imaging studies, including MRI of the pelvis in all patients. A combined team of an experienced nuclear medicine physician and an experienced radiologist interpreted the 18F-FDG PET/CT images. A multimodality computer platform (Syngo Multimodality Workplace, Siemens Healthcare, Erlangen, Germany) was used for image review and interpretation. All 18F-FDG PET/CT studies showing at least one site of abnormal 18F-FDG uptake were characterized as malignant.
Foci of increased 18F-FDG uptake, with intensity higher than that of surrounding tissues, in areas unrelated to physiologic or benign processes, were defined as malignant. Tumor uptake of all lesions was assessed quantitatively using the maximum standardized uptake value (SUVmax), derived by placing a region of interest encompassing the tumor on each slice of the transaxial plane.

Manual contouring protocol
Four experienced radiation oncologists were asked to delineate the GTVs on axial slices of the CT (GTVCT) and the 18F-FDG PET/CT (GTVPET/CT), respectively. Recent T2-weighted contrast-enhanced MRI images were also available as additional information for contouring and for fusion on the Syngo multimodality software (Siemens Healthcare, Erlangen, Germany). All scans were contoured with knowledge of the additional diagnostic images and reports. The contouring process consisted of the following steps: firstly, the radiation oncologists delineated the GTV on the contrast-enhanced CT images alone (GTVCT1); the images and reports of the 18F-FDG PET were blinded. Then, after at least two weeks, the observers contoured the BTV on the fused 18F-FDG PET/CT images (GTVPET/CT1). To assess the intra-observer variability, all observers were asked to contour the target volume a second time, two months later on the CT images (GTVCT2) and once again two weeks later on the 18F-FDG PET/CT images (GTVPET/CT2). They were blinded to their previous contours as well as to those of the other observers. The radiation oncologists were all trained in target volume delineation on PET/CT and were free to adjust the window, level and contrast settings of the images.
Signal-to-background ratio (SBR)-based adaptive thresholding (GTVSBR)
For GTVSBR delineation, the maximum signal intensity of the tumor was defined as the mean activity of the hottest voxel and its eight surrounding voxels in a transversal slice, whereas the mean background activity was obtained from a manually drawn ROI far away from the tumor [5]. The SBR-thresholding technique has been described in a previous publication by our group [6]. The GTVSBR contours were checked visually before approval.

Contour analysis
The delineated contours for both delineation phases were analyzed separately. Firstly, the volumes contoured by every observer for GTVCT and GTVPET/CT were calculated for every patient separately, and the composite and common volumes of GTVCT and GTVPET/CT were calculated. The composite volume PET/CT is the union of GTVCT1 and GTVPET/CT1, while the common volume PET/CT is the joint (overlapping) volume of GTVCT1 and GTVPET/CT1 of each observer. To assess the geographic mismatch between the GTVs delineated using the different segmentation techniques, the following overlap analyses were performed: (A) the overlap volume of GTVCT1 and GTVPET/CT1 expressed relative to the CT-based GTV, the overlap fraction OFCT1; (B) the overlap volume of GTVPET/CT1 and GTVCT1 expressed relative to the PET/CT-based GTV, OFPET/CT1; and (C) the overlap volume of GTVPET/CT1 and GTVSBR relative to GTVSBR, OFSBR. Inter- and intra-observer variability was calculated using a two-way ANOVA model. Regression analysis was used to evaluate the difference between the calculated volumes and the overlap between GTVs when using the different segmentation tools. Statistical analysis and curve fitting were performed using the PASW Statistics package, version 18.0 (IBM, Chicago, Illinois, USA). The level of statistical significance adopted was 0.05.
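The overlap fractions defined above can be sketched on binary voxel masks: OF relative to mask A is the volume of the intersection divided by the volume of A. The 1-D toy masks below stand in for the 3-D delineations and are purely illustrative.

```python
import numpy as np

def overlap_fraction(a, b):
    """Overlap of masks a and b expressed relative to mask a:
    OF_a = |a ∩ b| / |a|, with boolean voxel arrays."""
    return np.logical_and(a, b).sum() / a.sum()

# Toy 1-D "voxel" masks standing in for two delineations of one patient
gtv_ct = np.zeros(100, dtype=bool)
gtv_ct[10:70] = True          # 60 voxels
gtv_pet = np.zeros(100, dtype=bool)
gtv_pet[30:80] = True         # 50 voxels, partly outside the CT volume

of_ct = overlap_fraction(gtv_ct, gtv_pet)    # fraction of the CT GTV covered by PET
of_pet = overlap_fraction(gtv_pet, gtv_ct)   # fraction of the PET GTV inside the CT GTV
```

Because the fraction is normalized to a different reference volume in each direction, OFCT1 and OFPET/CT1 generally differ, as in the results reported below (mean 0.63 vs. 0.90).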
Results
Both the contrast-enhanced CT scan and the 18F-FDG PET/CT were able to pinpoint the locally recurrent or residual cancer in the pelvis. The median SUVmax of the GTVs was 11.74 (range, 7.55-17.82). We did not observe any difference in PET signal between residual and recurrent tumors. Figure 1 presents the mean tumor volumes using the different manual and SBR delineation techniques; error bars indicate the standard deviation (SD) of the mean. Wide variability of the GTVCT and GTVPET/CT was observed. The mean GTVCT1 (43.65 cm3, SD 4.84) was significantly larger than the mean GTVPET/CT1 (33.06 cm3, SD 5.24), p = 0.02. The smallest GTVCT1 and GTVPET/CT1 were found in patient #6, with 1.89 cm3 and 0.85 cm3 respectively; the largest GTVCT was found in patient #4 (120.39 cm3), while the largest GTVPET/CT was observed in patient #10 (101.93 cm3). Figure 2 presents an example of the GTVs contoured by each observer in each modality in a patient with a locally recurrent cervical cancer. The contouring of this case was hampered by the adjacent localization of the bladder and the rectum. Table 2 summarizes the comparative evaluation of the CT- and PET/CT-based GTVs. The mean composite volume was 46.15 cm3 (SD 5.42) and the mean common volume was 31.48 cm3 (SD 4.21). The mean OFCT1 was 0.63 (SD 0.04). The mean OFPET/CT1 was 0.90 (SD 0.03). In 2 patients, the GTVPET/CT of all observers was included entirely in the GTVCT, and in 6 patients GTVPET/CT added substantial tumor extension outside the GTVCT. We found that among the four experienced radiation oncologists, the ratio of the largest to the smallest GTV outlined on the 10 patients using the planning CT had a median of 1.87 (range, 1.21 to 3.27). When the 18F-FDG PET was included, this ratio was reduced to a median of 1.38 (range, 1.16 to 1.81). The ratio of largest to smallest GTV decreased in 9 of 10 patients when PET/CT was used for GTV delineation.
Evaluation of inter- and intra-observer variation
The median inter-observer reliability index for the GTVCT was 0.37 (range, 0.21-0.63) and for the GTVPET/CT was 0.48 (range, 0.32-0.71); p = 0.23. All physicians contoured each patient twice; the median intra-observer percentage of concordance for the GTVCT was 0.49 (range, 0.13-0.89) and for the GTVPET/CT was 0.65 (range, 0.30-0.92) (p = 0.18).

SBR-based auto-contour compared with manual delineation
The GTVs were delineated both manually and by editing the SBR-based auto-contour. The results concerning GTVSBR are shown in Table 1. The mean GTVSBR was 21.33 cm3 (SD 23.87), which is significantly smaller than the manually contoured GTVCT (p ≤ 0.005) and GTVPET/CT (p ≤ 0.005). In 6 patients the GTVSBR was included completely in all GTVCTs, and the mean OF between GTVSBR and GTVPET/CT was 0.97 (SD 0.02). Comparing the GTVSBR with the GTVPET/CTs, we observed that in 4 patients the GTVSBR was larger than the GTVPET/CT.

Discussion
CT and MRI have reasonable sensitivity but low specificity in identifying recurrent gynaecologic disease [1,2]. Consequently, significant observer variation has been noted in contouring the GTVCT [7]. 18F-FDG PET/CT plays an increasingly important role in the staging and management of gynaecologic cancer, including RT treatment planning [3,4]. 18F-FDG PET/CT has demonstrated a sensitivity and accuracy of more than 90%, with average specificity, in locally advanced or recurrent gynaecologic pelvic carcinoma. Furthermore, 18F-FDG PET/CT can help to distinguish between tumor recurrence and post-therapy changes [4,8]. Kidd et al. have shown that cervical cancer patients treated with 18F-FDG PET/CT-guided IMRT had improved survival and decreased treatment-related toxicity compared with patients treated with non-IMRT radiotherapy [9].
This delineation study evaluated the inter- and intra-observer variability of CT-based and 18F-FDG PET/CT-based target volume delineation in locally recurrent or post-surgical residual gynaecological cancer. The results were compared with an automated PET segmentation approach using the adaptive thresholding technique. In other cancer sites such as head and neck and lung, 18F-FDG PET/CT has been reported to decrease inter- and intra-observer variability in tumor contouring [10]. Our results suggest that GTV delineation using 18F-FDG PET/CT could be superior to CT alone in this group of patients. GTVPET/CT was significantly smaller than the GTVCT, with a trend toward reduced inter- and intra-observer variability using PET/CT. The inter-observer agreement was moderate for the GTVCT and substantial for the GTVPET/CT [11]. The inter-observer reliability was lower than the intra-observer reliability. This is in agreement with observations made by other authors [12]: observers tend to agree more with themselves than with each other. Inter- and intra-observer variability has mostly been investigated in lung cancer, and the increased observer reliability on 18F-FDG PET/CT in our study is in line with these findings [10]. Only one study, by our group, has evaluated the inter-observer variability in PET/CT-based target volume delineation in the pelvis [13]; a trend toward reduced inter-observer variability was observed in the delineation of the intraprostatic recurrent lesion using 18F-choline PET/CT. In gynaecologic cancer, no inter- or intra-observer variability in PET-based GTV delineation had been evaluated until now. Our study demonstrates that the GTVPET/CT was significantly smaller than the GTVCT with the implementation of a co-registered 18F-FDG PET/CT. When the GTVSBR volumes were analyzed and compared with the manually delineated target volumes, the GTVSBR was significantly smaller than the median GTVCT and GTVPET/CT.
This was also manifested in the overlap analysis, where the overlap fraction increased from OFCT1 to OFPET/CT1 and OFSBR. Overall, the comparison of GTVs delineated in primary and recurrent cancer did not reveal any significant differences. The strengths of our study include the use of contrast-enhanced CT scans for GTVCT and GTVPET/CT determination, and the fact that the examinations were performed on a dedicated PET/CT scanner for virtual simulation and fused with a recent MRI. Nevertheless, the inter- and intra-observer variability was relatively high with both imaging modalities, highlighting the difficulty of determining the target volumes in this group of patients. Automated segmentation of the target volume using the adaptive thresholding technique could eventually help to reduce inter- and intra-observer variability. One potential limitation of our study is that the observers were at liberty to adjust the window, level and contrast settings of the images, which could have increased the inter- and intra-observer variability. However, all observers were experienced in PET/CT-based target volume delineation and were assisted by a nuclear medicine physician. Another drawback of this study is the lack of comparison of the PET/CT results with pathologic findings after surgery. The delineation of target volumes and organs at risk is a very critical step in high-precision RT treatment planning. Good image quality and a reliable delineation protocol are important for accurate target volume delineation. One of the challenges of PET/CT-guided target volume delineation is the accurate segmentation of noisy, low-resolution functional PET images. This is particularly true in recurrent or residual gynaecological cancer, where vascular and urinary activity hampers target volume delineation; the result is a relatively high inter- and intra-observer variability. Various PET image segmentation techniques for target volume delineation have been developed and evaluated to overcome this drawback [11].
Among them, manual contouring by visual examination is the most commonly used method. The determination of an appropriate window and level for viewing the PET images is highly operator-dependent and is subject to high variability between operators [12]. Improved concordance in target volume delineation using PET/CT implies greater accuracy and can help to determine a more appropriate treatment plan. In our study, the prevailing inter-observer variability was still relatively high; such variability negatively impacts the quality of the treatments delivered to cancer patients. Alternatively, an automatically segmented target volume could be considered. There is consensus on the need for highly objective and automatic segmentation methods, and various groups have observed that semi- or fully-automated delineation techniques reduce inter-observer variability and improve reproducibility [10]. The adaptive thresholding technique is one of the most widely used segmentation techniques for target volume determination in the clinical setting. However, knowledge of the true target volume in relation to the GTVSBR in gynaecologic tumors is needed for validation purposes. PET-based target volume delineation in gynaecologic tumors is currently not recommended outside clinical studies. It has to be emphasized that patients with recurrent or post-surgical residual gynaecologic cancer are both challenging cohorts for reliable target volume delineation, and thus high inter- and intra-observer variability is more likely to result. In the absence of more accurate information on the target volume position in gynaecologic cancer, a composite of GTVCT and GTVPET/CT can be recommended to optimize the GTV definition.

Conclusions
This delineation study showed that GTVPET/CT was significantly smaller than GTVCT. The reduction was larger when the adaptive thresholding-based semi-automated contouring algorithm was used.
GTVPET/CT added substantial tumor extension outside the GTVCT in 60% of the patients. The combination of a matched 18F-FDG PET/CT reduced the inter- and intra-observer variation in the delineation of gynaecological cancer; however, the difference was not significant. Target volume delineation may be improved with the inclusion of 18F-FDG PET/CT.
Relationship between benthic macroinvertebrate bio-indices and physicochemical parameters of water: a tool for water resources managers

The ecosystem health of rivers downstream of dams is among the issues that have become a focus of attention of many researchers, particularly in recent years. This paper deals with the question of how the environmental health of a river ecosystem can be addressed in water resources planning and management studies. In this study, different parameters affecting the ecosystem of river-reservoir systems, as well as various biological components of river ecosystems, have been studied, and among them benthic macro-invertebrates have been selected. Among various bio-indices, biodiversity indices have been selected as the evaluation tool. The case study of this research is the Aboulabbas River in Khuzestan Province, Iran. The relationship between the biodiversity indices and physicochemical parameters has been studied using correlation analysis, Principal Component Analysis (PCA), and Genetic Programming (GP). The Margalef index was selected as the appropriate bio-index for the studied catchment area. The relationship found in this study, for the first time, between the Margalef bio-index and physicochemical parameters of water in the Aboulabbas River has proved to be a useful tool for water resources managers to assess ecosystem status when only the physicochemical properties of water are known.

Background
Bio-indices have been recognized as suitable criteria for understanding the quality of the aquatic environment. They are numerical expressions that combine quantitative values of species diversity with qualitative information on the ecological sensitivity of each taxon [1]. Ecologists use various metrics and indices for the ecological assessment of river ecosystems. They can be used to predict the response of an ecosystem to different water resources management practices and environmental conditions.
Considering the importance of river ecosystems and the role of bio-indices in basin-scale water resources planning and management, most rivers in developed countries are constantly evaluated and their physical, chemical and biological characteristics are monitored [2]. Various bio-indices have been proposed and used by ecologists in different countries. The most commonly used indices in the biological evaluation of rivers include species richness, evenness, diversity and dominance indices, BMWP (Biological Monitoring Working Party), ASPT (Average Score Per Taxon) and EPT (the total number of Ephemeroptera, Plecoptera and Trichoptera taxa). Although the literature on bio-indices and criteria for understanding the quality of the aquatic environment is rich, there is a gap between these studies and those related to water resources planning and management. Most of the previous studies in the field of water resources planning and management have focused on the socio-economic aspects of water allocation to different users, while some have also considered physicochemical water quality constraints [3]. Bio-indices have not been used in these studies, mostly because of the lack of knowledge of water resources modelers about these indices and the limited intervals of limnological measurements. Previous studies, some of which are cited later in this section, show that limnological information is available only for very short periods of time (mostly one or two years) in very few rivers, especially in less developed countries, while water resources planning and management studies require long records of data (usually longer than 30 years). To close this gap, the approach that is the focus of this study is to find a mathematical relationship between an ecological index that can reflect the overall environmental condition of a river in the study area and the physicochemical properties of the water.
Since widespread databases on the physicochemical characteristics of water bodies exist in many basins around the world, finding this relationship can help in determining the quality of aquatic environments wherever no record of the quantity or diversity of species is available. Several studies, including the following, have shown consistency between variations of biotic indices and fluctuations in the physicochemical characteristics of water. Czerniawska and Kusza [1] studied the correlation between bio-indices and diversity indices at the family level of benthic macro-invertebrates and the physicochemical variables of the Nysa Klodzka River in southern Poland, using Spearman's correlation coefficient. Yap et al. [4] studied variations of a benthic group, the Oligochaeta, and physicochemical parameters of water in a river in Malaysia from March 1998 to February 1999, and showed a negative correlation between the density and distribution of these benthic macro-invertebrates and DO and pH, and a positive correlation with electrical conductivity, BOD, NO3, NH3, TSS, COD, Cc and Zn. Azrina et al. [5] studied the correlation between the richness and diversity indices of benthic macro-invertebrate communities and physicochemical parameters of the water of the Langat River, Malaysia, for four consecutive months (March-June 1999), and showed that they are mainly affected by the TSS and EC of the river water. They showed that the richness index has a strong negative correlation with TSS, river width and temperature, while the Simpson diversity index is strongly correlated with TSS and the electrical conductivity of the water. Latha and Thanga [6], in a study in India, examined variations in the Shannon diversity and evenness indices over a period of two years for six stations on the Veli and Kadinamkulam Rivers and showed that species diversity and distribution are clearly related to water quality: the more contaminated the water is, the lower the diversity index will be.
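The Spearman analysis used in several of these studies can be sketched in pure Python: the rank correlation is simply the Pearson correlation of the two variables' ranks. The station values below are hypothetical illustrations of a diversity index falling as total suspended solids rise, not data from the cited papers.

```python
def ranks(values):
    """Average 1-based ranks, with ties sharing the mean rank."""
    order = sorted(range(len(values)), key=lambda i: values[i])
    r = [0.0] * len(values)
    i = 0
    while i < len(order):
        j = i
        while j + 1 < len(order) and values[order[j + 1]] == values[order[i]]:
            j += 1
        avg = (i + j) / 2 + 1
        for k in range(i, j + 1):
            r[order[k]] = avg
        i = j + 1
    return r

def spearman_rho(x, y):
    """Spearman rank correlation: Pearson correlation of the ranks."""
    rx, ry = ranks(x), ranks(y)
    n = len(x)
    mx, my = sum(rx) / n, sum(ry) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    sx = sum((a - mx) ** 2 for a in rx) ** 0.5
    sy = sum((b - my) ** 2 for b in ry) ** 0.5
    return cov / (sx * sy)

# Hypothetical station data: diversity index vs. total suspended solids
diversity = [3.1, 2.8, 2.5, 2.0, 1.6, 1.2]
tss_mg_l = [12, 18, 25, 40, 55, 80]
rho = spearman_rho(diversity, tss_mg_l)
```

A perfectly monotone decreasing relationship, as in this toy data, yields rho = -1; real station data would give intermediate negative values like those reported by Azrina et al.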
Their study also showed that the Shannon index fluctuated similarly to the abundance index. Kennen et al. [7] studied benthic macro-invertebrates in 67 small and medium-sized catchment areas in the United States and demonstrated the relationship between the EPT species richness index and the hydrological characteristics of the flow. In Iran, Nemati et al. [8] calculated various biotic indices based on samples collected from the benthic macro-invertebrates of the Zayandeh-rud River. They studied the correlation between these indices and the physicochemical parameters of the water and concluded that the BMWP (Biological Monitoring Working Party) index has a significant correlation with the physicochemical parameters of the water. Monk et al. [9] reviewed 22 years of long-term records of samples collected from 14 rivers in England. They computed the BMWP, EPT and Life Score biotic indices and studied their variations with respect to changes in the Indicators of Hydrologic Alteration (IHA), observing the strongest relation between biotic indices and hydrological parameters in the frequency and intensity of flow groups. Ogleni and Topal [10] studied the impacts of pollutants on water quality at 15 stations on the Mudurnu River, Turkey, over a 12-month period (2006 to 2007), together with biotic indices obtained from different organisms in the water. They showed that of 100 biotic indices, 60% use benthic macro-invertebrates, and that the modified ASPT and BMWP indices seem to have the strongest correlation with water quality parameters. The above studies show that different types of bio-indices have statistically significant relationships with hydrological indicators of flow and the physicochemical characteristics of water. All of the aforementioned studies have used descriptive statistics to assess this relationship.
Although these types of assessments can be useful for many environmental planning and management purposes, they cannot be used for inclusion in the operational management models of river-reservoir systems. The questions this study tries to answer are: 1) When modeling river-reservoir systems, which bio-index should be chosen? and 2) How can the relationship between the chosen bio-index and the physicochemical characteristics of water be quantified? The case study of this research is the Aboulabbas River in Khuzestan Province, Iran. Genetic Programming (GP) has been used in this study to obtain a quantitative relationship between the biodiversity index and the physicochemical characteristics of the water.

Materials and methods
Using benthic macro-invertebrates for calculating bio-indices
Different biotic indices have been defined and used in different regions of the world for bio-monitoring programs, some of which have reasonable accuracy for use in other regions too. Biological assessments can be used for identifying weaknesses in ecosystem environments caused by pollutants or the degradation of habitats. They are also, in some cases, even more effective than physical and chemical measurement processes, because they are economical and take less time to evaluate. Among the various components of aquatic ecosystems, including plants, birds, fish and macrobenthic organisms (macrobenthos), the last provides one of the best and most efficient means of biological assessment [11]. Macrobenthos play the role of a link in food chains, passing the energy stored by plants to larger animals such as fish. Aquatic invertebrates in river food chains are the primary consumers of plant products such as algae, diatoms, mosses and decaying leaves; they enter the production cycle of fish, and when mature they fly away or are directly consumed by secondary consumers. Macrobenthos are invertebrates that can be seen with the naked eye.
They spend at least part of their lives in river beds. Their position as basic components of the aquatic food chains of rivers, their ubiquity in all aquatic ecosystems, their limited mobility, long lifespan and species richness with varying sensitivity to pollution are the main reasons for the widely reported use of benthic macro-invertebrates in biological monitoring [5,12-14]. The use of benthic macro-invertebrates is based on the assumption that streams and rivers not affected by pollutants host a wider array of benthic species, with pollution-sensitive species dominant, while in polluted waters taxa that are less tolerant to pollutants are found less often [13].

Parameters affecting the ecosystem of rivers
The first step in choosing an appropriate bio-index and obtaining its possible mathematical relationship with the physicochemical characteristics of river water is identifying the parameters with considerable effects on the ecosystem of the river being studied. Studying the mathematical relationship between the variations of biotic indices and these physicochemical characteristics is the second step; in this step, the biotic index showing the strongest statistical relation with the physicochemical parameters can be selected. Some of the physicochemical characteristics of river water bodies with the greatest influence on ecosystems can be listed as follows. River discharge is the most important hydrologic characteristic of rivers; it has direct and indirect impacts on ecosystem health. While river discharge directly satisfies the needs of species in rivers, it also indirectly changes the physical and chemical quality of the water. Water velocity is among the major characteristics affecting river ecosystems; it has significant effects on the morphology of river beds and the movement of sediments, both of which have impacts on various species. Floods and all types of hydrologic alteration can significantly change ecosystem health one way or another.
In addition to the hydrological conditions of the river, water quality parameters also play a major role in ecosystem health. Any change in water quality can lead to variations in the composition of plant and animal species. The most important water quality parameters in terms of impact on aquatic ecosystems include temperature, salinity, acidity, Total Dissolved Solids (TDS), pH, DO and BOD 5 . Many physical processes and chemical and biological transformations are sensitive to temperature variations. Increased salinity in freshwater ecosystems generally decreases biodiversity and may reduce the available food resources. Lower pH (higher acidity) generally reduces the biodiversity and species composition of invertebrate communities. Increased turbidity reduces light penetration depth and thus limits the growth of aquatic species. Since oxygen is needed for the aerobic respiration of aquatic species, low DO concentration is harmful to plants and aquatic organisms [15-17]. Bio-indices Various bio-indices have been proposed and used by ecologists in different countries, such as the species richness index, evenness index, species diversity index, dominance index, and the BMWP, EPT and ASPT indices. The evenness index describes how individuals are distributed among the species present: the more even the distribution (i.e., the more similar the abundances of the species), the higher the stability, which results in greater biodiversity. Species richness indicates the presence of various species and is calculated from the number of species in an area; an increasing number of taxa can be due to habitat diversity or to suitable or improved water quality. The dominance index reflects the prevalence of some species over others and is also used in biodiversity assessments. The species diversity index is in fact a combination of the richness and evenness indices, aggregating both into a single quantity. 
Higher biodiversity indices indicate less stress in ecosystems, higher abundance, and a more even distribution of species in the ecosystem. Various studies have shown this point, some of which are cited in this paper. Among the various biotic indices, diversity indices appear most appropriate for river ecosystem health assessment [18-20]: the diversity index increases with an increased number of species or an increased total number of organisms in the population, and when the populations of the various species are distributed evenly, the diversity index increases as well. The Shannon, Simpson and Margalef diversity indices have been used by several researchers to assess biodiversity. These indices have also been used in this study and are therefore introduced in more detail in the following sections. Shannon diversity index The Shannon diversity index has been a popular diversity index in the ecological literature. It was originally proposed by Claude Shannon in 1948. After estimating the relative abundance of the identified families at each station for the different months of a year, the index can be estimated as

H' = − Σ (from i = 1 to s) P_i ln(P_i)

Where: P_i : relative abundance of the i-th taxon in the sample; s: total number of taxa in the sample. It has been emphasized in the literature that the Shannon diversity index is a fast and reliable tool to identify major changes in the community structure of benthic species [21]. It has also been shown that seasonal patterns of the Shannon diversity index and of species richness and evenness are similar to seasonal changes in species abundance and composition [22]. Simpson diversity index The Simpson diversity index was presented by Simpson in 1949. In 1972, Krebs presented the following formula for estimating the Simpson diversity index:

D = 1 − Σ (from i = 1 to s) P_i^2

In this index, lower/higher weights are assigned to the rare/common species. The index values range from zero (lowest diversity) to 1 − 1/S (highest diversity). 
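As a concrete illustration, the Shannon and Simpson indices described above can be computed directly from taxon abundance counts. This is a minimal sketch in plain Python; the counts are illustrative, not the study's data:

```python
import math

def shannon(counts):
    """Shannon diversity H' = -sum(p_i * ln(p_i)) over taxa with nonzero counts."""
    n = sum(counts)
    props = [c / n for c in counts if c > 0]
    return -sum(p * math.log(p) for p in props)

def simpson(counts):
    """Simpson diversity D = 1 - sum(p_i^2); ranges from 0 to 1 - 1/S."""
    n = sum(counts)
    return 1.0 - sum((c / n) ** 2 for c in counts)

# Example: abundances of 4 benthic taxa at one station
counts = [10, 10, 10, 10]
print(round(shannon(counts), 3))  # perfectly even community: ln(4) ≈ 1.386
print(round(simpson(counts), 3))  # 1 - 4 * 0.25^2 = 0.75
```

A perfectly even community reaches the maximum of both indices, which is the property the evenness discussion above relies on.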
Margalef diversity index In 1958, Margalef introduced this simple diversity index:

d = (S − 1) / ln(N)

where S is the number of species and N is the total number of individuals. In order to find the relationship between bio-indices and the physicochemical characteristics of the river, GP has been used in this study. This technique is briefly described in the following section. Genetic programming To obtain a formula relating a biotic index to the qualitative and quantitative characteristics of water, GP has been used. GP was proposed for the first time by Koza in 1992 [23]. The first step in GP is randomly generating an initial population built from two kinds of elements, functions and terminals. Depending on the problem, functions can be basic operations such as addition, subtraction, multiplication and division, logical functions such as AND, OR and NOT, or any other function. Terminals include variables and, if desired, constants. In GP, functions and terminals are randomly selected, and a member of the population is represented as a tree with functions at its root and internal nodes, with branches that ultimately end in terminals. After generating the random initial population, which serves as the parents of the first generation, each member is evaluated; this evaluation can be carried out in different ways depending on the type of problem. From the initial population, a new population is formed using a selection method such as roulette wheel or tournament selection, and the GP operators "reproduction," "crossover" and "mutation" are applied to this new population [24]. GP has proved to be a useful tool, especially when the relationship between variables is unknown, when the size and form of the relationship are complex and difficult to formulate, or when no analytical or mathematical method can establish the relationship between the variables [25,26]. 
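The tree representation at the heart of GP, with arithmetic functions at internal nodes and variables or constants at the leaves, can be sketched as follows. This is a toy illustration, not the GP implementation used in the study; the variable names (DO, T, EC) and the protected-division convention are assumptions:

```python
import random
import operator

# Function set: basic arithmetic, with division protected against near-zero divisors.
FUNCS = {'+': operator.add, '-': operator.sub, '*': operator.mul,
         '/': lambda a, b: a / b if abs(b) > 1e-9 else 1.0}

def random_tree(terminals, depth=3):
    """Grow a random expression tree: internal nodes are functions, leaves are terminals."""
    if depth == 0 or random.random() < 0.3:
        return random.choice(terminals)
    op = random.choice(list(FUNCS))
    return (op, random_tree(terminals, depth - 1), random_tree(terminals, depth - 1))

def evaluate(tree, env):
    """Evaluate a tree against a dict of variable values."""
    if isinstance(tree, tuple):
        op, left, right = tree
        return FUNCS[op](evaluate(left, env), evaluate(right, env))
    if isinstance(tree, str):
        return env[tree]
    return tree  # numeric constant

random.seed(1)
t = random_tree(['DO', 'T', 'EC', 1.0], depth=2)
print(t)
print(evaluate(('+', 'DO', ('*', 'T', 2.0)), {'DO': 8.0, 'T': 20.0}))  # 48.0
```

Selection, crossover and mutation then operate on such trees, which is what turns this representation into a search over candidate formulas.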
In applying GP to determine the relationship between bio-indices and the physicochemical characteristics of water, all parameters were first standardized to the range [0, 1] to avoid magnitude differences between them. The basic mathematical operators of addition, subtraction, multiplication and division were used as functions. GP offers a different relation for calculating the bio-index in each run: because GP, like other evolutionary methods, starts from randomly generated initial answers, the estimated equation can differ from run to run. Various relationships between the dependent variable (the biotic index) and the independent variables (the qualitative and quantitative parameters) were therefore calculated from the results of 100 runs of GP, and the best relationship was selected as the one with the highest correlation coefficient. It is worth mentioning that, since the GP algorithm uses random operators, the literature suggests choosing the final result from several runs. To formulate a relationship between a biodiversity index and the physicochemical parameters of water, GP was run again after removing the discharge variable from the independent variables. 80% of the available dataset has been used for training and the remainder for validation. Case study In Iran, very few studies on aquatic ecosystems can be found and very little information is available. In recent years, some efforts have been made to better recognize and assess the aquatic environments of some catchment areas. The case study of this research is the Aboulabbas River, located in the southwest of Iran in Khuzestan Province, between 31°25′ and 31°40′ North latitude and 49°50′ and 50°10′ East longitude (Figure 1). Samplings were carried out at six stations around the Aboulabbas dam from January 2007 to December 2007. The available samples include numbers of fish and benthic macro-invertebrates as well as the physicochemical parameters of the water on a monthly basis. 
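The preprocessing and model-selection procedure described above (min-max standardization to [0, 1], many independent runs, and selection of the run with the highest correlation coefficient) might be sketched like this, with a random stand-in for a single GP run:

```python
import random
import statistics

def minmax(xs):
    """Standardize values to the range [0, 1]."""
    lo, hi = min(xs), max(xs)
    return [(x - lo) / (hi - lo) for x in xs]

def pearson(xs, ys):
    """Pearson correlation coefficient."""
    mx, my = statistics.mean(xs), statistics.mean(ys)
    num = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    den = (sum((x - mx) ** 2 for x in xs) * sum((y - my) ** 2 for y in ys)) ** 0.5
    return num / den

observed = [1.2, 1.5, 1.9, 2.4, 2.8]  # hypothetical bio-index observations

def gp_run(seed):
    """Stand-in for one GP run: returns predictions of the bio-index."""
    rng = random.Random(seed)
    return [o + rng.gauss(0, 0.3) for o in observed]

# 100 runs; keep the run whose predictions correlate best with observations.
runs = [gp_run(s) for s in range(100)]
best = max(runs, key=lambda pred: pearson(pred, observed))
print(minmax([2, 4, 6]))  # [0.0, 0.5, 1.0]
```

In the study itself each run produces a symbolic formula rather than raw predictions, but the selection-by-correlation step is the same.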
As shown in Figure 2, the available data show that the river water quality is generally good. For example, dissolved oxygen in all cases was reported to be more than 7.6 mg/L, which places the river in Class 1A according to the national water quality standards of Iran. The maximum measured amount of dissolved solids was 200 mg/L, while EPA recommends that this amount not exceed 500 mg/L for drinking water. BOD 5 is also in a range suitable for irrigation (Figure 2). There is no significant source of industrial or chemical pollution in the catchment basin of this river; therefore, it is assumed that the biotic indices are affected only by the natural conditions of the river and not by pollutants. It is worth mentioning that no sampling has been carried out since 2007; since no major development or land-use change has occurred in the basin, it is assumed that the results of this study are still valid for water resources planning purposes. Results and discussion In order to establish a relationship between the bio-indices and the physicochemical parameters of the water, the Simpson, Shannon and Margalef diversity indices were first calculated for the 12 months of the year at the different stations, based on the existing information for the catchment area. Then, using the SPSS software package, the correlation coefficients between the biotic indices and the quantitative parameter (river discharge) and qualitative parameters (water temperature, pH, DO, EC, BOD 5 ) were calculated. Analysis of the results revealed significant correlations among the bio-indices and relatively high correlations between the bio-indices and some of the physicochemical parameters (Table 1). For a more accurate analysis, the available data for all stations, including the biotic indices and the qualitative and quantitative parameters, were clustered using the K-means technique. K-means is a simple clustering method with low computational complexity. 
It can be easily implemented for many practical problems; the K-means algorithm falls under the category of squared-error-based clustering [27]. For all three selected bio-indices, it was observed that the data for the winter season fell into one cluster and the data for the rest of the year into another. Bearing this in mind, and in order to establish a relationship between the bio-indices and the physicochemical parameters, the cluster containing the spring, summer and autumn data was used in GP. Since the TDS and EC parameters are highly correlated, only EC was used as an independent variable. One hundred GP runs provided equations for estimating each of the biotic indices with varying degrees of accuracy. The results presented in Table 2 show how often each physicochemical parameter appears in the equations obtained for each biotic index. They show that only a small percentage of the equations for any of the bio-indices involve the river discharge, while the DO and BOD 5 parameters appear most frequently. One of the questions to be answered here is which of the physicochemical parameters should be included in the estimation of the bio-indices; as Table 2 shows, different combinations of physicochemical parameters appear in the GP equations. To answer this question, Principal Component Analysis (PCA) has been used. PCA is a multivariate statistical analysis technique which has been widely used in water quality studies [28-31]. The results of the PCA are shown in Table 3. 
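The seasonal clustering step can be illustrated with a minimal 1-D K-means (Lloyd's algorithm, i.e. squared-error clustering). The monthly temperatures below are hypothetical, not the Aboulabbas data:

```python
import random

def kmeans_1d(xs, k=2, iters=50, seed=0):
    """Minimal Lloyd's algorithm on 1-D data (squared-error clustering)."""
    rng = random.Random(seed)
    centers = rng.sample(xs, k)
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for x in xs:
            nearest = min(range(k), key=lambda j: (x - centers[j]) ** 2)
            clusters[nearest].append(x)
        # Recompute centers; keep the old center if a cluster is empty.
        centers = [sum(c) / len(c) if c else centers[j]
                   for j, c in enumerate(clusters)]
    return centers, clusters

# Hypothetical monthly water temperatures (°C): three cold winter months, nine warmer months.
temps = [8, 9, 10, 18, 20, 22, 24, 26, 27, 25, 21, 16]
centers, clusters = kmeans_1d(temps, k=2)
cold = min(clusters, key=lambda c: sum(c) / len(c))
print(sorted(cold))  # the winter months separate into their own cluster
```

This mirrors the observation above that the winter records form one cluster and the rest of the year another, which justified restricting the GP fit to the spring-autumn cluster.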
The PCA results show that factors PC1-PC4 contain more than 80 percent of the information, and from Table 3 it can be concluded that the DO and temperature parameters are the most important for the first principal component, EC for the second, pH for the third, and BOD 5 for the fourth. The PCA results also show the low importance of discharge compared with the other parameters investigated for estimating the bio-indices; the PCA results are therefore compatible with the GP outcomes. A correlation analysis was carried out between the values of the three bio-indices calculated from the observations and the values estimated by the equations obtained from GP. As mentioned earlier, 100 equations were obtained for each index. The correlation analysis shows that the values estimated by GP for the Margalef diversity index have the highest correlation with the values calculated from the observations; therefore, the Margalef biotic index was chosen in this study. The equations generated by GP were evaluated with two goodness-of-fit measures, the root-mean-square error and the correlation coefficient. Based on both measures, Equation (1), a function of DO, T, EC and BOD 5 , had the highest fitness, where MI is the Margalef diversity index, DO is dissolved oxygen (mg L-1), T is water temperature (°C), EC is the electrical conductivity of the water (μmhos cm-1), and BOD 5 is the biological oxygen demand (mg L-1). To assess the accuracy of this relationship, summary statistics of the observed and estimated values of the index are presented in Table 4. The results reveal the relatively high accuracy of the obtained relationship on both the training and validation datasets: Table 4 indicates that the error of the equation in estimating the average value of the Margalef index is about 3.8% and 5.04% for the training and validation datasets, respectively. 
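The two goodness-of-fit measures used to rank the GP equations, root-mean-square error and the correlation coefficient, can be computed as follows (the observed/estimated values are illustrative, not the study's data):

```python
import math
import statistics

def rmse(obs, est):
    """Root-mean-square error between observed and estimated values."""
    return math.sqrt(sum((o - e) ** 2 for o, e in zip(obs, est)) / len(obs))

def corr(obs, est):
    """Pearson correlation coefficient between observed and estimated values."""
    mo, me = statistics.mean(obs), statistics.mean(est)
    num = sum((o - mo) * (e - me) for o, e in zip(obs, est))
    den = math.sqrt(sum((o - mo) ** 2 for o in obs) * sum((e - me) ** 2 for e in est))
    return num / den

# Hypothetical observed vs. GP-estimated Margalef index values
obs = [2.1, 2.4, 2.9, 3.2, 3.6]
est = [2.0, 2.5, 2.8, 3.3, 3.5]
print(round(rmse(obs, est), 3))  # 0.1
print(round(corr(obs, est), 3))
```

Comparing both measures on training and validation subsets, as in Table 4, is also how overfitting is ruled out: similar errors on both subsets indicate the equation generalizes.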
There was a 6.6% and an 11.60% difference between the standard deviations of the calculated and observed values of the index on the training and validation datasets, respectively. The correlation coefficients between the observed and estimated values of the Margalef index for the training and testing datasets show the relatively acceptable accuracy of the proposed relationship, and the mean square errors for the training and validation datasets were close, indicating that no overfitting occurred. Figure 3 compares the calculated and observed Margalef diversity index at the different stations on the Aboulabbas River. As this figure shows, the largest differences between the estimates of Equation (1) and the observations occur in April. It is worth mentioning that the river discharge in April is the highest of the year, and such surplus discharge, often in the form of flash floods, causes sudden changes in the river ecosystem. Excluding the results for April increases the correlation coefficients, which implies that the obtained formula is more accurate for the other months of the year. Because of reduced river discharge, increased temperatures and reduced water quality in summer and autumn, the health of the ecosystem is usually most at risk in these months; maintaining ecosystem health and improving biodiversity is therefore especially important for water resources planners at these times, and the proposed equation can be a useful tool for calculating biotic indices in these months whenever measurements are unavailable. Conclusion The aim of this study has been to provide a tool for assessing the biodiversity of river ecosystems to be used by water resources planners and reservoir operators. The major obstacle has been the lack of long-term data; the accuracy of the proposed equation could be significantly improved if long-term observations were available. 
Despite this limitation, the novelty of this work lies in the methodology used for choosing the biotic index and the physicochemical parameters for estimating it. The equation proposed in this study for estimating the Margalef index is based on the environmental conditions of the study region, and we do not claim that it would work in other regions as well as it does for the Aboulabbas River, because the diversity and even the abundance of benthic macroinvertebrates depend on various physicochemical properties of the water and on the specific environmental conditions of each ecosystem. A larger dataset could also lead to more accurate mathematical relationships between ecological target indices and the various water quality parameters. Further research can be dedicated to finding similar equations for other rivers in the region, especially the headwaters of the Aboulabbas River, to assess whether the same conclusions about the choice of bio-index and physicochemical parameters hold for them. With larger datasets, it would also be worth investigating whether the accuracy of the relationship can be increased by making it sensitive to the overall pollution level of the river water.
Effects of dispersal and temperature variability on phytoplankton realized temperature niches Abstract Phytoplankton species exhibit fundamental temperature niches that drive observed species distributions linked to realized temperature niches. A recent analysis of field observations of Prochlorococcus showed that for all ecotypes, the realized niche was, on average, colder and wider than the fundamental niche. Using a simple trait‐based metacommunity model that resolves fundamental temperature niches for a range of competing phytoplankton, we ask how dispersal and local temperature variability influence species distributions and diversity, and whether these processes help explain the observed discrepancies between fundamental and realized niches for Prochlorococcus. We find that, independently, both dispersal and temperature variability increase realized temperature niche widths and local diversity. The combined effects result in high diversity and realized temperature niches that are consistently wider than fundamental temperature niches. These results have broad implications for understanding the drivers of phytoplankton biogeography as well as for refining species distribution models used to project how climate change impacts phytoplankton distributions. 
modulating local properties of the water column that influence the provision of nutrients and exposure of cells to light (Huisman et al., 2004). Collectively, the maximum possible specific growth rate across all species of phytoplankton increases exponentially with temperature (Eppley, 1972). However, each species of phytoplankton has a distinct thermal response curve, or fundamental temperature niche, defined by the range of temperatures where growth is positive (the niche width) and a temperature where growth is at its maximum (the optimal temperature), measured in laboratory conditions with no resource limitations or negative interactions (e.g., parasites, predators, or competition; Boyd et al., 2013; Marañón et al., 2013). These temperature preferences, in many cases, underlie observable phytoplankton biogeographic patterns across temperature gradients (e.g., Johnson et al., 2006; Thomas et al., 2012). Biotic interactions such as competition and predation should, in theory, lead to narrower realized than fundamental temperature niches (Colwell & Rangel, 2009; Hutchinson, 1957). However, a recent study of Prochlorococcus temperature niches found that realized temperature niches were wider and colder than fundamental temperature niches, as measured in laboratory conditions, for four globally distributed ecotypes (Smith et al., 2021). Prochlorococcus is the most abundant photosynthetic microbe on Earth and comprises many ecotypes with distinct traits and biogeographies (Chisholm et al., 1992; Larkin et al., 2016; Rocap et al., 2003; Zinser et al., 2006). There is a range of mechanisms that may contribute to the observed differences between fundamental and realized temperature niches in Prochlorococcus ecotypes, including ecological interactions such as predation (Guillou et al., 2001), local adaptation (Martiny et al., 2019), and dispersal (Doblin & Van Sebille, 2016; Hellweger et al., 2016). Here, we examined how spatial mass effects and temporal storage effects, defined 
broadly as the occurrence of species in habitats where their net growth is negative but the populations survive due to immigration or temporal persistence, are one possible explanation for the discrepancies between fundamental and realized temperature niches in Prochlorococcus ecotypes. Despite having negative net growth rates, the presence of a species is still ecologically important to community dynamics, the food web, and ecosystem functions. Spatial mass effects are the net flow of individuals between local patches driven by dispersal (Leibold et al., 2004; Shmida & Wilson, 1985; Zonneveld, 1995); we use "spatial mass effects" in this context instead of "spatial storage effects." Temporal storage effects (also called temporal mass effects) describe the role that environmental fluctuation plays in supporting diversity by providing multiple windows of opportunity for species with different niche preferences to optimize growth (Cáceres, 1997; Ellner et al., 2016; Kelly & Bowler, 2005; Kremer & Klausmeier, 2017; Zonneveld, 1995). 
We created a simple metacommunity model to test how spatial mass and temporal storage effects influence phytoplankton realized temperature niches and community diversity. The model simulates a latitudinal transect through the ocean where phytoplankton communities are connected via isotropic dispersal that decreases in strength with increasing distance, and the temperature seasonality at each latitude is tied to marine observations. We refer to this seasonal change in temperature as "temperature variability" hereafter, but recognize that shorter-term (e.g., storms, internal waves, upwelling) and longer-term variations (e.g., natural and anthropogenic climate change) are important but are not examined further. Model phytoplankton species each have a unique temperature niche, but equivalent affinities for light and nutrients and equivalent dispersal capacity. Using this idealized framework, and through a sequence of controlled model experiments varying temperature variability, dispersal, and phytoplankton mortality, we ask: (1) How does the rate of dispersal affect phytoplankton realized temperature niches and local community diversity? (2) How does the degree of temperature variability affect phytoplankton realized temperature niches and local community diversity? (3) How does the strength of phytoplankton mortality modulate the effects of dispersal and temperature variability on realized temperature niches and community diversity? While our model is designed to mimic essential properties of phytoplankton in marine settings, it is general enough to have relevance to other types of metacommunities. The model helps us understand how ubiquitous spatial mass and temporal storage effects in the ocean may play important roles in shaping realized niches and community diversity. | Model description Phytoplankton biomass (P; mmol P m−3) for each species (i) in each box (j) is controlled by the temperature-dependent growth rate of the species (μ_i,j(T); day−1), the concentration of resources (R_j; 
mmol P m−3), the temperature-dependent mortality m_i,j(T) (day−1) scaled by γ (unitless), and immigration or emigration of species from and to adjacent boxes (i.e., net dispersal):

dP_i,j/dt = μ_i,j(T) [R_j / (R_j + k_i)] P_i,j − γ m_i,j(T) P_i,j + net dispersal_i,j    (1)

The resource concentration in each box is controlled by the influx of nutrients from a deep nutrient pool (R_0 = 0.8 mmol P m−3) at a rate of d (0.864 day−1), representing a chemostat. The resource in each box is depleted by phytoplankton growth, which is represented by Michaelis-Menten nutrient uptake where all species have the same half-saturation nutrient concentration (k_i; mmol P m−3):

dR_j/dt = d (R_0 − R_j) − Σ_i μ_i,j(T) [R_j / (R_j + k_i)] P_i,j    (2)

Initially, all phytoplankton species begin with a concentration of 10−3 mmol P m−3 and the resource concentration in each box starts at 10−3 mmol P m−3. Phytoplankton growth for each species in each box (here we drop the j subscript) is a function of temperature (Thomas et al., 2012):

μ_i(T) = a e^(bT) [1 − ((T − z_i) / (w/2))^2]    (3)

where the constants a and b are empirically derived values that control the exponential increase of growth rate with temperature, and the trait parameters (z_i and w) control the species-specific response to temperature. The values for a, the growth rate at 0°C (0.81 day−1), and b, the exponential increase in growth rate with temperature (0.0631 °C−1), are taken from empirical analyses (Bissinger et al., 2008) and are commonly utilized (Smith et al., 2021; Thomas et al., 2012). All model species have the same niche width (w; 10°C), which is roughly the niche width of observed North Atlantic phytoplankton species (Irwin et al., 2012). We created a vector of 45 unique z_i values ranging from −4 to 40°C at 1°C increments and calculated a thermal growth curve for each z_i across a gradient of temperatures between −4 and 50°C at 0.1°C increments. 
This gradient extends beyond realistic temperature ranges to ensure full coverage. For each growth curve, we calculated the temperature where growth was maximal to define the optimal temperature (T_opt; °C). Thus, each model phytoplankton species has a unique fundamental temperature niche and optimum temperature, but an equivalent niche width (Figure 1a). The rate of mortality (m_i(T); day−1) increases with temperature, similar to phytoplankton maximum growth rates:

m_i(T) = a e^(bT)    (4)

where a and b are the same constants as in Equation (3). We test three different scaling factors (γ), 0.05, 0.1, and 0.2, in order to observe how low, intermediate, and high mortality influence the resulting realized temperature niches (γ m_i(T); Figure 1a). Temperature-dependent mortality has been used in other modeling studies (Thomas et al., 2012), and coarsely represents the increase of predator growth rates (e.g., Vidal, 1980) and phytoplankton respiration rates with increasing temperature (e.g., Brown et al., 2004). Each model box had a climatological seasonal temperature cycle derived from daily sea surface temperature data from the National Oceanic and Atmospheric Administration Optimum Interpolation SST dataset (NOAA OISST; https://www.ncei.noaa.gov/products/optimum-interpolation-sst; Reynolds et al., 2002). For each box, the daily temperature was averaged over all longitudes and over 1982-2010, then interpolated to the model time step (Figure 1b). For model experiments with steady temperatures, we calculated the mean temperature from the seasonal cycle and set that as the constant temperature for the model box at every time point. 
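Assuming the common Thomas et al. (2012) parameterization of the thermal growth curve, an exponential envelope multiplied by a quadratic niche term, the optimal temperature for each z_i can be located numerically on the −4 to 50°C grid described above. A minimal sketch (the functional form is an assumption taken from that literature):

```python
import math

A, B, W = 0.81, 0.0631, 10.0  # growth at 0°C (day^-1), exponent (°C^-1), niche width (°C)

def growth(T, z):
    """Thermal growth curve: exponential envelope times a quadratic niche
    term centered near the trait parameter z (Thomas et al. 2012 form, assumed)."""
    return A * math.exp(B * T) * (1.0 - ((T - z) / (W / 2.0)) ** 2)

# Temperature grid: -4 to 50°C at 0.1°C increments (541 points)
temps = [-4.0 + 0.1 * k for k in range(541)]

def t_opt(z):
    """Optimal temperature: grid point where growth is maximal."""
    return max(temps, key=lambda T: growth(T, z))

for z in (0, 20, 40):
    To = t_opt(z)
    print(z, round(To, 1), round(growth(To, z), 3))
```

Because the exponential envelope tilts the curve, the numerical optimum T_opt sits slightly above z, which is why T_opt is computed on the grid rather than taken as z itself.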
The rate of dispersal between any two model boxes (χ; day−1) is determined from the horizontal eddy diffusivity (K_H; m2 s−1) in the ocean and the distance between the boxes (Δy; m):

χ = K_H / Δy^2    (5)

K_H ranges from 10^1 to 10^4 m2 s−1 in the ocean (Abernathey & Marshall, 2013), and we tested four different increments of K_H for increasing diffusivity: 10^0, 10^1, 10^2, and 10^3 m2 s−1, excluding the highest diffusivity values (10^4 m2 s−1), as they are found only in restricted areas, and including a very low diffusivity simulation (10^0 m2 s−1). Δy increases with the distance between boxes; χ increases with K_H, decreases with Δy, and was converted into units of day−1. Net dispersal for each species i is the balance between immigration from all other boxes (there are b boxes) and emigration to all other boxes, such that net dispersal = immigration − emigration. For each species in box 1 (j = 1), net dispersal is

net dispersal_i,1 = Σ (from j = 2 to b) χ_1,j (P_i,j − P_i,j=1)    (6)

where χ_1,j is the dispersal rate between box 1 and box j. Immigration depends upon the biomass of species i in box j (P_i,j), whereas emigration depends upon the biomass of species i in box 1 (P_i,j=1). Net dispersal for all other boxes was calculated in a similar manner. Model simulations were run for 50 years with a time step of 3 h, and we present results averaged over the last 5 years of the model integration. 
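A single forward-Euler step of the box model sketched above, combining temperature-dependent growth, Michaelis-Menten uptake, temperature-scaled mortality, and net dispersal (immigration minus emigration), might look like this. The box count, temperatures, half-saturation constant, and dispersal rates are illustrative assumptions, not the study's configuration:

```python
import math

A, B, W, GAMMA = 0.81, 0.0631, 10.0, 0.1
K_HALF, D_CHEMO, R0 = 0.05, 0.864, 0.8  # mmol P m^-3, day^-1, mmol P m^-3 (assumed/stated)

def mu(T, z):
    """Temperature-dependent growth (Thomas et al. 2012 form, assumed)."""
    return A * math.exp(B * T) * (1.0 - ((T - z) / (W / 2.0)) ** 2)

def mortality(T):
    """Temperature-dependent mortality, scaled by gamma."""
    return GAMMA * A * math.exp(B * T)

def step(P, R, T, z, chi, dt=0.125):
    """One Euler step; P[j], R[j] per box, chi[j][k] is the dispersal rate (day^-1)."""
    nb = len(P)
    newP, newR = [], []
    for j in range(nb):
        uptake = mu(T[j], z) * R[j] / (R[j] + K_HALF) * P[j]
        disp = sum(chi[j][k] * (P[k] - P[j]) for k in range(nb) if k != j)
        newP.append(P[j] + dt * (uptake - mortality(T[j]) * P[j] + disp))
        newR.append(R[j] + dt * (D_CHEMO * (R0 - R[j]) - uptake))
    return newP, newR

# Two boxes, one species (z = 20°C), symmetric dispersal
P, R = [1e-3, 1e-3], [1e-3, 1e-3]
T, chi = [18.0, 22.0], [[0.0, 0.01], [0.01, 0.0]]
for _ in range(8):  # one model day at a 3-h time step
    P, R = step(P, R, T, z=20.0, chi=chi)
print(P, R)
```

With immigration and emigration balanced as chi * (P_k − P_j), total biomass is only redistributed by dispersal, matching Equation-style net-dispersal bookkeeping described above.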
| Estimating fundamental and realized niches After 50 years, the model species in each box were categorized as extant if: (1) the concentration in the last year of the model was greater than 10−4 times the maximum concentration for that species, and (2) the difference between yearly average phytoplankton concentrations was approximately zero for the last 5 years of the model (dP/dt ≅ 0). Species not meeting these criteria were considered, in practical terms, extirpated. The realized temperature niche for each surviving species was calculated by fitting a nonparametric kernel density estimate to the biomass (P) as a function of temperature (T) for each species across all boxes. Using a kernel density estimate allowed for a continuous curve estimate over the finite range of temperatures in the model and made it simple to calculate the maximum and width of the curve without introducing bias via parameterization (Antell et al., 2021; Broennimann et al., 2012; Smith et al., 2021). From the curve fit, the modeled realized optimal temperature (T_opt^R) and niche width (W_R) were calculated as follows: the optimal temperature (T_opt^R) is defined as the temperature (T) where biomass is at its maximum (P_max), and the niche width (W_R) is defined as the difference between the 1st and the 99th percentiles of the temperature distribution. To compare the modeled and fundamental niche widths, we calculated the ratio

δW = (W_R − W_F) / W_F

W_F is equal to 10°C for all species in the model (Equation 3). A positive (negative) value of δW means that the modeled realized temperature niche is wider (narrower) than the fundamental temperature niche width. To compare the modeled and fundamental temperature optimums, we calculated the analogous ratio

δT_opt = (T_opt^R − T_opt^F) / T_opt^F

T_opt^F varies across model species (see Equation 3). A positive (negative) value of δT_opt means that the modeled realized temperature optimum is warmer (colder) than the fundamental temperature optimum. 
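The realized-niche estimation described above can be sketched with a small biomass-weighted Gaussian kernel density estimate; the bandwidth, grid resolution, and example data are assumptions, not the study's values:

```python
import math

def weighted_kde(xs, weights, grid, bw=1.0):
    """Biomass-weighted Gaussian kernel density estimate evaluated on a grid."""
    wsum = sum(weights)
    norm = wsum * bw * math.sqrt(2 * math.pi)
    return [sum(w * math.exp(-0.5 * ((g - x) / bw) ** 2)
                for x, w in zip(xs, weights)) / norm for g in grid]

def realized_niche(temps, biomass, bw=1.0):
    """T_opt^R = temperature of the density maximum;
    W_R = span between the 1st and 99th percentiles of the density."""
    grid = [min(temps) - 3 + 0.1 * k
            for k in range(int((max(temps) - min(temps) + 6) / 0.1) + 1)]
    dens = weighted_kde(temps, biomass, grid, bw)
    t_opt = grid[max(range(len(grid)), key=lambda i: dens[i])]
    total = sum(dens)
    cum, lo, hi = 0.0, grid[0], grid[-1]
    for g, d in zip(grid, dens):
        cum += d / total
        if cum <= 0.01:
            lo = g
        if cum < 0.99:
            hi = g
    return t_opt, hi - lo

# Hypothetical box temperatures and one species' biomass across boxes
temps = [10, 14, 18, 22, 26]
biomass = [0.01, 0.2, 1.0, 0.3, 0.02]
t_opt, width = realized_niche(temps, biomass)
print(round(t_opt, 1), round(width, 1))
```

The density maximum and percentile span play the roles of T_opt^R and W_R, from which δW and δT_opt follow by comparing against the fundamental values.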
| Diversity metrics Average diversity over the last 5 years of the model (S) for each box was calculated by averaging the number of species present at each time point in the last 5 years of the model, using the two criteria outlined above. Total diversity (S_T) in each box was calculated by summing the total number of unique species present at any time in the last 5 years of the model. | Model experiments We conducted four model experiments, outlined in Figure 2, that tested ecological outcomes in phytoplankton community models run with: (E1) no spatial mass or temporal storage effects (a control experiment); (E2) spatial mass effects only; (E3) temporal storage effects only; and (E4) combined spatial mass and temporal storage effects. In Experiment 1 (E1; Figure 2a), we implemented the phytoplankton community model with a constant temperature in each box, determined by the mean temperature throughout the year at that location, and no dispersal. We hypothesized that competitive exclusion would lead to one dominant species in each box, and that the surviving species would be the one with the optimal temperature closest to the yearly average. We refer to this model experiment as the control. 
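The extant-species criteria and the box diversity count (S) defined above can be sketched as follows, using hypothetical yearly-mean biomass series:

```python
def extant(series, max_conc, tol=1e-6):
    """Criteria sketched above: last-year concentration above 1e-4 of the species'
    maximum, and yearly averages approximately steady over the last 5 years."""
    last_year = series[-1]
    steady = all(abs(series[i + 1] - series[i]) < tol * max(series)
                 for i in range(len(series) - 5, len(series) - 1))
    return last_year > 1e-4 * max_conc and steady

# Each series is a yearly-mean biomass trajectory for one species in one box.
box = {
    "sp_cold": [0.5, 0.5, 0.5, 0.5, 0.5, 0.5],      # persists at steady state
    "sp_warm": [0.4, 0.2, 0.05, 1e-6, 1e-8, 1e-9],  # declines toward extirpation
}
S = sum(extant(vals, max(vals)) for vals in box.values())
print(S)  # 1
```

Summing the unique species that pass this test at any time over the last 5 years would give the total diversity S_T rather than the average S.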
In Experiment 2 (E2; Figure 2b), we kept a constant temperature in each box, as in E1, but allowed model species to disperse between boxes with varying dispersal strengths. We hypothesized that allowing species to disperse between boxes would increase realized niche widths and diversity relative to the control experiment (E1; Figure 2a). Additionally, we hypothesized that as dispersal magnitude increases, realized temperature niche widths would increase beyond the fundamental temperature niche widths, such that δW > 0, and species diversity within each box would increase. (Figure 2 caption: The left column shows the yearly temperature cycle for cold boxes (solid lines) and warm boxes (dashed lines) in each experiment; experiments with no temperature variability (E1 and E2) show constant temperature over time, and experiments with dispersal (E2 and E4) have arrows between boxes representing dispersal. The middle and right columns show the fundamental niches and hypothesized realized niches for two species, one with a colder optimal temperature that would survive in the cold box (solid line) and one with a warmer optimal temperature that would be found in the warm box (dashed line).) In Experiment 3 (E3; Figure 2c), we allowed temperature within each box to vary seasonally, but did not allow dispersal between boxes. The seasonal cycle of temperature within each box was tied to observations. Relative to the control experiment (E1; Figure 2a), we hypothesized that allowing temperature to vary seasonally would increase realized niche widths and increase diversity. Additionally, we hypothesized that boxes with larger temperature amplitudes would support a greater number of species and have species with wider realized temperature niches relative to boxes with smaller temperature amplitudes. 
In Experiment 4 (E4; Figure 2d), we allowed model temperature to vary seasonally in each box and for species to disperse between boxes. We hypothesized that the combined influence of increasing dispersal and temperature variability would result in realized temperature niches that were wider than fundamental temperature niches (δW > 0) and would result in the highest diversity across all experiments.

In E2-E4, we explored model sensitivity to changing the strength of phytoplankton mortality (γ) and horizontal dispersal (κ). We explored the sensitivity of model results to the strength of phytoplankton mortality because it influences phytoplankton net growth rate and consequently competitive dynamics within each box. Additionally, horizontal diffusivity influences the rate and magnitude of immigration and emigration within a community, influencing the overall competitive dynamics within each box.

| (E1) control experiment with no mass effects

In E1, model temperature was constant, there was no dispersal between boxes, and we tested three mortality scaling factors (γ = 0.05, 0.1, and 0.2). In this case, only one species survived in each model box (Figure 3). The species with an optimal temperature (T_opt,F) closest to the mean temperature of the environment had the highest net growth rate and outcompeted all other species. In this experiment, it was not possible to calculate a realized temperature niche for each species, as each species survived in only one box with one average temperature.

| (E2) spatial mass effects only

In E2, model temperature was constant, but phytoplankton dispersed between boxes. We also explored a range of dispersal strengths (κ) and phytoplankton mortality scaling factors (γ) in order to examine the effects of these mechanisms on model communities.
Dispersal increased the number of species present in each box compared to the control (E1) with no dispersal (Figure 4); in this illustrative example, dispersal magnitude (K_H) was 10² m² s⁻¹ and the mortality scaling constant (γ) was 0.05. Rather than only one species being present per box, as in E1, typically four to ten species were present per model box in E2, due to an influx of phytoplankton from neighboring model areas with different average but constant temperatures. Even with dispersal, the model phytoplankton with an optimum temperature closest to the average temperature in that box was the most abundant (dotted lines in Figure 4).

The average number of model species surviving in each box, or S, increased with dispersal magnitude (Figure 5). When the mortality rate was lowest (γ = 0.05; Figure 5a), with low dispersal (K_H = 10⁰ m² s⁻¹), the modeled latitudinal gradient in S was weak and S in each box was low. At the highest tested rate of dispersal (K_H = 10³ m² s⁻¹), S was high but nearly uniform across the boxes, meaning all species were dispersed rapidly enough to be present everywhere (Figure 5a). At intermediate levels of dispersal, S was greatest in mid-latitudes due to the accumulation of both warm- and cold-adapted species in these areas. Higher phytoplankton mortality (Figure 5b,c) yielded qualitatively similar changes in S with latitude and dispersal strength, except that the ubiquity of high S across latitude seen in Figure 5a was not found and diversity was concentrated in the mid-latitudes. Across all mortality scaling strengths (γ = 0.05, 0.1, and 0.2), realized temperature niche width (W_R) increased with increasing dispersal strength, but temperature niche optimums (T_opt,R) were unaffected (Figure 6). When dispersal was low, realized temperature niche widths were consistently narrower than fundamental temperature niche widths (δW < 0; Figure 6a). Across all mortality scaling factors, δW increased as dispersal magnitude increased. On
average, δT_opt values were negative across all mortality strengths and dispersal magnitudes, meaning the realized temperature optimums were colder than the fundamental temperature optimums, and varying dispersal and mortality did not change this (Figure 6b). Polar species with optimal temperatures close to or less than zero that thrive in the most extreme boxes end up with high δT_opt values, as they disperse unilaterally to adjacent boxes with warmer temperatures, thus increasing their realized temperature optimums (Figure 6b).

| (E3) temporal storage effects only

In E3, model temperature followed a box-specific seasonal cycle, but phytoplankton were not able to disperse between boxes. Boxes with high seasonal temperature amplitudes (Figure 7c-f) had a greater number of species present compared to the control with no temperature variability (Figure 3). As a result of temperature varying seasonally, phytoplankton biomass was no longer constant over a model year, and when more than one species was present, the community dynamics had a cyclical pattern in which species increased and decreased in abundance following changes in temperature. The model phytoplankton species with an optimum temperature closest to the average temperature in that box was typically one of the most abundant species (dotted lines in Figure 7).
Compared to the control experiment with steady temperature (E1), seasonally varying temperature increased both the average number of species present (S) and the total number of species present (S_T) in the last 5 years of a 50-year model integration across all phytoplankton mortality strengths (Figure 8). As temperature amplitude increased, the number of species present in the model increased when mortality was low (γ = 0.05). As the mortality scaling factor increased, the average number of species present decreased compared to low mortality (Figure 8a). The total number of species present (S_T) increased as seasonal temperature variability increased and was similarly dampened by an increase in the mortality scaling factor (Figure 8b).

Increasing temperature amplitude increased the realized temperature niche widths of the model phytoplankton species (Figure 9a) but had no effect on realized temperature optimums (Figure 9b). As mortality increased, there was a slight decrease in δW, particularly when temperature amplitudes were high (Figure 9a), but there was no influence of mortality on δT_opt (Figure 9b).
| (E4) spatial mass and temporal storage effects

In E4, the model temperature followed a box-specific seasonal cycle and phytoplankton dispersed between boxes. Across all mortality scaling factors (γ = 0.05, 0.1, and 0.2), realized temperature niche widths (W_R) increased with increasing seasonal temperature variability and increasing dispersal magnitude (Figure 12a). When dispersal magnitude was highest (K_H = 10³ m² s⁻¹), almost all realized temperature niche widths were wider than the fundamental temperature niche widths (δW > 0; Figure 12a). When dispersal magnitude was lower (i.e., K_H = 10⁰ m² s⁻¹), realized temperature niche widths were wider than fundamental temperature niche widths only when temperature amplitude was higher. Low dispersal magnitude combined with low temperature variability resulted in realized temperature niche widths that were narrower than the fundamental temperature niches (δW < 0; cooler colors in Figure 12a).

For any given dispersal magnitude, the maximum δW decreased as mortality increased, except when dispersal was highest and some species were able to disperse across the full range of modeled temperatures (Figure 12a). As dispersal magnitude and temperature variability increased, there was greater variability in the realized optimal temperature across species, but overall there was no effect of dispersal, temperature variability, or mortality scaling factor on realized temperature optimums (δT_opt; Figure 12b). We still find, as in E2 (Figure 6b), that polar species with low T_opt,F values present high δT_opt values due to being advected only into warmer waters.
Figure 13 shows a summary of how niche widths (δW), temperature niche optimums (δT_opt), average diversity (S), and total diversity (S_T) vary across Experiments 1-4 (E1-E4) for a range of dispersal strengths. Realized temperature niche widths (Figure 13a) increased with increasing dispersal strength and temporal temperature variation. Realized temperature niche optimums, however, were not strongly affected by either dispersal or temporal temperature variability (Figure 13b). Both total (Figure 13c) and average diversity (Figure 13d) increased with increasing dispersal strength and temperature variability.

| DISCUSSION

Using a simple metacommunity model, we found that increasing dispersal and seasonal temperature variability increased realized niche widths and community diversity but did not affect realized temperature optimums for growth. Here, we discuss temporal storage effects, spatial mass effects, and source-sink dynamics in the model, and how simplifications of the model guide our interpretations of the results.
| Temporal storage effects

When temperature was constant with no dispersal, the model species with a fundamental optimum temperature for growth (T_opt,F) closest to the constant temperature of the box outcompeted all others (Experiment 1, or E1; Figure 3). Similarly, in Experiment 3 (E3), when temperature amplitude was low with no dispersal (Figure 7), the model species with a fundamental optimum temperature for growth (T_opt,F) closest to the mean temperature of the box outcompeted all others. However, in E3 and E4, we found that as seasonal temperature amplitude increased (Figures 8 and 11), regardless of dispersal between boxes, model diversity (S and S_T) and realized temperature niche widths (δW) increased, although there was no effect on the difference between fundamental (T_opt,F) and realized optimum temperatures (T_opt,R) for growth (δT_opt; Figures 9 and 12). These model results, illustrated most clearly in Experiment 3 (E3) but also seen in E4, are caused by a temporal storage effect linked to the seasonal changes in temperature, as has been studied previously (Chesson, 2000; Descamps-Julien & Gonzalez, 2005; Kremer & Klausmeier, 2017; Scranton & Vasseur, 2016). The changing temperature allowed for a temporal succession of model species with different temperature optimums for growth, and because model species persisted beyond when their temperature-dependent specific growth rate was optimum, in many cases more than one model species existed at the same time (allowing for higher S). The temporal succession also facilitated a greater number of species present at some point over the year (higher S_T). The temporal storage effect also caused modeled realized temperature niches within a single model box to be wider than fundamental temperature niches, because species were able to persist well outside their ideal thermal conditions for growth, either due to a weakly positive net growth rate or a long, slow decline from high abundance
conditions during a model "bloom." However, temporal storage effects were weakened by increases in mortality (Figures 8 and 9). When the strength of mortality increased, model species abundance decreased quickly when mortality exceeded growth, and persistence outside of ideal thermal conditions was weaker. In other words, the strength of mortality of model phytoplankton set the strength of the temporal storage effect (Figure 9b).

F I G U R E 6 E2. (a) δW and (b) δT_opt for a range of dispersal strengths (K_H) from low (10⁰ m² s⁻¹) to high dispersal (10³ m² s⁻¹) and phytoplankton mortality scaling constants (γ). δW is the difference between realized and fundamental niche widths, divided by the fundamental niche width (Equation 9). δT_opt is the difference between realized and fundamental temperature optimums, divided by the fundamental temperature optimum (Equation 10). Y-axis values greater than zero mean the realized niche parameter was greater than the fundamental niche parameter. Colors represent low (orange; 0.05), medium (purple; 0.1), and high (pink; 0.2) phytoplankton mortality scaling constants (γ). Each circle within dispersal and mortality combinations represents one model species. The points are jittered along the X-axis to better visualize variations in the Y-axis. Overlaid on the raw data are boxplots to better visualize the differences between model parameter choices. The box represents the 25th and 75th percentiles (bottom and top edges) and the 50th percentile (middle line). The lines are ±1.5 times the interquartile range, estimating the 95% confidence interval. Results are averaged over the last five years of a 50-year model simulation.
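The niche-shift metrics restated in this caption (Equations 9 and 10) are simple relative differences. A minimal sketch, with illustrative values not taken from the paper:

```python
def relative_niche_shift(realized, fundamental):
    """(realized - fundamental) / fundamental, as in Equations 9 and 10."""
    return (realized - fundamental) / fundamental

# Illustrative values: a 10 degC fundamental niche width that widens to
# 12 degC under dispersal gives delta_W = 0.2 (realized wider than fundamental).
delta_W = relative_niche_shift(realized=12.0, fundamental=10.0)

# A realized optimum of 18 degC against a fundamental optimum of 19 degC
# gives a negative delta_T_opt (realized optimum colder than fundamental).
delta_T_opt = relative_niche_shift(realized=18.0, fundamental=19.0)
```

Values greater than zero mean the realized niche parameter exceeds the fundamental one, matching the Y-axis convention in the figure.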
The species in our model did, however, have realized niche optima that were, on average, slightly colder than the fundamental temperature niche optimums (δT_opt < 0; Figure 9b). Previous studies (Kingsolver et al., 2013; Smith et al., 2021) have observed this pattern across a range of organisms. Fundamental temperature niches are typically, and in this model, assumed to have a left- or negatively-skewed curve where growth above the optimum temperature decreases rapidly compared to growth below the optimum, as measured from laboratory experiments where growth is calculated from incubations at constant temperatures (Anderson et al., 2021; Norberg, 2004; Thomas et al., 2012). Jensen's Inequality suggests that, in nonlinear systems, time-averaged growth under variable conditions differs from growth under average conditions (Bernhardt et al., 2018). In our model, phytoplankton growth decelerates with temperature (i.e., the second derivative of the thermal response curve is negative), leading to realized temperature niche optima that are colder than fundamental temperature optima.

[Figure caption fragment: Species are colored by their T_opt,F values. The dash-dotted lines represent the species that were present in each box with constant temperature and no dispersal (Figure 3). Dispersal for this model run was zero and the phytoplankton mortality scaling factor (γ) was 0.05.]
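This consequence of Jensen's Inequality can be checked numerically. The sketch below uses an assumed Norberg-style left-skewed thermal response (all parameter values are illustrative, not the paper's): averaging growth over a seasonal temperature cycle shifts the best mean temperature below the fundamental optimum.

```python
import numpy as np

# Assumed left-skewed thermal response: an exponential envelope times a
# downward parabola, so curvature is more negative on the warm side.
def mu(T, z=19.0, w=10.0, a=0.6, b=0.06):
    return a * np.exp(b * T) * (1.0 - ((T - z) / w) ** 2)

T_grid = np.linspace(0.0, 35.0, 3501)
T_opt_fund = T_grid[np.argmax(mu(T_grid))]   # fundamental optimum

# Time-averaged growth at each candidate mean temperature under a
# +/- 5 degC seasonal cycle.
season = 5.0 * np.sin(np.linspace(0.0, 2.0 * np.pi, 365))
means = np.linspace(5.0, 30.0, 2501)
avg_growth = np.array([mu(m + season).mean() for m in means])
T_opt_real = means[np.argmax(avg_growth)]    # realized (time-averaged) optimum

# Because the curve's second derivative is negative near the optimum, and
# more strongly so on the warm side, T_opt_real ends up colder than T_opt_fund.
```

For these assumed parameters the realized optimum sits roughly 1°C below the fundamental optimum, the same direction of shift reported for the model species.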
| Spatial mass effects

We found that when temperature was constant (E2), increasing dispersal strength increased model diversity within each location (S and S_T; Figure 5) and increased realized temperature niche widths (W_R) of model species compared to their fundamental niches (δW; Figure 6a), although there was no effect on realized optimum temperatures for growth (T_opt,R) for model species compared to their fundamental niches (δT_opt; Figure 6b).

[Figure caption fragment: ... (0.2) phytoplankton mortality scaling constants (γ). δW is the difference between realized and fundamental niche widths, divided by the fundamental niche width (Equation 9). δT_opt is the difference between realized and fundamental temperature optimums, divided by the fundamental temperature optimum (Equation 10). Points are realized niche parameter values for each species in the model, calculated across all boxes where that species was present and considered present. Dispersal in this case was zero. Results are from the last 5 years of a 50-year integration of the model. The points in (a) were fit with a GAM to illustrate the relationships between temperature amplitude and δW.]
These dynamics are consistent with metacommunity theory (Leibold et al., 2004; Ward et al., 2021). Spatial mass effects describe the physical displacement of species across spatially separated patches (Leibold, 1997; Leibold et al., 2004; Shoemaker & Melbourne, 2016; Steiner & Leibold, 2004). When the rates of dispersal were very low in the model, species sorting dominated ecological outcomes (Figures 4-6). As the rate of dispersal increased, dispersal reintroduced species faster than competition removed them, such that overall diversity (S and S_T; Figure 5) was higher than in the low-dispersal case (Leibold et al., 2004; Shoemaker & Melbourne, 2016). In addition to an increase in diversity, we found that increasing dispersal magnitude, with or without the addition of temporal variability, increased realized niche widths (W_R) compared to their fundamental niches (δW; Figure 6a). At the highest rates of dispersal, realized niche widths were wider than fundamental niche widths (δW > 0; Figure 6a), illustrating how spatial mass effects can rescue or buffer species from local extirpation. In regions of the ocean with high dispersal, spatial mass effects could be driving a large portion of the observed diversity, and species are likely to be present in the community even though they may have low or negative net growth there (e.g., Barton et al., 2010; Clayton et al., 2013).

[Figure caption fragment: ... as indicated in Figure 1c. Temperature (°C) in each illustrative box varied seasonally and differed between boxes. The surviving species are colored by their corresponding optimal temperature (T_opt,F) values. The dash-dotted lines represent the species that were present in each box with constant temperature and no dispersal (E1; Figure 3). Dispersal magnitude (K_H) for this model run was 10² m² s⁻¹ and the mortality scaling constant (γ) was 0.05.]
For example, over a matter of days, Prochlorococcus in the Gulf Stream can be moved hundreds of kilometers and ultimately encounter conditions outside their expected thermal tolerance (Cavender-Bares et al., 2001). Recent field studies have confirmed that the composition of marine microbial communities is strongly impacted by dispersal, not just local environmental conditions (Villarino et al., 2022).

F I G U R E 11 E4. (a-c) S and (d-f) S_T across latitude (Y-axis) and across increasing dispersal strength (X-axis) for low (0.05), medium (0.1), and high (0.2) phytoplankton mortality scaling factors (γ), respectively. S is the average number of species present at any point during the last 5 years of a 50-year integration of the model, while S_T is the total number of species present during the last 5 years. Dispersal strength scaled with the horizontal diffusivity in the ocean (K_H), which we varied from 10⁰ (low dispersal) to 10³ (high dispersal) m² s⁻¹.

F I G U R E 12 E4. (a) δW and (b) δT_opt as a function of temperature amplitude (°C) and dispersal magnitude ranging from low dispersal (10⁰ m² s⁻¹) to high dispersal (10³ m² s⁻¹) for low (circles; 0.05), medium (squares; 0.1), and high (triangles; 0.2) phytoplankton mortality scaling constants (γ). δW is the difference between realized and fundamental niche widths, divided by the fundamental niche width (Equation 9). δT_opt is the difference between realized and fundamental temperature optimums, divided by the fundamental temperature optimum (Equation 10). Points are colored by the full range of temperatures an organism experienced across the whole domain in the last 5 years of a 50-year integration of the model.
We found no clear relationship between dispersal strength and realized optimum temperatures for growth in the model (T_opt,R; Figure 6b). This was likely because dispersal in the model was equal in all directions, such that changing the model dispersal rates did not appreciably change the realized optimum temperatures for growth for each model species.

The effects of dispersal on diversity (S and S_T) and realized niche widths (δW) were dampened as mortality increased (Figures 5 and 6). For the same dispersal magnitude, we found less diversity and narrower realized temperature niches with increasing mortality. Lower model mortality rates allow model organisms to spread and persist further from their source, whereas higher mortality rates tend to minimize the ecological significance of spatial mass effects on model phytoplankton assemblages.

[Figure caption fragment: Colors represent the different dispersal magnitudes that we tested, ranging from low dispersal (10⁰; light blue) to high dispersal (10³; dark blue). Boxes in E1 and E3 are not colored because there was no dispersal between boxes. Points are jittered along the x-axis for easier visualization. δW is the difference between realized and fundamental niche widths, divided by the fundamental niche width (Equation 9). δT_opt is the difference between realized and fundamental temperature optimums, divided by the fundamental temperature optimum (Equation 10). S is the average number of species present at any point during the last 5 years of a 50-year integration of the model, while S_T is the total number of species present during the last 5 years.]
| Source-sink dynamics

Source-sink dynamics are common in ecological communities connected via dispersal (Gonzalez & Holt, 2002; Holt, 1985; Holt et al., 2003; Leibold et al., 2004; Roy et al., 2005). "Source" populations occur where conditions are favorable for the population to exist, and "sink" populations occur where they would not persist without dispersal from other locations (the rescue effect). These source-sink dynamics also control model outcomes, which we discuss further here.

Spatial and temporal mass effects, independently, increased model diversity and realized niche widths (E2 in Figures 5 and 6, and E3 in Figures 8 and 9). The model indicated that diversity (S and S_T) was higher when the temporal and spatial mass effects were combined (E4), particularly in regions where temperature variability was high (Figure 14a). The model also illustrated how certain areas where an organism has high fitness and biomass can serve as a source of biomass for adjacent areas where the same organism's fitness is relatively low. These source-sink dynamics underpin the widening of the realized temperature niche relative to the fundamental niche (δW) when temporal and spatial mass effects were combined (in this case for just one illustrative model species with an optimum temperature for growth of 19°C; Figure 14b). The source location occurs where the fitness of a particular organism is relatively high, and the sink is where the population of that organism is sustained by dispersal, but these source and sink locations change over the year. For example, consider again the model phytoplankton with an optimum temperature for growth of 19°C (Figure 14c,d). In February and August, respectively, its biomass (blue lines) is maximum at 28° N and 42° N.
The actual growth rate at these moments (black lines) did not precisely coincide latitudinally with the biomass peaks because of temporal lags between maximum growth rate and biomass. The areas of high biomass had negative net transport, meaning they acted as a source of biomass for adjacent areas. These adjacent areas were a sink of biomass where local fitness was relatively low and the population was sustained by dispersal from other areas. Thus, in marine settings, a species may be present in space and time even when its fitness is relatively low, due to either or both temporal storage and spatial mass effects, provided that the rates of mortality are sufficiently low to allow for temporal persistence and spatial dispersal of organisms.

| Model simplifications and their implications

We created a simple metacommunity model to study how spatial mass and temporal storage effects shape realized temperature niches and community diversity. However, given the idealized nature of the model, we did not expect model species distributions or diversity gradients to closely match observed, global-scale patterns. Here, we briefly discuss key model simplifications in traits, trophic relationships, ocean circulation, mutations, and stochasticity, and how they make direct comparison with ocean observations challenging.
Temperature variability in this model was simplified from observations to create a repeating and smooth seasonal cycle within a given 1° latitude band averaged across all longitudes. Thus, temperature variations occurring at higher (e.g., internal waves, storms, and upwelling events) and lower frequencies (e.g., interannual variations and anthropogenic climate change) were not considered. Environmental variations at these unrepresented scales clearly influence community structure and competitive outcomes (Barton et al., 2020; Vasseur et al., 2014). In addition, all species in the model were seeded with the same fundamental niche width (10°C), which we based upon the average of a range of observed niche widths in the North Atlantic (Irwin et al., 2012). This choice ignores real variations in niche widths and their associated hypothetical trade-offs, such as among temperature generalists and specialists (Kingsolver, 2009).

Additionally, the model did not resolve important trait variations, such as cell size, nutrient uptake affinity, and nutrient storage, and neither did the model explicitly resolve losses to zooplankton grazing, viral lysis, or other factors. There are trade-offs between competitive traits for nutrient acquisition, cell size, and light availability that shape an organism's ecological niche (Edwards et al., 2012, 2013; Litchman et al., 2012). For simplicity, however, we ignored these important ecological dimensions to focus on the univariate temperature niche. These omitted traits mean, for example, that the model dynamics do not accurately represent seasonal depletion of nutrients due to phytoplankton blooms (e.g., Edwards et al., 2012) or biogeographic and diversity patterns tied to nutrients, light, or other factors (e.g., James et al., 2022). Phytoplankton mortality in the model increased exponentially with temperature (Equation 4), using the same exponents a and b as reported for the temperature sensitivity of growth (e.g., Equation
3). However, while this simplification was desirable in order to have growth and mortality roughly matched across a wide range of temperatures for model phytoplankton, recent studies have shown that the temperature dependence of mortality may differ from that of growth (e.g., Baker & Geider, 2021; Demory et al., 2017).

Our model utilized isotropic dispersal, but ocean currents are much more dynamic both temporally and spatially. More realistic patterns of dispersal including, for example, wind-driven currents such as the Gulf Stream, may produce more plausible source and sink areas for microbial populations (e.g., Ward et al., 2021) and hotspots of diversity where adjacent communities mix together (e.g., Clayton et al., 2013).

The model did not represent mutations or demographic stochasticity, although these processes play important roles in natural systems. Selection on new mutations and existing intraspecific variability can lead to changes in species niches over time (Collins et al., 2014; Lohbeck et al., 2012). Our model included just one phenotype per model species, defined by its temperature niche, that was able to persist in some cases in suboptimal growth conditions due to spatial mass and temporal storage effects. However, marine phytoplankton species often have considerable standing genetic variation (e.g., Biller et al., 2015), which widens the fundamental and realized niche for that species (Smith et al., 2021). Some of this standing genetic diversity may be maintained by dispersal and temporal environmental variation. The model also did not include demographic stochasticity (Lande, 1993; Shoemaker et al., 2020), which is critically important for the dynamics of small populations in particular. Ward et al.
(2021) found that demographic stochasticity did not significantly affect microbial populations where they were abundant, for example, in their core ranges, but did increase the chance of local extinction when microbial populations were very small. As such, our model is optimized for studying microbial dispersal between nearby regions and persistence through time, rather than through strong selection gradients (e.g., a cold water-adapted cell passing through the equatorial zone) that dramatically lower population abundance. Historical contingencies and priority effects (e.g., Sefbom et al., 2015) are therefore not resolved in our model.

| CONCLUSION

Our original motivation for undertaking this modeling study was to better understand how and why realized temperature niches for the marine cyanobacterium Prochlorococcus are wider than fundamental temperature niches. In the model, temporal storage and spatial mass effects each generated increased diversity and realized temperature niche widths. Moreover, the combined effects created realized temperature niches that exceeded the fundamental temperature niches and further increased diversity. This model was idealized but provided a useful framework for asking how physical processes such as temperature variability and dispersal shape phytoplankton realized temperature niches. Much of the research focusing on microbial diversity in the oceans so far has neglected the roles that spatial mass and temporal storage effects may play in shaping diversity and biogeography, and our model helps illustrate that these processes may be important under certain ocean conditions. For example, the seasonal temporal storage effects are likely to be strongest in regions with strong seasonal variations in temperature, such as mid-latitude and coastal ocean regions. Because the strength of temporal storage effects decreased with increasing mortality rates in the model, the ecological importance of temporal storage effects may be heightened specifically during
winter and spring, when predators are relatively scarce due to overwintering (Mauchline, 1998) or dilution by deep mixed layers (Behrenfeld & Boss, 2014). Spatial mass effects are likely strongest where horizontal advection and mixing are highest, such as in western boundary currents. Like temporal storage effects, the ecological importance of spatial mass effects may be highest when rates of phytoplankton mortality are lowest.

While further observational and modeling work can constrain the roles that temporal storage and spatial mass effects play in setting distributions of Prochlorococcus ecotypes, our model suggested that these mechanisms are likely to be influential for the ecology of these and other microbial taxa.

Beyond just understanding the distribution of species in the ocean, these results have direct implications for species distribution modeling. Species distribution models, or SDMs, are often used to predict temporal and spatial distributions of species based upon (usually limited) data describing the realized niche of a particular species and more widespread data describing environmental conditions (Elith & Leathwick, 2009). Such models are increasingly used to understand patterns of biogeography in marine plankton, and how they may change in response to climate warming (e.g., Barton et al., 2016; Brun et al., 2015; McGinty et al., 2021). The influence of temporal storage and spatial mass effects on realized niches, and the high likelihood that the ecological impact of these processes changes in space and time, represent yet another challenge for applying species distribution models to make biogeographic and ecological projections in response to climate change.
Finally, this simple model highlights how two fundamental processes acting ubiquitously in the ocean, namely environmental and population change through time and the dispersal of organisms, play an important and often overlooked role in shaping marine microbial spatial and temporal patterns of distribution, realized niches, and community diversity.

A latitudinal transect from 80° S to 80° N was divided into 159 model boxes, each 1° of latitude wide, and each box was seeded with the same initial community comprised of 45 unique phytoplankton species (Figure 1). Model phytoplankton have equivalent affinity for nutrients but different temperature functional responses. The model does not consider light and how it impacts phytoplankton growth. The temperature conditions in each box are informed by sea surface temperature observations, and the dispersal rates between boxes are calculated based on estimated rates of horizontal eddy diffusivity in the ocean. Here, we outline the equations and assumptions used in the model for phytoplankton competition, nutrient supply, and dispersal between boxes.
F I G U R E 1 Schematic of the model. (a) The model resolves n different phytoplankton species (in this case 45), each with their own unique fundamental temperature niche (see (a) inset). The limiting resource (R) is supplied from deep water with a high and steady resource concentration (R_0) at a constant rate (d; dark red dashed line). Phytoplankton loss is represented by a temperature-dependent mortality term (m(T)), calculated as γ·a·e^(bT), where a·e^(bT) is the maximum growth rate as a function of temperature (solid exponential line in the inset), a and b are empirically determined constants, and γ is a unitless scaling factor. We tested three different scaling factors (γ) to model high (γ = 0.2; long-dash line), intermediate (γ = 0.1; short-dash line), and low (γ = 0.05; dotted line) mortality pressures. (b) Seasonally varying sea surface temperature (SST) for each latitude in the model was derived from daily NOAA climatological SST data averaged across all longitudes and interpolated to the model timescale. (c) The latitudinal transect across the Atlantic Ocean was split into 159 different 1° latitude boxes in the model, where the dispersal rate (κ) is a function of the distance between boxes and horizontal diffusivity. Boxes closer together (κ_near) have stronger exchange rates compared to boxes further apart (κ_far). The four boxes centered on 10°, 35°, 60°, and 75° (colored) are the four latitudes used in Figures 3, 4, 7, and 10 to demonstrate model output under different temperature conditions.
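The box-model update sketched in this schematic can be written compactly. The following is a simplified Euler step under assumed parameter values, omitting the resource (R) dynamics for brevity; all names and numbers here are illustrative assumptions, not the paper's code or parameterization:

```python
import numpy as np

n_box, n_sp = 159, 45
a, b = 0.6, 0.0631        # assumed Eppley-type growth constants
gamma = 0.05              # mortality scaling factor (low-mortality case)
kappa = 0.01              # assumed exchange rate between adjacent boxes (per day)
w = 10.0                  # fundamental temperature niche width (degC)
dt = 0.1                  # time step (days, assumed)

T = np.linspace(-2.0, 28.0, n_box)[:, None]      # temperature per box
T_opt = np.linspace(-2.0, 28.0, n_sp)[None, :]   # species temperature optima

def step(P, T):
    mu_max = a * np.exp(b * T)                    # maximum growth rate a*e^(bT)
    mu = mu_max * (1.0 - ((T - T_opt) / w) ** 2)  # Norberg-style niche (sketch)
    m = gamma * a * np.exp(b * T)                 # mortality m(T) = gamma*a*e^(bT)
    dP = (mu - m) * P
    # isotropic dispersal between adjacent boxes (discrete diffusion)
    dP[1:-1] += kappa * (P[:-2] + P[2:] - 2.0 * P[1:-1])
    dP[0] += kappa * (P[1] - P[0])
    dP[-1] += kappa * (P[-2] - P[-1])
    return np.maximum(P + dt * dP, 0.0)

P = np.full((n_box, n_sp), 1e-3)  # uniform initial seeding of all species
for _ in range(100):
    P = step(P, T)
```

In the full model, κ would be derived from K_H and inter-box distance, temperatures would follow the seasonal SST climatology, and a shared limiting resource would cap growth; this sketch only illustrates how growth, temperature-scaled mortality, and nearest-neighbor dispersal combine in one update.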
(a) Experiment 1 (E1) is run under constant temperature with no dispersal (control); (b) Experiment 2 (E2) is run under constant temperature with dispersal (spatial mass effects only); (c) Experiment 3 (E3) is run with seasonal temperature variation and no dispersal (temporal storage effects only); and (d) Experiment 4 (E4) is run with seasonal temperature variation and dispersal (spatial and temporal storage effects). The fundamental temperature niches are the same across all experiments (middle column), but the realized temperature niche (right column) is shaped by the temperature regime and dispersal. We represent the realized niche of the two species with points in E1 because, with no temperature variability through time and no exchange between boxes, abundance does not vary measurably across temperatures.

FIGURE 3 E1. Average daily temperature (left column: a, c, e, g) and biomass of surviving phytoplankton species (right column: b, d, f, h) in four illustrative model boxes (rows), ranging from colder high latitudes (top) to warmer low latitudes (bottom), in the last 5 years of a 50-year model integration. The four illustrative model boxes represent areas centered on 75° N (a, b), 60° N (c, d), 35° N (e, f), and 10° N (g, h), as indicated in Figure 1c. Temperature (°C) in each illustrative box was constant through time but different between boxes. The surviving species (only one per box in this case) are colored by optimum temperature for growth (T_opt^F).
and phytoplankton dispersed between boxes. Seasonal temperature variability and dispersal promoted greater species diversity in the model when compared to the other model experiments. Boxes with high seasonal variability (Figure 10c-f) supported more species than boxes with low temperature variability (Figure 10a,b,g,h). However, as a result of the combined effects of temperature variability and dispersal, all boxes supported a greater number of species than in the previous experiments, when the model was driven solely by either temperature variability or dispersal (Figures 4 and 7). As dispersal magnitude and seasonal temperature amplitude increased, the average number of model species present in each box (S) and the total number of species present at any time in the last 5 years of the 50-year model run (S_T) increased (Figure 11). When temperature was constant but dispersal increased to 10^2 m^2 s^-1 (E2), the maximum S (Figure 5a) and S_T (Figure 5b) values were 27, 20, and 14 species with low (0.05), medium (0.1), and high (0.2) mortality scaling factors, respectively. When there was no dispersal but seasonally variable temperature (E3), the maximum S values (Figure 8a) were 3.29, 2.59, and 2 species, and the maximum S_T values across any box were 5, 4, and 2 species (Figure 8b) with low (0.05), medium (0.1), and high (0.2) mortality scaling factors. Combining the effects of dispersal and temperature variability (E4) increased the maximum S and S_T values to 30, 30, and 25 species with low (0.05), medium (0.1), and high (0.2) mortality scaling factors (Figure 11).
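The two diversity metrics used throughout these experiments can be computed directly from a biomass time series. A minimal sketch, assuming "present" means biomass above a small threshold (the threshold value itself is an assumption, not stated in the text):

```python
import numpy as np

# S: average number of species present at any time point over a window.
# S_T: total number of species that appear at least once in that window.

def diversity_metrics(biomass, threshold=1e-6):
    """biomass: array of shape (time, species); returns (S, S_T)."""
    present = biomass > threshold        # boolean presence matrix
    S = present.sum(axis=1).mean()       # mean richness per time step
    S_T = present.any(axis=0).sum()      # species ever present in window
    return S, S_T

# Toy series: species 0 always present, species 1 present half the time,
# species 2 never present.
b = np.array([[1.0, 0.0, 0.0],
              [1.0, 2.0, 0.0]])
S, S_T = diversity_metrics(b)   # S = 1.5, S_T = 2
```

In the model runs, S_T is at least as large as S by construction, which is why the combined experiment (E4) can push both toward the pool size of 45.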
model microbes, caused by grazing, viruses, or any other source, is determined in large part by the strength of the temporal storage effect in the model. In contrast, we found no evidence in the model for the temporal storage effect influencing realized optimum temperatures (T_opt^R).

FIGURE 4 E2. Average daily temperature (left column: a, c, e, g) and biomass of surviving phytoplankton species (right column: b, d, f, h) in four illustrative model boxes (rows), ranging from colder high latitudes (top) to warmer low latitudes (bottom), in the last 5 years of a 50-year model integration. The four illustrative model boxes represent areas centered on 75° N (a, b), 60° N (c, d), 35° N (e, f), and 10° N (g, h), as indicated in Figure 1c. Temperature (°C) in each illustrative box was constant through time but different between boxes. The surviving species are colored by their corresponding optimum temperature for growth (T_opt^F). The dash-dotted lines represent the species that were present in each box with constant temperature and no dispersal (Figure 3). Dispersal magnitude (K_H) for this model run was 10^2 m^2 s^-1 and the mortality scaling constant was equal to 0.05.

FIGURE 5 E2. Average model phytoplankton diversity (S) at each model box latitude (Y-axis) for a range of dispersal strengths (X-axis), for (a) low (0.05), (b) medium (0.1), and (c) high (0.2) mortality scaling factors. S is the average number of species present at any time point during the last 5 years of a 50-year model integration. Dispersal strength scaled with the horizontal diffusivity in the ocean (K_H), which we varied from 10^0 (low dispersal) to 10^3 m^2 s^-1 (high dispersal).
FIGURE 7 E3. Average daily temperature (left column: a, c, e, g) and biomass of surviving phytoplankton species (right column: b, d, f, h) in four illustrative model boxes in the last 5 years of a 50-year model integration. The four illustrative model boxes (rows), ranging from colder high latitudes (top) to warmer low latitudes (bottom), represent areas centered on 75° N (a, b), 60° N (c, d), 35° N (e, f), and 10° N (g, h), as indicated in Figure 1c. Temperature (°C) in each illustrative box varied seasonally and differed between boxes. The surviving species are colored by their corresponding T_opt.

Phytoplankton community composition and dynamics are influenced not only by local environmental conditions and ecological processes but also by immigration and emigration from and to other locations (Hellweger et al., 2014; Jönsson & Watson, 2016).

FIGURE 8 E3. (a) S and (b) S_T as a function of temperature amplitude (°C) in each box, for low (orange; 0.05), medium (purple; 0.1), and high (pink; 0.2) phytoplankton mortality scaling constants. Each point represents the diversity metric for a single box plotted against the temperature amplitude of that box. S is the average number of species present at any point during the last 5 years of a 50-year integration of the model, while S_T is the total number of species present during the last 5 years. Dispersal in this case was zero. The lines represent a GAM fit to visualize the change in diversity as temperature amplitude increases.

FIGURE 9 E3. (a) W and (b) T_opt as a function of temperature amplitude (°C), for low (orange; 0.05), medium (purple; 0.1), and high (pink; 0.2) phytoplankton mortality scaling constants.
FIGURE 10 E4. Average daily temperature (left column: a, c, e, g) and biomass of surviving phytoplankton species (right column: b, d, f, h) in four illustrative model boxes (rows), ranging from colder high latitudes (top) to warmer low latitudes (bottom), in the last 5 years of a 50-year model integration. The four illustrative model boxes represent areas centered on 75° N (a, b), 60° N (c, d), 35° N (e, f), and 10° N (g, h), as indicated in Figure 1c.

FIGURE Comparison of (a) W, (b) T_opt, (c) S_T, and (d) S across all four experiments (E1: control; E2: spatial mass effects only; E3: temporal storage effects only; E4: combined effects). Points for each experiment in panels a and b represent niche values for each surviving species, and points in panels c and d represent diversity metric values within each box. Box plots show the 25th and 75th percentiles (bottom and top edges) and the 50th percentile (middle line); the vertical lines are ±1.5 times the interquartile range, estimating the 95% confidence interval.

FIGURE 14 (a) Total diversity present during the last 5 years of a 50-year integration of the model (S_T) across all latitudes under three experiments with low mortality scaling (0.05; see Figure 2): Experiment 2 (E2), dispersal with K_H = 10^2 m^2 s^-1 and constant temperature (green); Experiment 3 (E3), natural temperature variability with no dispersal (yellow); and Experiment 4 (E4), dispersal with K_H = 10^2 m^2 s^-1 and natural temperature variability (orange). (b) Realized temperature niches for a species with an optimum temperature of 19°C under the same three experiments as in panel (a). (c-d) Instantaneous model dynamics for the same species as in panel (b), with an optimal temperature of 19°C, at the time that the growth rate is maximum at 28° N (c) and 42° N (d). Growth maxima occurred during winter at 28° N and during summer at 42° N.
The solid black line is the temperature-dependent specific growth rate (days^-1); the dashed black line is the temperature-dependent mortality rate (m(T) = ae^(bT); days^-1); the solid red line is net transport (mmolP m^-3 days^-1; top red axes); and the solid blue line is biomass in each model box (P; mmolP m^-3; top blue axes). Negative net transport means the location is a source of biomass, while positive net transport means the location is a sink for biomass.
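The balance implied by this caption (growth minus temperature-dependent mortality, plus net transport) can be written as a one-box tendency. This is an illustrative sketch only: the Eppley-curve constants a = 0.59 d^-1 and b = 0.0633 °C^-1 are assumed values for the empirical growth-rate envelope, and kappa stands in for the unitless mortality scaling factor from the text.

```python
import math

A, B = 0.59, 0.0633   # assumed Eppley-type constants (a, b)

def mortality(T, kappa=0.05):
    """m(T) = kappa * a * e^(bT), days^-1 (kappa is the scaling factor)."""
    return kappa * A * math.exp(B * T)

def net_rate(P, mu, T, transport, kappa=0.05):
    """dP/dt for one box (mmolP m^-3 d^-1): growth - mortality + transport."""
    return (mu - mortality(T, kappa)) * P + transport

def role(transport):
    """Sign convention from the caption: negative = source, positive = sink."""
    return "source" if transport < 0 else "sink"
```

For example, a box with growth rate mu = 0.5 d^-1 at 10°C and no transport gains biomass, since mortality there is only about 0.06 d^-1 under low mortality scaling.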
THE STUDY OF CHANGES IN ARDABIL PLAIN GROUNDWATER LEVEL USING GIS Uncontrolled exploitation of groundwater in many parts of the world has led to a sharp drop in groundwater levels. In this study, changes in the Ardabil plain groundwater level were studied using a geographic information system (GIS). For this purpose, interpolation methods were applied to tabular data from piezometric wells. The models were implemented in the Geostatistical Analyst module of the GIS software. The data were entered into the GIS as point observations, and zoning was then performed for the entire area; eight models in the geostatistical extension (IDW, GPI, RBF, LPI, KO, KS, KU, and EBK) were evaluated. The ordinary kriging method (KO), with the lowest RMSE, was determined to be the most accurate and was ultimately selected for zoning and mapping the changes in groundwater level decline across the region. The classification results showed that the largest drop, of about 40 meters, occurred in areas close to the southeastern parts of the study region, while little change was observed elsewhere; this rate of change and decline is very pronounced in some parts of the plain, particularly the southern regions.
INTRODUCTION Statistics provided by world resource organizations illustrate a dilemma in the trend of the annual drop in groundwater. The annual deficit in the volume of global groundwater storage is between 700 and 800 billion cubic meters, of which 1% belongs to Iran [1, 10]. Some countries with dry areas, where groundwater has been used faster than it is replenished and aquifers have been fully discharged, are now facing severe difficulties in supplying water [11, 13]; being aware of the status of water resources, both in quality and quantity, is therefore the most important measure for preventing the destruction of water resources [4]. In recent years, uncontrolled exploitation in the Ardabil plain has caused a significant drop in groundwater resources, and land subsidence in some areas of the plain, especially the southern parts, is one of its consequences [2]. Shafi'i Motlagh (1388) stated that the risk of a water resources crisis, caused by a sharp decline in precipitation, wrong irrigation methods, and uncontrolled exploitation of groundwater resources, draws a troubling future ahead of us. Akbari et al. (1388) conducted research to assess the groundwater level drop of the Mashhad plain; statistics from 70 observation wells during two 10-year periods (76 in 1366 and 87 in 1377) were examined, and the results showed a drop of about 12 m over 20 years, meaning that on average the water level fell by roughly 60 cm every year.

GIS is one of the most practical technologies: in addition to being highly cost-effective, it accelerates work processes such as planning and determining critical activities [14]. The ability of this system in management, planning, and strong statistical analysis has led many people in different fields to use it as a powerful tool in decision making [8]. Ebrahimi et al.
(2009), by evaluating the effect of drought on the wetland water surface of Chaharmahal and Bakhtiari province using GIS and remote sensing technology, concluded that through uncontrolled exploitation of groundwater and the effects of drought, the pond water surface also decreased. Albertson and Henington (1995) reviewed the analysis of groundwater resources using GIS. Makoto et al. (2008) conducted research in an area of evergreen forest in the central Cambodia river basin, measured groundwater level fluctuations, and analyzed the movement of groundwater using boundary conditions and parameters that could be measured in field operations. The results showed that, generally, the groundwater level rises in the rainy season and drops in the dry season [9, 12]. In the present study, motivated by high population density, limited groundwater resources, withdrawal beyond capacity, the increasing acreage of crops, declining groundwater levels, and land subsidence, the decline in groundwater levels in the Ardabil plain aquifer was mapped using interpolation functions and zoning in GIS.

MATERIALS AND METHODS Study region: The Ardabil plain lies in the eastern part of the Azerbaijan plateau and, in terms of the country's administrative divisions, in the center of Ardabil province. The total water resources of Ardabil county are around 285.95 million cubic meters, comprising 120 million cubic meters of surface water and 165.95 million cubic meters of groundwater (recoverable from wells and subterranean channels). Of the mentioned groundwater, 1.85 million cubic meters from 39 series of aqueducts and 126.3 million cubic meters from 2,192 wells, in total 128.15 million cubic meters, are currently used in the city of Ardabil (Ardabil Regional Water Company, 1392). Figure 1 illustrates the position of the Ardabil plain in Iran and relative to Ardabil city.
To conduct this research, statistics from 56 piezometric wells within the water years 1350 to 1393 were used. After sorting the resulting data in the Microsoft Excel environment, a column chart of the studied time period was drawn; the aim of this chart was to obtain an overview of groundwater variation trends. The application used in this research was ArcGIS software, version 10.2. To provide the final map of the drop in groundwater levels, the information associated with the wells had to be extended over the whole area, which is why interpolation models were used to produce the final map. The interpolation model yielding the lowest RMSE is taken to be the most efficient model for representing the piezometric well information across the whole region. Eight models were studied: inverse distance weighting, global polynomial interpolation, radial basis function, local polynomial interpolation, kriging-ordinary, kriging-simple, kriging-universal, and empirical Bayesian kriging. For the implementation of the models in the GIS software, the Geostatistical Analyst module was used.

RESULTS AND DISCUSSION An unconfined aquifer extends throughout the Ardabil plain. Because of the uneven bedrock floor and the existence of recharge areas in the surrounding mountains, the aquifer is not homogeneous and does not have the same water-bearing conditions everywhere. According to Figure 2, it is clear that Piraqum village, near the southeast of the area, with a drop of about 40.38 meters within the years 1961-1993, has the highest exploitation, whereas villages such as Nouran, Jabe Dar, Aghcheh Kandy, Yajlu, Saadi Street, Niyar, and Saeed Abad, in the northern half of the area, with a loss of about 0 m over the years 1961-1993, have the lowest uptake, as shown in Figure 2.
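The model-selection step described above (score each interpolator by cross-validated RMSE and keep the best) can be illustrated with the simplest of the eight candidates, inverse distance weighting. This is a hedged sketch: the study used ArcGIS's Geostatistical Analyst for all eight methods, and the coordinates and levels below are illustrative, not the study's well data.

```python
import math

def idw(x, y, wells, power=2):
    """IDW estimate at (x, y) from wells = [(wx, wy, level), ...]."""
    num = den = 0.0
    for wx, wy, level in wells:
        d2 = (x - wx) ** 2 + (y - wy) ** 2
        if d2 == 0:
            return level          # exact hit on a measured well
        w = 1.0 / d2 ** (power / 2)
        num += w * level
        den += w
    return num / den

def loo_rmse(wells):
    """Leave-one-out cross-validation RMSE for the IDW estimator."""
    errs = []
    for i, (x, y, level) in enumerate(wells):
        rest = wells[:i] + wells[i + 1:]
        errs.append((idw(x, y, rest) - level) ** 2)
    return math.sqrt(sum(errs) / len(errs))
```

Computing `loo_rmse` for each of the eight methods and picking the minimum is the same selection criterion the study applied when ordinary kriging came out best.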
INTERPOLATION DATA Figures 3 to 10 illustrate the information relevant to the prediction of unknown regions and the associated error in determining these areas for the eight models used.

Measurement of the validation results showed that the ordinary kriging method, with the lowest RMSE (Table 1), was the most accurate, and it was therefore selected as the final method for preparing the map of changes in groundwater level decline in the region.

The zoning map of changes in groundwater level drop across the entire plain was developed using ArcGIS software (Figure 11).

The zoning maps and the distribution of wells show that the highest values of change and groundwater level reduction are in the southeast, in the villages of Khalil Abad, Merny, Piraqum, and Big Arallo.

Based on the circumstances and characteristics of the Ardabil plain, a free (unconfined) aquifer extends throughout it; this aquifer broadens in the middle of the fertile lands, and the location of the studied wells in fertile agricultural regions, together with uncontrolled exploitation of the aquifer in recent years, has caused a sharp fall in the water level of these areas. Akbari et al. [1] also studied the groundwater level drop in the Mashhad plain and concluded that groundwater levels in the central and western parts of that aquifer had been reduced by up to 30 meters; the concentration of extraction wells with high flow rates in the region has caused this phenomenon.
CONCLUSIONS In the past, irregular and incorrect exploitation of groundwater caused a drop in groundwater levels and, in some cases, caused aquifers to dry up in some parts of the country. Population growth, technological progress, and other factors have caused excessive water withdrawal for different uses such as drinking, industry, and agriculture in different regions. In this context, excessive water withdrawal for agriculture is the main factor that can cause a significant decrease in the groundwater table. In the Ardabil plain, excessive extraction in recent years has depleted the reservoirs by about 500 million cubic meters of water. This rate of change and reduction is very pronounced in some areas of the plain, such as the southern regions. The well information was entered into the geographic information system as point observations, and zoning was then performed for the whole area. For zoning, eight models, IDW (inverse distance weighting), GPI (global polynomial interpolation), RBF (radial basis function), LPI (local polynomial interpolation), KO (kriging-ordinary), KS (kriging-simple), KU (kriging-universal), and EBK (empirical Bayesian kriging), in the geostatistical extension were studied; the ordinary kriging model, with the lowest RMSE, was finally accepted, and the unknown regions of the entire area were zoned on its basis. The results showed that the largest drop, of about 40 m, was in the areas close to the southeast of the study area, while little change was observed elsewhere. Accordingly, we can say that population density and heavy exploitation of groundwater for agriculture, industry, and drinking water caused the sharp drop in these regions.

Fig. 1. The location of the study region in Ardabil province and Iran. Fig. 2. The rate of groundwater decline in the wells of Ardabil plain villages. Table 1. The RMSE value and average for each interpolation method.
Temporal Variability of Escherichia coli Diversity in the Gastrointestinal Tracts of Tanzanian Children with and without Exposure to Antibiotics This study increases the number of resident Escherichia coli genome sequences, and explores E. coli diversity through longitudinal sampling. We investigate the genomes of E. coli isolated from human gastrointestinal tracts as part of an antibiotic treatment program among rural Tanzanian children. Phylogenomics demonstrates that resident E. coli are diverse, even within a single host. Though the E. coli isolates of the gastrointestinal community tend to be phylogenomically similar at a given time, they differed across the interrogated time points, demonstrating the variability of the members of the E. coli community in these subjects. Exposure to antibiotic treatment did not have an apparent impact on the E. coli community or on the presence of resistance and virulence genes within E. coli genomes. The findings of this study highlight the variable nature of specific bacterial members of the human gastrointestinal tract. KEYWORDS Escherichia coli, diversity, microbial genomics Escherichia coli in the human gastrointestinal tract is often recognized as an important source of disease (1, 2). As the causative agent of over 2 million deaths annually due to diarrhea (3, 4), as well as millions of extraintestinal infections (5), its categorization as a pathogen is not unwarranted. Particularly in developing countries, the consequences of diarrheal E. coli are substantial among children under 5 years old, who incur the majority of infections and deaths (3) and whose rapidly developing microbiomes can be impacted by frequent bouts of disease and subsequent treatment (6, 7). Yet, E.
coli is a dominant organism in the human gastrointestinal tract, identified in greater than 90% of humans and many other large mammals, often reaching concentrations of up to 10^9 CFU per gram of feces (8) without causing disease. In this role as a resident organism in healthy hosts, it is thought to have critical roles in digestion, nutrition, metabolism, and protection against incoming enteric pathogens (9-12). Despite the importance and involvement of E. coli in human health, studies of its role as a native, nonpathogenic member of the human gastrointestinal microbiome are poorly represented among genome sequencing, comparative analysis, and functional characterization efforts. Investigations into E. coli strain diversity and persistence in the human gastrointestinal tract are nothing new. In fact, studies going back to 1899 (13) have reported on fecal E. coli diversity and persistence. Additional studies have continued to probe this question with the advent of new microbiological technologies, beginning with antigenic techniques (13, 14), electrophoresis (15, 16), and PCR (17), to name a few. Today, thanks to ready access to whole-genome sequencing, we have an unprecedented opportunity to explore E. coli diversity and persistence at the genomic level. Most studies of bacterial genomics have focused on pathogenic isolates over a limited time frame. E. coli genomic studies are no exception, having concentrated on sequencing single isolates, from single time points, in samples related to a clinical presentation such as diarrhea or urinary tract infection (10, 18-22). There have been fewer than five closed genomes sequenced of nonpathogenic E. coli, in addition to a limited number of draft genomes from isolates obtained from the feces of individuals who do not have diarrhea (10, 22-25). To date, the genomic examination of longitudinal isolates is lacking, thus hindering the ability to explore the diversity of E.
coli isolates both within a host and across time. With the exception of Stoesser et al. (23), which identified multiple isolates in single-host samples using single nucleotide polymorphism (SNP)-level analyses, most studies of resident E. coli were completed prior to ready access to sequencing technologies (11), leaving much to be learned about E. coli genomic diversity within and between human hosts over longitudinal sampling. A population-based longitudinal cohort study, PRET+ (Partnership for the Rapid Elimination of Trachoma, January to July 2009), provided a unique opportunity to examine both the diversity and the dynamics of the E. coli isolates in the human gastrointestinal tract among children in rural Tanzania (26, 27). In the PRET+ study, Seidman et al. investigated the effects of mass distribution of azithromycin on antibiotic resistance of resident E. coli (26, 27). E. coli bacteria were isolated from fecal swabs obtained from 30 children aged 2 to 35 months old living in rural Tanzania, half (15 children) of whom were given a single oral prophylactic azithromycin treatment for trachoma (an infection of the eye caused by Chlamydia trachomatis). E. coli isolates from this cohort were selected for genome sequencing and comparative analyses to investigate the within-subject and longitudinal diversity of E. coli isolates in children (see Table S1 in the supplemental material). Up to three isolates per individual, from each of three time points spanning six months, were collected in the PRET+ study, providing up to nine potential isolates from each subject for examination (Fig. 1). Samples from the current study provide insight into E. coli diversity within a subject over several time points. While other studies have examined resident E.
coli in children in developing countries, they limited their focus to using PCR and in vitro lab techniques to identify a limited set of canonical virulence genes and determine resistance profiles of the isolated strains (28-30). In addition to the virulence- and resistance-associated gene content, the current study demonstrates previously uncharacterized diversity among E. coli isolates from the human gastrointestinal tract at a whole-genome level, within and across sampling periods. This work represents the most comprehensive longitudinal genomic study of resident E. coli within the human gastrointestinal tract and expands knowledge of the nonpathogenic gut flora by increasing the available genome sequences of resident E. coli and highlighting the dynamic nature of the E. coli community.

Subject clinical state and E. coli pathotype identification. There were 17 instances in which subjects had active diarrhea at the time of sample collection (12 instances occurred at the baseline time point), yielding 46 isolates from diarrheal conditions (26, 27), 23 each from the antibiotic treatment and control groups. All cases of diarrhea were identified in children under the age of 2. Only 10 of these isolates (21.7%) contained canonical virulence factors belonging to the EPEC (3 isolates), ETEC (6 isolates), or EAEC (1 isolate) pathotypes (Fig. 2), as determined by sequence homology searches of canonical virulence genes in the assembled genomes. In most cases, observed diarrhea could not be associated with a prototypically virulent E. coli strain in this data set. Other sources of diarrhea were not investigated. An additional 61 isolates from 19 individuals contained canonical E. coli virulence factors but were not obtained from samples taken during an active diarrheal event. These data indicate that the presence of a potentially virulent E. coli strain does not necessarily result in clinical presentation of diarrhea.
Overall, in our data set, the association between diarrheal cases and the incidence of isolates containing canonical E. coli virulence factors was rare.

Phylogenomic analysis. Phylogenomic analysis of the isolates identified a diverse population of E. coli within the gastrointestinal community of these children. A phylogenetic tree of the 240 isolates from this study plus 33 reference E. coli and Shigella genomes (Table S2) was used to assess the genomic similarity of the isolates from a single subject both within and across time points, as well as between subjects over the study period (Fig. 3). The SNP-based phylogenomic analysis of the draft and reference genomes identified 304,497 polymorphic single nucleotide genomic sites. The isolates from the current study fell into the established E. coli phylogroups: A (132 isolates), B1 (62 isolates), B2 (24 isolates), D (17 isolates), and E (2 isolates) (Fig. 3 and Table S1). Additionally, three isolate genomes (isolates 1_176_05_S3_C2, 2_011_08_S1_C1, and 2_156_04_S3_C2) fell into cryptic clades located outside the established E. coli phylogroups. The distributions of the E. coli isolates across these phylogroups were not associated with any of the clinical parameters associated with the isolates. To further investigate the E. coli diversity of an individual subject at a given time, we analyzed the phylogenetic groupings of isolates from each subject at each time point. Most isolates from an individual at a single time point group together within a single phylogenomic lineage, where a lineage is defined as a terminal grouping of isolates (54.4%; 49 of the 90 same-subject time points). One-third (35.5%; 32/90 of the same-subject time points) fell into two distinct lineages, and in 10% (9/90 time points) each isolate belonged to a distinct lineage (Table 1).
Overall, these data suggest that while there is considerable diversity among the isolates from many of the subjects, in over half of them the population of E. coli at a given time point displays limited phylogenomic variation. The relatedness of co-occurring isolates was further confirmed by comparing the total gene content of the genomes from each subject. Those genomes found in the same phylogenetic clade had fewer divergent genes when the genomes were compared (average of 147.9 ± 120.1) than those found in different clades (average of 2,629.1 ± 339.4) (Table S3), further confirming the relatedness of the isolates within each clade. These E. coli populations were variable over time, demonstrating increased E. coli diversity in each subject when observed over the multiple time points. Same-subject isolates from different time points resided in distinct phylogenomic lineages in 93.3% (28/30) of subjects, whereas more than half of the isolates from any individual at a single time point grouped together in a single lineage. Only two subjects had isolates from multiple time points that occupied the same lineage (subjects 4_203_08 and 8_415_05) (illustrated in Fig. 3 and detailed in Table S4; further details are provided in Table S3). In contrast, all isolates from subject 3_475_03 were phylogenomically distinct (Fig. 3). These examples of the phylogenomic distributions of isolates represent the extremes of conservation or diversity observed in this study. Additional sampling would most likely reveal that the isolates within these individuals are not as conserved or as diverse as this initial sampling would suggest, but they do represent the possible distributions of the isolates within a subject over time.

Multilocus sequence typing and molecular serotyping. The genomes in this study comprise a combined total of 87 sequence types (STs) (Table S1). The most common ST was ST10, which was represented by 40 of the E.
coli genomes, while 40 additional STs occurred only once (Table S1). Only five isolates were from ST131, which has been demonstrated to be associated with the spread of antimicrobial resistance (31). There were, on average, 1.5 (range 1 to 3) STs among isolates from a subject at a single time point, and an average of 4.4 (range 2 to 7) STs per subject across all time points. Since the total number of available isolates per subject varied, the values were normalized by the number of isolates, revealing an average of 2 (range 1 to 4) isolates per sequence type and mirroring the diversity observed in the phylogenomic analyses (Fig. 4 and Table S4). Similar to MLST, serotype analyses (32) reflect the diversity observed in the phylogenomic analysis (Table S4). The 240 isolates represent a combined total of 106 O:H serotypes, with 54 of them occurring only once in the data set, making serotype a finer-scale measure of diversity than MLST. There is an average of 1.63 (range 1 to 3) different serotypes among isolates from the same time point and 4.7 (range 2 to 7) serotypes in a subject across all time points. The O antigen, the H antigen, or both could not be predicted in 33 isolates (Table S1). In silico analyses were unable to distinguish between some serotypes in an additional 58 isolates (Table S1). This left 149 isolates that could be unambiguously assigned a single serotype (Table S1). Nearly all isolates that shared a serotype also shared an MLST sequence type and phylogroup (Table S1). There are five examples (excluding those isolates in which the serotype could not be unambiguously differentiated) where MLST, serotype, and phylogroup were not congruent (Table S5), suggesting that molecular variation and strain differentiation could not be detected by a single method alone. The combination of these detailed molecular methods could add nuance to diversity measurements in closely related strains.

Genome content determined using LS-BSR.
Variations in genome content further demonstrated the diversity of the E. coli isolate genomes both within and between time points. Using the LS-BSR analysis (33) and an Ergatis-based annotation pipeline, a gene content profile was determined that identified 32,950 genes in the pangenome of the 240 isolate genomes. More than 3,000 genes in any single genome vary between genomes, leaving only approximately 2,000 genes in the conserved core, as has been previously identified (10,22). This level of variation holds even among the isolates from subject 8_415_05, in which the isolates from the 3-month and 6-month time points group together phylogenomically and are of the same MLST sequence type. In this case, each isolate contains an average of 220 (range 95 to 259) variable genes. Given the level of diversity suggested by the variability of the gene content, more detailed SNP analyses, as previously performed by Stoesser et al. (23), were deemed unnecessary. Antibiotic resistance-associated gene profiles. The antibiotic treatment of half of the children in this study provided a unique opportunity to investigate the impact of antibiotic treatment on the prevalence and maintenance of antibiotic resistance genes in the E. coli community at 3 and 6 months after administration. Antibiotic resistance genes were investigated in the isolate genomes using 1,371 genes from the Comprehensive Antibiotic Resistance Database (CARD) (34). The resistance gene profiles (assortment of present/absent genes) for each isolate were used to create a cladogram to investigate the relationships among isolates by time and by subject (Fig. S2). These relationships were then compared to those in the phylogenomic groupings as well as in the cladogram of virulence gene profiles (Table S6 and Fig. S3).
Similar clustering patterns were identified between the resistance gene-based analysis and either the whole-genome phylogeny or virulence gene presence 74% of the time at each time point, and 37% (phylogeny) or 27% (virulence) of the time for each subject as a whole (Table 1). There was no significant change in the number or type of resistance-associated genes over time, regardless of antibiotic treatment or isolation time point. As subjects were treated with azithromycin, a macrolide, genes conferring resistance to macrolides were investigated in greater detail (Table S7). Macrolide resistance genes were identified in only 19% (46 of 240) of isolates (Table 2), and based on a logistic regression model, there is no evidence to suggest that either time point or antibiotic treatment was significantly associated with macrolide resistance genes (P > 0.05 for antibiotic treatment adjusted for time point, for time point adjusted for antibiotic treatment, and for overall antibiotic treatment). Isolates from nearly half of the subjects had no known macrolide resistance genes (46.67% antibiotic treatment, 40% control). Based on these results, exposure to a single large dose of azithromycin did not lead to a significant change in the number of known antimicrobial resistance genes or macrolide resistance genes among these E. coli populations. DISCUSSION This study represents a detailed examination of the genomic diversity of Escherichia coli isolates obtained from longitudinal samples from the gastrointestinal tract of children in rural Tanzania. An overall trend identified in this study is that the E. coli isolates from the gastrointestinal tract are diverse not just between these subjects, but within the same subject over time. The E. coli genomes sequenced in this study were selected based on the greatest number of longitudinal isolates per subject and include members of all five of the traditional E.
coli phylogroups, as well as 87 different MLST sequence types and 106 serotypes. The isolates in this study were most frequently of the A or B1 phylogroups, unlike a previous study by Gordon et al. (17) in which greater than 70% of the isolates obtained were from either phylogroup B2 or D. Other studies, featuring isolates from Europe and South America, have similarly identified phylogroup A as a dominant phylogroup in the human gastrointestinal tract (35,36). This observed difference may be due to differences in sample acquisition (stool swab versus biopsy), differences in the study participants, or geography. The Gordon et al. (17) study obtained samples from adults, the majority (72.5%, 50/69) of whom were diagnosed with either Crohn's disease or ulcerative colitis, which would also likely impact the immune status of the gastrointestinal tract, and potentially alter the bacterial community structure. In contrast, our study participants were children under the age of 5 and, other than a few who displayed diarrhea of an unknown source, were considered to be relatively healthy. This study, by using a combination of molecular methods, including whole-genome sequencing, enhances the understanding that E. coli in the human gastrointestinal tract is variable and diverse in the studied population. Previous studies of the variability of E. coli, using non-genome sequencing methods, have also identified multiple isolates within a single host, reporting up to an average of 4 E. coli genotypes in adult human gastrointestinal studies (17,23). The findings in this study are similar in that a number of E. coli isolates were identified that are genomically and molecularly different in the subjects at each time point and between time points. This study examines the relatedness of E. coli isolates in an individual over time using two independent methods, phylogenomics of the genome core and whole-genome content. We find that approximately half of E.
coli isolates in an individual appear phylogenomically and phenotypically similar at any given time point; however, between time points, the prevalent E. coli clones from individual subjects were variable. While it is possible, and likely, that in the current study less prevalent E. coli isolates were not captured at some of the sampling time points, we assume that the relative isolate abundance in culture reflects the relative abundance in the feces at the time of sampling. The current study likely still underestimates the E. coli diversity in the examined subjects, given the relatively small number of isolates collected per time point. Dynamic populations within the human gastrointestinal tract have been previously suggested as an explanation for observations of variable clones in E. coli diversity studies (35), but the necessary longitudinal genomic studies were lacking. This study begins to address that deficiency, with the potential caveats outlined below. The observed within-patient and longitudinal diversity of E. coli isolates could be a function of age, as all of the subjects in this study were less than 3 years of age, and thus, the diversity could be a result of natural introduction of new exposures, such as foods, as well as immune system and microbiome development (37,38). (Table 2, footnote a: The proportion of isolates in which a macrolide resistance gene was identified is shown for each time point. Subjects are separated into treatment groups and categorized based on the time points in which macrolide resistance genes were identified. Percentages reflect the proportion of subjects who fall into each macrolide resistance gene category within treatment groups.) It has been demonstrated that intrahost E. coli diversity is greatest in tropical regions where hygiene may play a role and that E. coli density in the gastrointestinal tract is altered most significantly in the first 2 years of a child's life (11,39). Therefore, it is unclear how well these results correlate with E.
coli diversity in adults or in other geographic regions, but they provide a starting point for the comparisons of studies in diverse subject populations and geographic locations. It is thought that the infant microbiome is not established until about 3 years of age (40); however, the detailed longitudinal infant microbiome studies are currently lacking. Furthermore, changes in health status may have impacted the strain variability, as some subjects displayed symptoms of diarrhea during sampling, with the possibility of other unreported occurrences between samples, leading to additional fluctuations in the E. coli community, as well as the potential emergence of otherwise rare, resident strains. Future longitudinal studies that include sampling subjects from multiple age groups will be necessary to fully appreciate levels of bacterial population diversity and dynamics present across host populations of all age groups. Virulence and resistance-associated gene analyses in this study confirm that genomic analyses of single isolates are imperfect predictors of clinical phenotypes, as several isolates harbored canonical E. coli virulence genes, classically identifying them as enteric pathogens, but were present in subjects not displaying clinical symptoms. The converse is also possible, in that E. coli strains may not contain traditional virulence factors, but be obtained from a diarrheal sample, as has been highlighted in the recent GEMS studies (41,42). While diarrheagenic E. coli is often the dominant strain when causing diarrhea (43), the fact that these pathogenic strains may have been missed due to undersampling in the diarrhea samples cannot be discounted. 
There are many potential explanations for these observations, which include the following: (i) the subjects have been previously exposed to these bacteria and thus have an established immunity; (ii) the organisms are not pathogenic in the context of other host factors, including the host microbiota; (iii) additional necessary virulence factors are absent in these isolates; or (iv) the virulence factors are present but not expressed by the bacterium. Unfortunately, detailed immunological, microbiota, or transcriptional data are not available on the current samples, so the impacts of these factors on pathogenicity cannot be determined conclusively. Whole-genome analyses have led to increasing recognition that virulence genes and phylogeny are associated attributes in microbial pathogen genomes and suggest that there may be an optimal combination of chromosomal and virulence-associated features that results in maximal virulence, survival, or transmission (44)(45)(46)(47). This may also be true of the success of a commensal isolate in the community in these subjects (48). In contrast to Seidman et al. (26), from which the samples were originally obtained, our genome analyses did not demonstrate an increase in the presence of macrolide resistance genes among isolates from children treated with azithromycin. This observation may be due to the selection of isolates for this genomic study. Subject sample sets with the greatest number of longitudinal isolates were chosen for sequencing. Additionally, genome sequencing did not include any samples from the first month after azithromycin treatment, which Seidman et al. found to demonstrate the greatest increase in phenotypic macrolide resistance (26). The examination of the 23S rRNA gene for SNPs associated with macrolide resistance is not possible due to the incomplete nature of the genomes and the genetic redundancy of the multiple copies of this gene cluster (49).
This study, once again, highlights the discrepancies between genotypic and phenotypic assessment of resistance and other traits. This study adds significantly to the number of available E. coli genomes that were not selected based on pathogenic traits, a group that has been traditionally underrepresented in the sequencing of this species. The scientific community is still in the early stages of understanding gastrointestinal tract microbial ecology and the role that the resident bacteria, including E. coli, play in microbiome stability and function. The current study demonstrates that at the genomic level, the community of E. coli in the gastrointestinal tract of this population of children is diverse and variable over time. Further studies on human populations from different geographic areas, as well as other age groups, are required to determine whether E. coli communities stabilize as a person approaches adulthood, or whether the community diversity of E. coli regularly changes depending on the development of the immune system, as well as many other exposures within the gastrointestinal tract. MATERIALS AND METHODS Isolate selection. E. coli isolates in this study were selected from isolates collected in Seidman et al. (26). The PRET+ study was a 6-month study designed to assess the ancillary effects on pneumonia, diarrhea, and malaria in children following mass distribution of azithromycin for trachoma control. The study was conducted in 8 communities in Kongwa, a district located in rural central Tanzania on a semiarid highland plateau with poor access to drinking water. The district has a total population of approximately 248,656, comprising mostly herders and subsistence farmers. The Tanzanian government stipulates that villages with trachoma prevalence ≥10% receive annual mass distribution of azithromycin.
On survey, 4 villages found eligible for antibiotic treatment became the PRET+ treatment villages, and 4 neighboring ineligible communities were included as controls. The study methods and results detailing the impact of antibiotic treatment on pneumonia and diarrhea morbidity and antibiotic-resistant Streptococcus pneumoniae carriage were published previously (50)(51)(52). The selected E. coli isolates were chosen to represent individuals with the most complete longitudinal sample sets from the PRET+ E. coli substudy. Isolates were obtained from 30 individuals between 2 and 35 months of age, living in 8 villages in the same rural area of Tanzania. Half of these individuals received antibiotic treatment, while the other half (control) received no antibiotic treatment. These isolates were cultured from fecal samples collected at three time points (Fig. 1 and Table S1): a baseline prior to antibiotic treatment, three months posttreatment, and six months posttreatment, with corresponding time points in the untreated controls. A single treatment of 20 mg/kg of body weight of azithromycin was given 2 days after the baseline sample was collected. At each time point, up to three E. coli colonies per individual were selected for sequencing and subsequent comparative analyses. Isolates were labeled with a three-number subject ID (i.e., 1_110_08), the sample (time point) from which the isolate was obtained (i.e., S1), and the number of the colony isolated from the sample (i.e., C1). Bacterial growth and isolation. E. coli colonies were obtained as described in Seidman et al. (26,27). Briefly, fecal swabs were streaked on MacConkey agar (Difco) and grown overnight at 37°C. Three lactose fermentation (LF)-positive colonies were inoculated on nutrient agar stabs and grown overnight at 37°C. E. coli isolates were identified as those colonies that were LF-positive, indole-positive (DMACA Indole Reagent droppers, BD), and citrate-negative (Simmons citrate agar slants).
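The isolate labeling scheme described above (three-number subject ID, sample/time-point token, colony token) is easy to work with programmatically. The helper below is an illustrative sketch, not code from the study's pipeline.

```python
def parse_isolate_label(label):
    """Split an isolate label like '1_110_08_S1_C1' into its parts.

    Follows the scheme in the text: a three-number subject ID, a sample
    (time point) token 'S<n>', and a colony token 'C<n>'.
    """
    parts = label.split("_")
    if len(parts) != 5 or not parts[3].startswith("S") or not parts[4].startswith("C"):
        raise ValueError(f"unexpected label format: {label!r}")
    subject = "_".join(parts[:3])   # e.g. '1_110_08'
    time_point = parts[3]           # e.g. 'S1'
    colony = parts[4]               # e.g. 'C1'
    return subject, time_point, colony

# One of the isolate IDs mentioned in the phylogenomic analysis:
subject, tp, colony = parse_isolate_label("1_176_05_S3_C2")
```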
Isolates were transferred to Luria broth for overnight growth at 37°C with shaking. E. coli cultures were frozen with 10% glycerol and stored at −80°C. Genome sequencing and assembly. Genomic DNA was extracted using standard methods (21) and sequenced on the Illumina HiSeq 2000 platform at the Genome Resource Center at the University of Maryland School of Medicine, Institute for Genome Sciences (http://www.igs.umaryland.edu/resources/grc/). The resulting 100-bp reads were assembled as previously described (44,46) using the Maryland Super-Read Celera Assembler (MaSuRCa version 2.3.2) (53). Contigs of fewer than 200 bp were excluded from assemblies. Assembly quality was assessed based on the number of contigs (less than 500) and on genome size and G+C content compared to known E. coli genomes. Two genomes had G+C content divergent from that of E. coli (55.61%) and were excluded from further analysis. The assembly details and corresponding GenBank accession numbers are provided in Table S1. Identification of predicted pathogen isolates. Isolate genomes were interrogated for the presence of pathotype-specific virulence factor genes using LS-BSR and a typing schema derived from that used in the MAL-ED studies (54). The nucleotide sequence for each factor or resistance gene was aligned against all sequenced genomes with BLASTN (55) in conjunction with LS-BSR (33). Genes with a BSR value ≥0.80 were considered highly conserved and present in the isolate examined. The targeted virulence factors are as follows: ETEC heat-stable enterotoxin (estA147) or ETEC heat-labile enterotoxin (eltb508), identifying the isolate as enterotoxigenic E. coli (ETEC); the aggR-activated island C (aic215) or EAEC ABC transporter A (aata650) genes, which are common diagnostic markers for enteroaggregative E. coli (EAEC) (56,57); and the major subunit of the bundle-forming pilus (bfpA) (bfpa300) or intimin genes (eae881), which are indicative of enteropathogenic E.
coli (EPEC) (44). Phylogenomic analysis. A total of 273 genomes were used in the phylogenomic analyses: the 240 assembled in this study, in addition to a collection of 33 E. coli and Shigella reference genomes from GenBank (Table S2). Single nucleotide polymorphisms (SNPs) in all genomes were detected relative to the completed genome sequence of commensal isolate E. coli HS (phylogroup A) using the In Silico Genotyper (ISG) v.0.12.2 (58), which uses MUMmer v.3.22 (59) for SNP detection. Analysis with ISG yielded 701,011 total SNP sites that were filtered to a subset of 304,497 SNP sites present in all of the genomes analyzed. These SNP sites were concatenated and used for phylogenetic analysis as previously described (60). A maximum-likelihood phylogeny with 1,000 bootstrap replicates was generated using RAxML v.7.2.8 (61) and visualized using FigTree v.1.4.2 (http://tree.bio.ed.ac.uk/software/figtree/) and Interactive Tree of Life (62). Phylogenomic lineages were assigned based on visual determination of groupings. Three genome outliers (1_176_05_S3_C2, 2_011_08_S1_C1, and 2_156_04_S3_C2) were removed from the tree figures for visualization purposes. Serotype identification. In silico serotype identification was performed on the assembled genomes using the online SerotypeFinder 1.1 (https://cge.cbs.dtu.dk/services/SerotypeFinder/) and an LS-BSR analysis using the serotype sequences compiled for the SRST2 program (https://github.com/katholt/srst2/tree/master/data) (20,32). Multilocus sequence typing (MLST). In silico MLST was performed on the assembled genomes using the Achtman E. coli MLST scheme (63). Gene sequences were identified in the isolate genomes using BLASTn, and MLST profiles were determined by querying the PubMLST database (http://pubmlst.org). Variations in gene distributions. The gene content across all genomes was identified and compared using the large-scale BLAST score ratio (LS-BSR) with default settings, as previously described (33).
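The BSR ≥0.80 presence call used throughout these analyses can be illustrated with a minimal sketch: the BLAST score ratio is the bit score of a gene aligned against a target genome, normalized by the gene's self-alignment bit score. This is a simplified stand-in for illustration only, not the actual LS-BSR implementation (which wraps the BLAST tools itself), and the scores below are made up.

```python
def blast_score_ratio(query_vs_genome_bits, query_self_bits):
    """BLAST score ratio: bit score against a genome divided by the
    query's self-alignment bit score (so an exact match gives ~1.0)."""
    return query_vs_genome_bits / query_self_bits

def call_presence(bit_scores, self_scores, threshold=0.80):
    """Presence/absence calls per gene for one genome.

    `bit_scores` maps gene -> best bit score against the genome (0.0 if
    no hit); `self_scores` maps gene -> self-alignment bit score. Genes
    with BSR >= threshold are called present, as in the text.
    """
    return {g: blast_score_ratio(bit_scores.get(g, 0.0), self_scores[g]) >= threshold
            for g in self_scores}

# Hypothetical bit scores for two of the targeted virulence genes:
calls = call_presence({"eae881": 1450.0, "estA147": 90.0},
                      {"eae881": 1500.0, "estA147": 160.0})
```

Here eae881 has BSR 1450/1500 ≈ 0.97 (called present) while estA147 has BSR 90/160 ≈ 0.56 (called absent).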
Genes with a BSR value ≥0.80 are considered to be highly conserved and present in the isolate examined at this level of homology. Those genes that are conserved in all genomes were removed from further analyses. The predicted protein function of each gene cluster was determined using an Ergatis-based (64) in-house annotation pipeline (65). Pairwise gene content comparisons were performed for all of the isolates for each subject to determine the number of genes that differed between the isolates. The numbers of differing genes were used to calculate the average number (and standard deviation) of genes that differed between isolates from the same phylogenomic clade and those from differing phylogenomic clades for each subject. Virulence factor and antibiotic resistance gene identification. The list of compiled common E. coli virulence factor genes was used for interrogation of the study genomes (Table S2). Antibiotic resistance genes were compiled from the Comprehensive Antibiotic Resistance Database (CARD; http://arpcard.mcmaster.ca, downloaded 24 June 2015) (34). The nucleotide sequence for each factor or resistance gene was aligned against all sequenced genomes with BLASTN (55) in conjunction with LS-BSR (33). Genes with a BSR value ≥0.80 were considered highly conserved and present in the isolate examined. Statistical analysis of macrolide resistance gene distributions. A logistic regression on the probability of a macrolide resistance gene being present in an E. coli isolate was run against 2 covariates: time point (excluding the baseline) and antibiotic treatment. For each individual, the two to three isolates were considered replicates for that time point, and the time points were far enough apart to be considered independent. Therefore, gene presence was collapsed to presence in at least one of the replicates at a given subject and time point. Each subject-by-time combination was considered an independent observation.
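The pairwise gene content comparison described above (counting differing genes between every pair of isolates, then averaging within-clade versus between-clade) can be sketched as a symmetric difference over presence/absence sets. The input format (dicts of gene sets, a clade lookup) is illustrative, not the study's actual data structures.

```python
from itertools import combinations
from statistics import mean

def pairwise_gene_differences(gene_sets):
    """Number of differing genes (symmetric difference) for every pair
    of isolates; `gene_sets` maps isolate ID -> set of genes present."""
    return {(a, b): len(gene_sets[a] ^ gene_sets[b])
            for a, b in combinations(sorted(gene_sets), 2)}

def clade_averages(gene_sets, clade_of):
    """Average differing-gene counts for within-clade vs between-clade
    pairs, mirroring the comparison described in the text."""
    diffs = pairwise_gene_differences(gene_sets)
    within = [d for (a, b), d in diffs.items() if clade_of[a] == clade_of[b]]
    between = [d for (a, b), d in diffs.items() if clade_of[a] != clade_of[b]]
    return (mean(within) if within else None,
            mean(between) if between else None)

# Toy example: two isolates in clade I, one in clade II.
genes = {"C1": {"g1", "g2", "g3"}, "C2": {"g1", "g2", "g4"}, "C3": {"g5", "g6"}}
clades = {"C1": "I", "C2": "I", "C3": "II"}
within_avg, between_avg = clade_averages(genes, clades)
```

In the toy data the within-clade pair differs by 2 genes while each between-clade pair differs by 5, echoing the much larger between-clade averages reported above.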
Genes in this analysis with P values ≤0.05 were considered significant. If the covariate was dichotomous, then the Wald chi-square test statistic was used to determine significance.
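The replicate-collapsing step feeding the regression (presence in at least one of the two to three colonies per subject and time point, each subject-by-time combination one observation) can be sketched as follows. The record field names are hypothetical; the fitted regression itself would then be run on the collapsed table with a statistics package.

```python
def collapse_replicates(records):
    """Collapse colony-level gene hits to subject-by-time observations.

    `records` is a list of dicts with keys 'subject', 'time_point',
    'treated', and 'has_macrolide_gene' (one dict per sequenced colony).
    Gene presence is collapsed to presence in at least one replicate, as
    described in the text.
    """
    obs = {}
    for r in records:
        key = (r["subject"], r["time_point"])
        prev = obs.get(key, (r["treated"], False))
        obs[key] = (prev[0], prev[1] or r["has_macrolide_gene"])
    # One row per subject-by-time combination: outcome + 2 covariates.
    return [{"subject": s, "time_point": t, "treated": tr, "present": p}
            for (s, t), (tr, p) in sorted(obs.items())]

rows = collapse_replicates([
    {"subject": "1_110_08", "time_point": "S2", "treated": True, "has_macrolide_gene": False},
    {"subject": "1_110_08", "time_point": "S2", "treated": True, "has_macrolide_gene": True},
    {"subject": "1_110_08", "time_point": "S3", "treated": True, "has_macrolide_gene": False},
])
```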
Projectional skeletons and Markushevich bases

We prove that Banach spaces with a $1$-projectional skeleton form a $\mathcal{P}$-class and deduce that any such space admits a strong Markushevich basis. We provide several equivalent characterizations of spaces with a projectional skeleton and of spaces having a commutative one. We further analyze known examples of spaces with a non-commutative projectional skeleton and compare their behavior with the commutative case. Finally, we collect several open problems. Introduction Projectional resolutions of the identity (shortly PRI), introduced and used for the first time by J. Lindenstrauss [33], are an important tool for the investigation of nonseparable Banach spaces. The main application consists in extending some results from separable spaces to certain classes of non-separable ones using transfinite induction, see, e.g., [15, Section 6.2]. Let us recall the definition of a PRI. Let $X$ be a non-separable Banach space and let $\kappa = \operatorname{dens} X$. A PRI is a transfinite sequence of projections $(P_\alpha)_{\alpha \le \kappa}$ satisfying the following properties. (i) $P_0 = 0$, $P_\kappa = I$; (ii) $\|P_\alpha\| = 1$ for $0 < \alpha \le \kappa$; (iii) $\operatorname{dens} P_\alpha X \le \max\{\aleph_0, \operatorname{card}\alpha\}$ for $\alpha \le \kappa$; (iv) $P_\alpha P_\beta = P_\beta P_\alpha = P_\alpha$ for $\alpha \le \beta \le \kappa$; (v) $P_\lambda X = \overline{\bigcup_{\alpha < \lambda} P_\alpha X}$ for $\lambda \le \kappa$ limit. So, a PRI provides a decomposition of the space $X$ into certain subspaces of a smaller density. In order to prove a property of $X$ using a transfinite induction argument, we need to know that the property is satisfied by the smaller subspaces. This inspires the following definitions of a $\mathcal{P}$-class and of a $\mathcal{P}$-class of Banach spaces. Let $\mathcal{C}$ be a class of Banach spaces. • [19, Definition 3.45 on p. 107] We say that $\mathcal{C}$ is a $\mathcal{P}$-class if for any nonseparable space $X \in \mathcal{C}$ there is a PRI $(P_\alpha)_{\alpha \le \kappa}$ on $X$ such that $(P_{\alpha+1} - P_\alpha)X \in \mathcal{C}$ for each $\alpha < \kappa$. • [18, p. 417] We say that $\mathcal{C}$ is a $\mathcal{P}$-class if for any nonseparable space $X \in \mathcal{C}$ there is a PRI $(P_\alpha)_{\alpha \le \kappa}$ on $X$ such that $P_\alpha X \in \mathcal{C}$ for each $\alpha < \kappa$.
Certain classes of Banach spaces are easily seen to be both $\mathcal{P}$-classes and $\mathcal{P}$-classes as soon as we know they admit a PRI. For example, any weakly compactly generated Banach space admits a PRI by [1]. Since this class is stable under taking complemented subspaces, it is clearly both a $\mathcal{P}$-class and a $\mathcal{P}$-class. The situation becomes more complicated if we look at the larger classes of 1-Plichko Banach spaces or of spaces admitting a 1-projectional skeleton. The class of 1-Plichko spaces was investigated already in [47], later in [35] under the name class V; the current name was given in [24]. This class contains many Banach spaces naturally appearing in mathematics, see [27,4,5,6]. Let us recall the respective definitions: Let $X$ be a Banach space. • A subspace $D \subset X^*$ is said to be a $\Sigma$-subspace of $X^*$ if there is a linearly dense set $M \subset X$ such that $D = \{x^* \in X^*;\ \{x \in M;\ x^*(x) \ne 0\} \text{ is countable}\}$. • $X$ is said to be 1-Plichko if $X^*$ admits a 1-norming $\Sigma$-subspace. • $X$ is said to be Plichko if $X^*$ admits a norming $\Sigma$-subspace. • $X$ is said to be weakly Lindelöf determined (WLD) if $X^*$ is a $\Sigma$-subspace of itself. Note that, as indicated by the presence of the constant 1 in the name, 1-Plichko spaces are not stable under isomorphisms. (The stability fails even in a very strong way, see [22].) The definitions used in [47,35] were different; their equivalence with the current one follows from [24, Theorem 2.7]. Any 1-Plichko space admits a PRI; this follows from [47, Theorem 1 and Note 1]. Moreover, 1-Plichko spaces form both a $\mathcal{P}$-class (by [24, Theorem 4.14]) and a $\mathcal{P}$-class (this can be proved by a minor adjustment of the proof of [24, Theorem 4.14]; it also follows from [30, Theorem 17.6], more precisely from its proof using [31, Theorem 27]). These results are not just a mere consequence of the existence of a PRI, as a (complemented) subspace of a 1-Plichko space need not be 1-Plichko, see [20,23] or [24, Sections 4.5 and 5.2].
So, one should take care during the construction of a PRI. 1-Plichko spaces can be characterized and generalized using the notion of a projectional skeleton introduced in [31]. Let us recall the definition and basic properties. Let $X$ be a Banach space. A projectional skeleton on $X$ is an indexed family $(P_s)_{s \in \Gamma}$ of bounded linear projections on $X$, where $\Gamma$ is an up-directed partially ordered set, satisfying the following properties: (i) $P_s X$ is separable for $s \in \Gamma$; (ii) $P_s P_t = P_t P_s = P_s$ whenever $s, t \in \Gamma$ and $s \le t$; (iii) if $(s_n)$ is an increasing sequence in $\Gamma$, then $s = \sup_{n \in \mathbb{N}} s_n$ exists in $\Gamma$ and $P_s X = \overline{\bigcup_{n \in \mathbb{N}} P_{s_n} X}$; (iv) $X = \bigcup_{s \in \Gamma} P_s X$. Note that the condition (iii) in particular implies that any increasing sequence in $\Gamma$ has a supremum, i.e., $\Gamma$ is $\sigma$-complete. If $(P_s)_{s \in \Gamma}$ is a projectional skeleton on $X$, the subspace of $X^*$ defined by $D = \bigcup_{s \in \Gamma} P_s^* X^*$ is said to be induced by the skeleton. If $\Gamma' \subset \Gamma$ is cofinal, i.e., $\forall s \in \Gamma\ \exists t \in \Gamma' : s \le t$, and $\sigma$-closed, i.e., whenever $(s_n)$ is an increasing sequence in $\Gamma'$, its supremum in $\Gamma$ belongs to $\Gamma'$, then clearly $(P_s)_{s \in \Gamma'}$ is also a projectional skeleton on $X$ and the respective induced subspace is again $D$. Therefore, by [31, Proposition 9 and Lemma 10] we can assume without loss of generality that the projections are uniformly bounded and, moreover, the following stronger version of (iii) holds: (iii') if $(s_n)$ is an increasing sequence in $\Gamma$ and $s = \sup_{n \in \mathbb{N}} s_n$, then $P_s x = \lim_{n \to \infty} P_{s_n} x$ for $x \in X$. A 1-projectional skeleton is a projectional skeleton made from norm one projections. By [31, Theorem 27] a Banach space is 1-Plichko if and only if it admits a commutative 1-projectional skeleton. Here, the word commutative means that $P_s P_t = P_t P_s$ for any $s, t \in \Gamma$, not only for comparable $s$ and $t$. The original proof of this result uses the method of elementary submodels and contains a gap (see the comments at the end of the current proof). We give an easy direct proof.
First observe, that for any countable set C ⊂ D there is some s ∈ Γ with C ⊂ P * s X * (this follows easily from definitions). Since P * s X * is weak * closed, we deduce that C w * ⊂ D. This shows that D is weak * -countably closed. Further, for any s ∈ Γ the space P * s X * is hereditarily separable in the weak * topology. Indeed, the mapping y * → y * • P s is an isomorphism of (P s X) * onto P * s X * which is also a weak * -to-weak * homeomorphism. (P s X) * , as the dual of a separable space, is hereditarily separable in the weak * topology (it has even a countable network), so the same is true for P * s X * . Now assume that A ⊂ D is bounded, x * ∈ D and x * ∈ A w * . Without loss of generality assume that the projections P s are uniformly bounded. Fix s 0 ∈ Γ such that P * s0 x * = x * . We can construct by induction countable sets C n ⊂ A and elements s n ∈ Γ such that • P * sn−1 C n is weak * dense in P * sn−1 A; • s n ≥ s n−1 and C n ⊂ P * sn X * . Let s = sup n s n . We claim that n C n is weak * -dense in P * s A. So, fix any y * ∈ A and let U be a weak * -neighborhood of P * s y * . We are going to prove that U ∩ C n = ∅. We have U = {z * ∈ X * ; |P * s y * (x j ) − z * (x j )| < ε for j = 1, . . . , k} for some x 1 , . . . , x n ∈ X and ε > 0. Since {P * s y * } ∪ n C n ⊂ P * s X * , we can without loss of generality assume that x j ∈ P s X for each j ∈ {1, . . . , k}. Moreover, the mentioned set is bounded, so without loss of generality we may assume that x j ∈ n P sn X (as this is a dense subset of P s X). Then there is some n such that x j ∈ P sn X for j = 1, . . . , k. Since P * sm y * w * −→ P * s y * , there is some m > n such that P * sm y * ∈ U . Then there is z * ∈ C m+1 such that P * sm z * ∈ U . Since for any j = 1, . . . , k we have z * (x j ) = z * (P sn x j ) = z * (P sm P sn x j ) = z * (P sm x j ) = P * sm z * (x j ), we deduce z * ∈ U . 
This completes the proof that $\bigcup_n C_n$ is weak$^*$-dense in $P_s^* A$; in particular, $x^* = P_s^* x^* \in \overline{P_s^* A}^{w^*} \subset \overline{\bigcup_n C_n}^{w^*}$, so $x^*$ lies in the weak$^*$ closure of the countable subset $\bigcup_n C_n$ of $A$. This completes the proof. Let us now point out what is the gap in the proof in [31]. The quoted result claims that $D$ has countable tightness, not only the bounded subsets of $D$. The proof uses elementary submodels, but the procedure is in fact similar to our proof. The problem appears when one assumes that the $x_j$ are from a dense subset. This is possible if $A$ is bounded, but not for an unbounded set. Fortunately, the statement of [31, Theorem 18] is true (see Remark 4.2(g)), but we do not know an easy and elementary proof. Projectional resolutions constructed from projectional skeletons The aim of this section is to prove Theorem 1.1. This will be done by proving Proposition 2.7 below. We will proceed by refining the proof of [30, Theorem 17.6] using some results of [10]. Throughout this section $X$ will be a fixed Banach space, $(P_s)_{s \in \Gamma}$ a fixed 1-projectional skeleton on $X$ and $D = \bigcup_{s \in \Gamma} P_s^* X^*$ the respective induced subspace of $X^*$. Further, $\sigma(X, D)$ will denote the weak topology on $X$ generated by $D$ (i.e., the weakest topology making all functionals from $D$ continuous). One of the key tools to prove Theorem 1.1 is the following lemma, especially its part (b). It follows easily from [10, Proposition 3.1] (cf. the proof of [10, Theorem 4.6]). In view of the fact that this lemma is not explicitly formulated and proved in [10] and it is important for the present paper, we provide a complete proof. Lemma 2.1. Let $Y \subset X$ be a closed subspace. (a) Suppose that $P_s(Y) \subset Y$ for each $s \in \Gamma$. Then $(P_s|_Y)_{s \in \Gamma}$ is a 1-projectional skeleton on $Y$ and the respective induced subspace is $\{x^*|_Y;\ x^* \in D\}$. (b) Suppose that $Y$ is $\sigma(X, D)$-closed. Then there is a cofinal $\sigma$-closed subset $\Gamma' \subset \Gamma$ such that $P_s(Y) \subset Y$ for each $s \in \Gamma'$. In particular, $(P_s|_Y)_{s \in \Gamma'}$ is a 1-projectional skeleton on $Y$ and the respective induced subspace is $\{x^*|_Y;\ x^* \in D\}$. Proof.
(a) It is obvious that (P s | Y ) s∈Γ is a 1-projectional skeleton on Y and the respective induced subspace is s∈Γ (P s | Y ) * Y * . Fix any s ∈ Γ, y * ∈ Y * and x * ∈ X * such that y * = x * | Y (such x * exists by the Hahn-Banach theorem). For any y ∈ Y we have (P s | Y ) * y * (y) = y * (P s y) = x * (P s y) = P * s x * (y), hence (P s | Y ) * y * = P * s x * | Y . Therefore (b) Let K = (B X * , w * ). Then K is a compact space and (P * s | K ) s∈Γ is a retractional skeleton on K, the respective induced subset is D ∩ K. (For the definition of a retractional skeleton see Section 5.1 or, for example, [10]; the statement follows easily from definitions, cf. [9,Proposition 3.14].) The canonical mapping J : X → C(K) defined by is a σ(X, D)-to-τ p (D ∩ K) homeomorphism of X into C(K) (where τ p (D ∩ K) denotes the topology of pointwise convergence on D ∩ K). Moreover, it is easy to observe that J(X) is a τ p (D ∩ K)-closed subset of C(K) (cf. [21,Lemma 2.14] for the real case and the proof of [26,Theorem 3.2] for the complex case). Hence J(Y ) is τ p (D ∩ K) closed in C(K). Now it follows directly from [10, Proposition 3.1] that there is a cofinal σ-closed subset Γ ′ ⊂ Γ such that for each s ∈ Γ ′ f • (P * s | K ) ∈ J(Y ) for each f ∈ J(Y ). Since for any y ∈ Y and x * ∈ K we have (Jy • P * s )(x * ) = Jy(P * s x * ) = P * s x * (y) = x * (P s y) = J(P s y)(x * ), we conclude that P s (Y ) ⊂ Y for s ∈ Γ ′ . Now it is clear that (P s ) s∈Γ ′ is a 1-projectional skeleton on X with induces subspaces equal to D. Hence the rest of (b) follows from (a) applied to the skeleton (P s ) s∈Γ ′ . A key tool to constructing a PRI is the following construction of a single projection coming from [31, Lemma 11]. For any nonempty directed subset A ⊂ Γ we define a mapping x ∈ X. (2.1) By [31,Lemma 11] (or [30,Proposition 17.8]) the mapping P A is a well-defined projection of X onto s∈A P s X. It is clear that P A = 1. For completeness we set P ∅ = 0. 
The next easy lemma deals with compatibility of the projections P A and P s . Lemma 2.2. Let A ⊂ Γ be a nonempty directed subset. Then the following assertions hold. (a) P s P A = P A P s = P s for each s ∈ A. (b) Let t ∈ Γ. If P t commutes with P s for each s ∈ A, then P t commutes with P A . (c) If B ⊂ Γ is a directed set containing A, then P A P B = P B P A = P A . Proof. (a) Fix s ∈ A and x ∈ X. Then for each t ∈ A, t ≥ s we have P s P t x = P t P s x = P s x. Hence, by taking the limit over t ∈ A we get P s P A x = P A P s x = P s x. (b) Suppose t ∈ Γ satisfies the assumptions. Then for each s ∈ A we have P s P t x = P t P s x. Thus, by taking the limit over s ∈ A we get P A P t x = P t P A x. (c) Fix x ∈ X. By (a) we get P s P B x = P B P s x = P s x for s ∈ A, thus by taking the limit over s ∈ A we deduce Next we will study in more detail the projection P A . The first statement of the assertion (iii) is used as obvious in the last two sentences of the proof of [30,Theorem 17.6]. The added value of our version is a more precise statement of (iii) and, mainly, the assertion (iv) which plays a key role. If A ⊂ Γ, we denote by A σ the smallest σ-closed subset of Γ containing A. Proposition 2.3. Let A ⊂ Γ be a nonempty directed subset. Denote Y = P A X. (i) A σ is a directed subset of Γ (ii) P A = P Aσ and, moreover, P A X = s∈Aσ P s X. (iii) The family (P s | Y ) s∈Aσ is a 1-projectional skeleton in Y . The respective induced subspace in Y * is If the skeleton on X is commutative, then the last inclusion can be replaced by equality. (iv) ker P A is σ(X, D)-closed. Therefore there is a cofinal σ-closed subset Γ ′ ⊂ Γ such that the family (P s | kerPA ) s∈Γ ′ is a 1-projectional skeleton on ker P A . The respective induced subspace is Proof. (i) Let us start by describing A σ . Define sets B α for α < ω 1 as follows. • B 0 = A; • B α+1 = B α ∪ {sup n t n ; (t n ) is an increasing sequence in B α } for α < ω 1 ; • B λ = α<λ B α if α < ω 1 is limit. 
Then clearly A σ = α<ω1 B α . Indeed, since A σ is σ-closed and contains B 0 = A, by transfinite induction we get B α ⊂ A σ for α < ω 1 , which proves the inclusion '⊃'. To prove the converse inclusion it is enough to observe that the set on the right-hand side is σ-closed. To show that A σ is directed, it is enough to prove that B α is directed for each α < ω 1 . It is true for α = 0 as B 0 = A and A is assumed to be directed. Suppose that B α is directed for some α < ω 1 . Fix any two indices s, t ∈ B α+1 . Then there are increasing sequences (s n ) and (t n ) in B α such that s = sup n s n and t = sup n t n . (If s ∈ B α , we can take s n = s for each n ∈ N, and similarly for t.) Since B α is directed, we can find a sequence (u n ) in B α such that • u 1 ≥ s 1 and u 1 ≥ t 1 ; • u 2n ≥ u 2n−1 and u 2n ≥ s n+1 for n ∈ N; • u 2n+1 ≥ u 2n and u 2n+1 ≥ t n+1 for n ∈ N. Since (u n ) is increasing, u = sup n u n ∈ B α+1 . Moreover, u ≥ s n for all n ∈ N, hence u ≥ s. Similarly, u ≥ t. This completes the proof that B α+1 is directed. Since the limit induction step is obvious, the proof of (i) is completed. (ii) By (i) the mapping P Aσ is a well-defined projection with range s∈Aσ P s X. Since A ⊂ A σ , P A X ⊂ P Aσ X. Conversely, using the sets B α defined within the proof of (i) and the property (iii) of projectional skeletons, by transfinite induction we deduce that P s X ⊂ P A X for each s ∈ A σ . Hence P A X = P Aσ X. Let us continue by proving P A = P Aσ . Fix x ∈ X. Then Indeed, the first equality follows from Lemma 2.2(c). To prove the second one observe that P Aσ x ∈ P A X due to the previous paragraph. Finally, take any x ∈ P A X. By the definition of P A there is a sequence (s n ) in A such that P s x − x < 1 n whenever s ∈ A satisfies s ≥ s n . Using the fact that A is directed, we can assume without loss of generality that the sequence (s n ) is increasing. Set s = sup n s n . 
Then s ∈ A σ and (iii) The properties (i)-(iii) of projectional skeleton are obvious, the last property follows from (ii). Let us continue by proving (2.2). The first equality is just the definition of the induced subspace. To show the second equality fix s ∈ A σ . Take any y * ∈ Y * and any x * ∈ X * such that y * = x * | Y (such x * exists by the Hahn-Banach theorem). Then (P s | Y ) * y * = P * s x * | Y (see the proof of Lemma 2.1(a)), so the second equality follows. The last inclusion is obvious. Next suppose that the skeleton on X is commutative. By Lemma 2.2(b) we see that, for any s ∈ Γ we have P s P A = P A P s and hence the subspace Y = P A X is invariant for P s . It follows from Lemma 2.1(a) that (P s | Y ) s∈Γ is a projectional skeleton on Y and the respective induced subspace is Indeed, the first equality is just the definition of the kernel. The inclusion '⊃' from the second one follows from the definition of P A . To prove the converse observe that P s = P s P A by Lemma 2.2(a). The third equality is a consequence of the Hahn-Banach theorem and the last two equalities follow easily from definitions. Since P * s X * ⊂ D for each s ∈ A, we conclude that ker P A is σ(X, D)-closed. The rest of (iv) now follows immediately from Lemma 2.1(b). The next proposition deals in more detail with the description of D A from (2.2) and characterizes the situation when it is maximal possible. Proposition 2.4. Let A ⊂ Γ be a directed subset. The following assertions are equivalent. (3) P A X is a σ(X, D)-closed subspace of X. We claim that for any s ∈ Γ ′′ we have P s P A = P A P s . Indeed, since P s (P A X) ⊂ P A X, we deduce P A P s P A = P s P A . Moreover, ker P A = (I − P A )X is also invariant for P s , thus P s (I − P A )X ⊂ ker P A , i.e., P A P s (I − P A ) = 0. In other words, P A P s = P A P s P A . Therefore P s P A = P A P s and the proof is complete. Since Γ ′ is cofinal in Γ, without loss of generality we may assume that s ∈ Γ ′ . 
By the choice of Γ′ we have P_s P_A = P_A P_s, hence (6) follows. Note that under the assumptions of (4) the system (P_s)_{s∈Γ′} is a 1-projectional skeleton on X with induced subspace D. Lemma 2.1(a) applied to the skeleton (P_s)_{s∈Γ′} yields that (P_s|_{P_A X})_{s∈Γ′} is a 1-projectional skeleton on P_A X with induced subspace {x^*|_{P_A X} ; x^* ∈ D}.

Corollary 2.5.
(i) P_A is σ(X, D)-to-σ(X, D) continuous whenever A ⊂ Γ is a countable directed subset.
(ii) If the skeleton is commutative, then P_A is σ(X, D)-to-σ(X, D) continuous for any directed subset A ⊂ Γ.

Proof. The assertion (ii) follows immediately from Proposition 2.4. Let us show the assertion (i). Enumerate A = {s_n ; n ∈ N}. Since Γ is directed, we can find an increasing sequence (t_n) in Γ such that t_n ≥ s_n for n ∈ N. Let t = sup_n t_n. Then the set Γ′ = {s ∈ Γ ; s ≥ t} is a cofinal σ-closed subset. Further, for each s ∈ Γ′ and n ∈ N we have s ≥ t ≥ s_n, thus P_s P_{s_n} = P_{s_n} P_s (= P_{s_n}). By Lemma 2.2(b) we deduce that P_s P_A = P_A P_s. Hence, we can conclude by using Proposition 2.4.

The following lemma is the key step to constructing a PRI starting from a 1-projectional skeleton. Its proof is completely standard. In the proof of [30, Theorem 17.6] it is used without explicit formulation and proof. We provide a proof for the sake of completeness.

Lemma 2.6. Let κ = dens X. Then there is a transfinite sequence (A_α)_{α≤κ} of subsets of Γ satisfying the following properties.
(v) A_λ = ⋃_{α<λ} A_α whenever λ ≤ κ is limit.
Moreover, in this case the system (P_s)_{s∈(A_κ)_σ} is a 1-projectional skeleton on X with induced subspace D.

Proof. Since Γ is up-directed, we can fix a mapping ϕ : If B ⊂ Γ is any nonempty subset, we define the sequence (B_k) by Now we are going to perform the main construction. Let {x_α ; α < κ} be a dense subset of X not containing 0. We proceed by transfinite induction. Set A_0 = ∅. Then (i) is fulfilled. Fix some s ∈ Γ with P_s x_0 = x_0, let B_1 ⊂ Γ be an infinite countable set containing s and set A_1 = η(B_1).
Then A_1 is directed, hence (ii) is satisfied. Moreover, A_1 is infinite countable and P_{A_1} X is separable (as it is the closure of ⋃_{s∈A_1} P_s X), hence (iii) is satisfied as well. The remaining conditions are void, so the first step of the induction is completed. Suppose that 1 ≤ α < κ and we have constructed A_β for β ≤ α satisfying the conditions (i)-(v). Since dens P_{A_α} X < κ, we have P_{A_α} X ≠ X. Let γ < κ be the smallest ordinal such that x_γ ∉ P_{A_α} X. Fix some s ∈ Γ with P_s x_γ = x_γ and set A_{α+1} = η(A_α ∪ {s}). Then A_{α+1} is directed and contains A_α, so (ii) and (iv) are satisfied, and, clearly, dens P_{A_{α+1}} X ≤ card A_{α+1} (as ⋃_{s∈A_{α+1}} P_s X is dense in P_{A_{α+1}} X). Thus (iii) is valid as well. Since (v) and (vi) are void in this case, the 'isolated' induction step is completed. Next suppose that λ ≤ κ is limit and we have constructed A_α for α < λ such that the conditions (i)-(v) are satisfied. We simply let A_λ = ⋃_{α<λ} A_α. Then clearly the conditions (ii), (iv) and (v) are again fulfilled. Moreover, card A_λ ≤ card λ and dens P_{A_λ} X ≤ card λ, as ⋃_{s∈A_λ} P_s X is dense in P_{A_λ} X. So, the condition (iii) is fulfilled as well. It remains to prove (vi). We will show that P_{A_κ} X contains x_α for each α < κ. To this end it is enough to observe that x_α ∈ P_{A_{α+1}} X for each α < κ. This can be proved by transfinite induction: x_0 ∈ P_{A_1} X by the construction. Suppose that α < κ is such that x_γ ∈ P_{A_{γ+1}} X for each γ < α. Then {x_γ ; γ < α} ⊂ P_{A_α} X. Therefore, by the construction of A_{α+1} we have x_α ∈ P_{A_{α+1}} X.

The next proposition is the main achievement of this section. The assertions (a) and (b) are just a slightly more precise version of [30, Theorem 17.6]; the assertion (c) is new and provides a proof of Theorem 1.1.

Proposition 2.7. Let κ = dens X and let (A_α)_{α≤κ} be the family provided by Lemma 2.6. Then the following assertions hold.
(a) The family (P_{A_α})_{α≤κ} is a PRI on X.
(b) For each α ≤ κ the family (P_s|_{P_{A_α}X})_{s∈(A_α)_σ} is a 1-projectional skeleton on P_{A_α} X. The induced subspace is contained in {x^*|_{P_{A_α}X} ; x^* ∈ D}. In case the skeleton on X is commutative, this inclusion is an equality.
(c) For each α < κ the space (P_{α+1} − P_α)X admits a 1-projectional skeleton with induced subspace {x^*|_{(P_{α+1}−P_α)X} ; x^* ∈ D}.

Proof.
(a) The property (i) of a PRI follows from the properties (i) and (vi) of the family (A_α). By the property (ii) it is clear that P_{A_α} is a norm-one projection for α > 0, thus the property (ii) of a PRI is fulfilled. The properties (iii)-(v) of a PRI follow from the respective properties of the family (A_α), in the case of (iv) together with Lemma 2.2(c).

A characterization of commutativity of a projectional skeleton

The aim of this section is to prove two theorems - Theorem 3.1 characterizing Σ-subspaces and Theorem 3.4 characterizing commutativity of a projectional skeleton. A large part of the characterization given in Theorem 3.1 is not new, but we provide a unified approach and some new points of view. This is explained in more detail in Remarks 3.3 below. We continue by recalling definitions of some notions used in the following theorem. A projectional generator on a Banach space X is a pair (D, Φ), where D is a norming subspace of X^* and Φ is a mapping defined on D whose values are countable subsets of X, satisfying moreover the condition

Φ(B)^⊥ ∩ \overline{span_Q B}^{w^*} = {0} for every nonempty B ⊂ D,

where Φ(B) = ⋃_{x^*∈B} Φ(x^*). The notion of a projectional generator was introduced in [36] as a technical tool for constructing a PRI. It is used for example in [15]. There are some minor differences between the definitions used by different authors; it is not clear whether these definitions are equivalent, but the differences are not important for applications. Further, if M is any set, by [M]^{≤ω} we denote the family of all the countable subsets of M (including the finite sets). A mapping ϕ : [M]^{≤ω} → [N]^{≤ω} is called ω-monotone if ϕ(A) ⊂ ϕ(B) whenever A ⊂ B, and ϕ(⋃_n A_n) = ⋃_n ϕ(A_n) whenever (A_n) is an increasing sequence in [M]^{≤ω}. This terminology is a bit misleading (a more natural name would be σ-continuous monotone mapping), but we prefer to use the usual terminology which is nowadays becoming standard, cf. [40,7,8]. Recall that a Markushevich basis of a Banach space X is an indexed family (x_α, x^*_α)_{α∈Λ} in X × X^* satisfying the following three conditions.
• x * α (x α ) = 1 and x * α (x β ) = 0 for α = β in Λ (i.e., it is a biorthogonal system); • span{x α ; α ∈ Λ} = X; • the set {x * α ; α ∈ Λ} separates points of X. Moreover, a Markushevich basis (x α , x * α ) α∈Λ is said to be strong if ∀x ∈ X : Finally, a topological space is said to be primarily Lindelöf if it is a continuous image of a closed subset of the space (L Γ ) N for a set Γ, where L Γ is the one-point lindelöfication of the discrete set Γ (i.e., L Γ = Γ ∪ {∞}, the points of Γ are isolated in L Γ and neighborhoods of ∞ are complements of countable subsets of Γ). This class of topological spaces was used in [37] to characterize Corson compact spaces (see also [3]), the characterization was generalized to Valdivia compacta and 1-Plichko Banach spaces in [21] (see also [24,Chapter 2]). Theorem 3.1. Let X be a Banach space and D ⊂ X * a norming subspace. The following assertions are equivalent. (2) D is weak * -countably closed and there is a linearly dense subset M ⊂ X such that the pair is a projectional generator on X. (3) There is a linearly dense subset M ⊂ X and an ω-monotone mapping ψ : There is a linearly dense subset M ⊂ X and an ω-monotone mapping ϑ : is a bijection of (M \ ψ(A)) ⊥ onto (span ϑ(A)) * . (5) D is induced by a commutative projectional skeleton on X. Proof. It is clear that the assertions (1)- (9) are not changed by taking an equivalent norm. Therefore without loss of generality we may and shall assume that D is 1-norming. So, we can fix a mapping η assigning to each x ∈ X a countable subset η(x) ⊂ D ∩ B X * such that (1)⇒(2) Suppose that D is a Σ-subspace and let M ⊂ X be a linearly dense set witnessing it. Define Φ as in the statement of (2). We will show that the pair (D, Φ) is a projectional generator. By the very definition of a Σ-subspace it is clear that Φ is countably-valued. Further, take any A ⊂ D. Then Finally, to show that D is weak * -countably closed, fix a countable set C ⊂ D. 
Then C ⊂ (M \ Φ(C))^⊥ ⊂ D. Indeed, the first inclusion follows from the definition of Φ and the second one follows from the definition of a Σ-subspace, as Φ(C) is countable. Since (M \ Φ(C))^⊥ is weak^*-closed, the proof is complete.

(2)⇒(1) Let M and Φ be as in the statement of (2). Since M is linearly dense, we can define D′ to be the Σ-subspace generated by M. Φ is countably-valued, thus D ⊂ D′. Further, D is 1-norming and weak^*-countably closed, thus D ∩ B_{X^*} is weak^*-dense and weak^*-countably closed in D′ ∩ B_{X^*}. Since D′ ∩ B_{X^*} equipped with the weak^*-topology has countable tightness (in fact, it is Fréchet-Urysohn), we deduce that D ∩ B_{X^*} = D′ ∩ B_{X^*}, hence D = D′ is a Σ-subspace.

(1)⇒(3) We are going to prove the implication for real spaces. The proof for complex ones is exactly the same, one just needs to replace everywhere Q by its complex version Q + iQ. Suppose that D is a Σ-subspace and let M be a linearly dense set witnessing it. Define Φ as in the statement of (2). Define a mapping θ_1 : It is obvious that θ_1 is an ω-monotone mapping. Further, for each n ≥ 2 define a mapping θ_n by It is clear that θ_n is an ω-monotone mapping for each n ∈ N. Let us prove that ψ has the required properties. To this end fix A ∈ [D ∪ M]^{≤ω}. By the construction it is obvious that Since ψ(A) is a countable subset of M, we deduce that (M \ ψ(A))^⊥ ⊂ D by the definition of a Σ-subspace. It remains to prove the last property. We start by showing that

(∗) ‖x‖ ≤ ‖x + y‖ whenever x ∈ span_Q ψ(A) and y ∈ span_Q (M \ ψ(A)).

So, fix any x ∈ span_Q ψ(A) and y ∈ span_Q (M \ ψ(A)). Observe that there is n ∈ N such that which completes the proof of (∗). Next we are going to show that

(∗∗) ‖x^*‖ = ‖x^*|_{\overline{span} ψ(A)}‖ for every x^* ∈ (M \ ψ(A))^⊥.

Since the inequality '≤' is obvious, it is enough to prove the converse one. Fix any c < ‖x^*‖. Then there is z_0 ∈ B_X with |x^*(z_0)| > c. Since M is linearly dense, span_Q M is norm-dense in X, thus there is z_1 ∈ B_X ∩ span_Q M with |x^*(z_1)| > c. Then z_1 can be uniquely expressed as z_1 = x + y with x ∈ span_Q ψ(A) and y ∈ span_Q (M \ ψ(A)). By (∗) we get ‖x‖ ≤ ‖z_1‖ ≤ 1. Moreover, x^*(y) = 0, hence c < |x^*(z_1)| = |x^*(x)| ≤ ‖x^*|_{\overline{span} ψ(A)}‖. Thus (∗∗) is proved.
The last ingredient is Define first x * on span Q M = span Q ψ(A) + span Q (M \ ψ(A)) by It follows from ( * ) that span Q ψ(A)∩span Q (M \ψ(A)) = {0}, hence x * is a well-defined Q-linear functional. Moreover, it also follows from ( * ) that |x * (z)| ≤ y * for each z ∈ B X ∩ span Q M . It follows that x * can be uniquely extended to an element of X * . It is clear that this extension belongs to (M \ ψ(A)) ⊥ and extends y * . Finally, putting together ( * * ) and ( * * * ) we get that which completes the proof. (3)⇒(4) Let M and ψ be as in the statement of (3). Let us define the mapping ϑ by the formula It is clear that the mapping ϑ is ω-monotone. For any A ∈ [M ] ≤ω obviously A ⊂ ϑ(A) and the mapping x * → x * | span ϑ(A) is a bijection of (M \ ϑ(A)) ⊥ onto (span ϑ(A)) * , due to the properties of ψ. It remains to show the formula for D. By the properties of ψ and definition of ϑ we deduce that To prove the converse inclusion fix any This completes the proof. (4)⇒(5) Let M and ϑ be as in the assertion (4). We are going to construct a projectional skeleton. We start by defining the respective index set. Set and consider the partial order on Γ given by inclusion. This index set has the following properties: (iii) if (A n ) n is an increasing sequence in Γ, then n A n ∈ Γ; (iv) Γ is closed to taking arbitrary intersections; (v) Γ is a lattice, i.e., any two-point subset of Γ admits a supremum and an infimum in Γ. Let us now prove these properties: ≤ω . Set C 1 = C and define, by induction, C n+1 = ϑ(C n ) for n ∈ N. Finally, set A = n C n . By the properties of ϑ we deduce that the sequence (C n ) is increasing. Hence A ⊂ C and, moreover, (iv) Let Γ ′ ⊂ Γ be any subset. If Γ ′ = ∅, it is an element of Γ. Suppose that A = Γ ′ = ∅. Then the intersection belongs to Γ) and their supremum is Indeed, the subset of Γ on the right-hand side is nonempty by (i) and the intersection belongs to Γ by (iv). This completes the proof of the properties of Γ. 
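For two elements A, B ∈ Γ, the lattice operations used in property (v) can be written down explicitly; the following formulas are a sketch consistent with the properties (i) and (iv) established above:

```latex
A \wedge B \;=\; A \cap B ,
\qquad
A \vee B \;=\; \bigcap \,\{\, C \in \Gamma \;;\; C \supset A \cup B \,\} ,
\qquad A, B \in \Gamma .
```

The family on the right-hand side is nonempty by (i), and both intersections belong to Γ by (iv), so Γ is indeed a lattice.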
It remains to construct the projections. To this end we will use the following easy lemma. The lemma was essentially used in [11], but we formulate it explicitly as we will use it also in the following section. Lemma 3.2. Let X be a Banach space, Y ⊂ X a closed subspace, V ⊂ X * a weak * -closed subspace such that the mapping Then there is a bounded linear projection P on X such that P X = Y , P * X * = V and ker P = V ⊥ . Proof. By the open mapping theorem the above restriction map is an isomorphism of Indeed, take y ∈ Y and v ∈ V ⊥ . Fix y * ∈ Y * such that y * = 1 and |y * (y)| = y . By the assumption there is x * ∈ V with x * | Y = y * . By the above we get x * ≤ 1 c , thus cx * ≤ 1. Therefore y + v ≥ |cx * (y + v)| = c |x * (y)| = c |y * (y)| = c y . It follows that Y ∩ V ⊥ = {0} and the projection of P : Indeed, we used the assumptions that V is weak * -closed and that the only x * ∈ V with x * | Y = 0 is the zero functional. Hence the bipolar theorem shows that Y + V ⊥ = X. It follows that the projection P is defined on the whole X. Moreover, P * X * is weak * -closed (as P * is a weak * -to-weak * -continuous projection), thus This completes the proof. Suppose that A ∩ B = C = ∅. To show that P A P B = P C it is enough to prove the equality for any x ∈ M : Now we are ready to prove that (P A ) A∈Γ is a commutative projectional skeleton. Firstly, Γ is an up-directed partially order set by the above property (ii). Let us check the properties of a projectional skeleton. P A X = span A, so it is separable for each A ∈ Γ, hence the property (i) is fulfilled. The property (ii) follows from (•). To prove the property (iii) fix an increasing sequence (A n ) in Γ. By the property (iii) of Γ the union A = n A n belongs to Γ. Then A is clearly the supremum of the sequence (A n ) and, moreover, hence P A X = n P An X. Further, let us prove the property (iv). To this end fix any x ∈ X. Since M is linearly dense, there is a countable set C ⊂ M with x ∈ span C. 
By the property (i) of Γ there is A ∈ Γ with A ⊃ C. Then x ∈ span C ⊂ span A = P A X. Finally, the skeleton is commutative by (•). It remains to show that D is the subspace induced by this skeleton, i.e., The inclusion '⊃' follows from the assumption (4), as for any A ∈ Γ we have Conversely, let x * ∈ D. By (4) there is C ∈ [M ] ≤ω such that x * ∈ (M \ ϑ(C)) ⊥ . By the property (i) of Γ there is A ∈ Γ with A ⊃ ϑ(C). Then (5)⇒(6) This implication will be proved by transfinite induction on the density of X. If X is separable, then D = X * and X admits a countable Markushevich basis, so the statement is obvious. Let κ be an uncountable cardinal such that the implication holds whenever dens X < κ. Suppose that dens X = κ and (5) is satisfied. Let (P s ) s∈Γ be a commutative projectional skeleton inducing D. Since D is 1-norming, we can without loss of generality assume that it is a 1-projectional skeleton (up to passing to a closed cofinal subset of Γ, see Lemma 1.3). Let (P α ) α≤κ be a PRI on X provided by Proposition 2.7. Then the space (P α+1 − P α )X, for each α < κ admits a commutative 1-projectional skeleton with the induced subspace D α+1 α = {x * | (Pα+1−Pα)X ; x * ∈ D}. For any α < κ there is, due to the induction hypothesis, a Markushevich basis (x α,j , x * α,j ) j∈Jα of the space (P α+1 − P α )X such that By the proof of [15, Proposition 6.2.4] the family (x α,j , x * α,j • (P Aα+1 − P Aα )) j∈Jα,α<κ is a Markushevich basis of X. It remains to show that α for each α < κ. So, to show that x * belongs to the set on the right-hand side it is enough to show that the set is countable. We start by observing that, by the definition of D, there is s ∈ Γ with P * , the set {P s zα; α ∈ C 1 } is an uncountable discrete subset of the separable space P s X, which is a contradiction. ⊃: Let x * belong to the set on the right-hand side. Then the set is countable. 
Moreover, So, for any α ∈ C there is y * α ∈ D such that y * α | (Pα+1−Pα)X = x * | (Pα+1−Pα)X , Then there is s α ∈ Γ such that y * α = P * sα y * α . Since C is countable, there is s ∈ Γ such that s ≥ s α for α ∈ C. To show that x * ∈ D it is enough to prove that P * s x * = x * . Recall that by the construction of the PRI the projection P s commutes with each P α (by Lemma 2.2(b)). Suppose that α < κ and x ∈ (P α+1 − P α )X. Then α , x α ) α∈Λ be the Markushevich basis provided by (6). It follows immediately that D is a Σ-subspace, thus it is weak * -countably closed by the already proved implication (1)⇒(2). Further, since the Markushevich basis is a biorthogonal system, obviously x * α ∈ D for any α ∈ Λ. It remains to show that the set H = {x α ; α ∈ Λ} ∪ {0} is σ(X, D)-Lindelöf. So, let U be a cover of H consisting of σ(X, D)-open sets. Then there is U ∈ U such that 0 ∈ U . By the definition of the topology σ(X, D) there are x * 1 , . . . , x * n ∈ D and ε > 0 such that {x ∈ X; x * j (x) < ε for j = 1, . . . , n} ⊂ U. For each j ∈ {1, . . . , n} the set is countable and, moreover, H \ n j=1 M j ⊂ U . So, H \ U is countable and hence one can find a countable subfamily of U covering H. (7)⇒(8) Let (x * α , x α ) α∈Λ be the Markushevich basis provided by (7). Set H = {x α ; α ∈ Λ} ∪ {0} and observe that all the nonzero points of H are isolated. Indeed, let α ∈ Λ. Since x * α ∈ D, the set Since H is σ(X, D)-Lindelöf, it follows that for each σ(X, D)-open neighborhood U of 0 the set H \ D is countable. Therefore H is a canonical continuous image of the space L Λ , thus it is primarily Lindelöf. Further, the closed unit ball B X is σ(X, D)-closed as D is 1-norming, hence B X ∩ span H is primarily Lindelöf and thus the product space is primarily Lindelöf as well. Finally, the mapping F : Z → X defined by is well defined (the series converges absolutely in the norm) and maps Z onto X (as span H is dense in X). 
So, to complete the proof it is enough to show that F is continuous to the topology σ(X, D). To this end it suffices to prove that x * • F is continuous on Z for each x * ∈ D. But the partial sums are continuous on Z and the limit is uniform on Z. (3) and (4), secondly in a detailed analysis of the properties of Markushevich bases in the assertions (7) and (8) and, finally, in providing a proof of (1)⇔(5) avoiding the set-theoretical method of elementary submodels. The assertions (3) and (4) provide another view on projectional skeletons which combine some approaches from [11,12] with an idea of [8]. For example, a similar statement to the implication (1)⇒(3) is [12,Lemma 11] where rich families are used instead of ω-monotone mappings. In [8] the author shows the equivalence of separable reduction methods using rich families and ω-monotone mappings. We show that ω-monotone mappings can be used to characterize projectional skeletons as well. Another use of ω-monotone mappings is demonstrated in [11] by the use the notion of Asplund generator to characterize Asplund spaces. (b) We point out that the projectional skeleton constructed in the proof of (4)⇒(5) is automatically simple in the sense of [13, Section 4] (i.e., 'indexed by the ranges of projections') and, moreover, its index set is a lattice. (c) The assertion (6) can be strengthened by requiring that the Markushevich bases in question is moreover strong. The proof can be done by transfinite induction exactly in the same way as the proof of (5)⇒(6). Indeed, separable spaces admit a strong Markushevich basis by [43] (see also [19,Theorem 1.36]) and this property is preserved in the induction step as remarked in the proof of [19,Theorem 5.1]. (d) Observe that in the proofs of (6)⇒(7)⇒(8) the Markushevich basis has not been changed. Thus any Markushevich basis with the property from (6) has also the properties from (7) and (8). 
Moreover, if the basis satisfies the properties from (7), the set H = {x α ; α ∈ Λ}∪{0} is σ(X, D)-closed and its nonzero points are isolated. The latter statement was proved above. To see the first one, fix any x ∈ X \ H. We distinguish the following three cases: α∈Λ be a Markushevich basis with the properties from the assertion (6). Then D is a Σ-subspace, as the set M = {x α ; α ∈ Λ} witnesses it. Therefore (1) is satisfied and, going through the proofs of (1)⇒(3)⇒(4)⇒(5) we can construct a commutative projectional skeleton (P s ) s∈Γ with induced subspace D. Moreover, by the construction, this skeleton has a special behavior on the basis. More precisely, This behavior is specific for the commutative case. Indeed, suppose we have a projectional skeleton with induced subspace D and a Markushevich basis such that ( ) is satisfied. Then D is the Σ-subspace generated by the set M = {x α ; α ∈ Λ}. To see this take any and this set is countable as it is relatively discrete in the weak topology, hence, a fortiori, in the norm topology, and P s X is separable. Conversely, let x * ∈ X * be such that the set is countable. By the properties of projectional skeletons there is s ∈ Γ with {x α ; α ∈ Λ 0 } ⊂ P s X. Then we have We continue by the following theorem. Given a projectional skeleton, it characterizes when the induced subspace is in fact a Σ-subspace. Theorem 3.4. Let X be a Banach space, (P s ) s∈Γ a projectional skeleton on X and let D be the induced subspace. The following assertions are equivalent. ( Note that the assertion (1) can be replaced by any of its equivalents provided by Theorem 3.1 and the continuity requirement in (3) can be replaced by any of its equivalents from Proposition 2.4. An important tool to the proof of the theorem is the following lemma on uniqueness. Lemma 3.5. Let X be a Banach space and let (P s ) s∈Γ and (Q j ) j∈J be two projectional skeletons on X inducing the same subspace D ⊂ X * . 
Then for any choice of s ∈ Γ and j ∈ J there are s ′ ∈ Γ and j ′ ∈ J such that s ′ ≥ s, j ′ ≥ j and P s ′ = Q j ′ . Proof. First observe that if P is any bounded projection on X with separable range, then P * X * is weak *separable. Indeed, Y = P X is separable and the projection P can be expressed as P = T Q, where T is the canonical isometric embedding of Y into X and Q is the projection P considered as an operator Secondly, up to passing to cofinal σ-closed subsets of Γ and J we may assume without loss of generality that the projections from both skeletons are uniformly bounded [31, Proposition 9] and hence the stronger condition (iii') holds (see the introductory section). Let us define sequences (s n ) in Γ and (j n ) in J inductively as follows: • s 0 = s, j 0 = j. • Given s n−1 and j n−1 defined, find s n ∈ Γ, s n ≥ s n−1 such that P sn X ⊃ Q jn−1 X and P * sn X * ⊃ Q * jn−1 X * . This is possible by the properties of projectional skeletons, as Q jn−1 X is a separable subspace of X and Q * jn−1 X * is a weak * -separable subspace of D. • In the same way, given s n and j n−1 defined, find j n ∈ J, j n ≥ j n−1 such that Q jn X ⊃ P sn X and Q * jn X * ⊃ P * sn X * . Finally, set s ′ = sup n s n and j ′ = sup n j n . Then due to the property (iii) of projectional skeletons and by the property (iii') of projectional skeletons. Indeed, by the property (iii') we have, given any x * ∈ X * and x ∈ X, thus P * sn x * w * → P * s x * , and similarly for the other skeleton. Therefore, the projections P s ′ and Q j ′ have the same ranges and the same kernels, thus they are equal. Proof of Theorem 3.4. Note that D is norming (by [31, Proposition 9 and Section 4.3]). Since the assertions (1)-(3) are not affected by renormings, we may assume without loss of generality that D is 1-norming. So, up to passing to a cofinal σ-closed subset of Γ (this does not affect the assertions (1)-(3)) we may assume that (P s ) s∈Γ is a 1-projectional skeleton. 
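Before proceeding, let us record the conclusion of the proof of Lemma 3.5 in condensed form; with s′ = sup_n s_n and j′ = sup_n j_n as above, the properties (iii) and (iii′) yield the following sketch:

```latex
P_{s'}X \;=\; \overline{\textstyle\bigcup_n P_{s_n}X}
        \;=\; \overline{\textstyle\bigcup_n Q_{j_n}X}
        \;=\; Q_{j'}X ,
\qquad
\ker P_{s'} \;=\; \Big(\textstyle\bigcup_n P_{s_n}^* X^*\Big)_{\perp}
            \;=\; \Big(\textstyle\bigcup_n Q_{j_n}^* X^*\Big)_{\perp}
            \;=\; \ker Q_{j'} ,
```

and a bounded linear projection is determined by its range and kernel, hence P_{s′} = Q_{j′}.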
(1)⇒(2) Suppose that D is a Σ-subspace. By Theorem 3.1 it follows that there is a commutative 1-projectional skeleton (Q_j)_{j∈J} inducing D. Let Γ_0 = {s ∈ Γ ; P_s = Q_j for some j ∈ J}. By Lemma 3.5 we see that Γ_0 is cofinal in Γ. Observe that any cofinal set is automatically up-directed. So, it makes sense to define Γ′ = (Γ_0)_σ (using the notation from Section 2). Then clearly Γ′ is a cofinal σ-closed subset of Γ. We claim that P_s P_t = P_t P_s whenever s, t ∈ Γ′. To prove that we will use the transfinite construction of (Γ_0)_σ described in the proof of Proposition 2.3(i). Let Γ_α, α < ω_1, be the respective approximations of (Γ_0)_σ. We will prove by transfinite induction that P_s P_t = P_t P_s whenever s, t ∈ Γ_α. The validity for α = 0 follows from the definition of Γ_0 and commutativity of the skeleton (Q_j)_{j∈J}. Suppose it holds for some α < ω_1 and suppose s, t ∈ Γ_{α+1}. Then there are increasing sequences (possibly constant) (s_n) and (t_n) in Γ_α with s = sup_n s_n and t = sup_n t_n. Then, using the property (iii') of projectional skeletons, we deduce that for any x ∈ X we have P_s P_t x = lim_n lim_m P_{s_n} P_{t_m} x = lim_n lim_m P_{t_m} P_{s_n} x = P_t P_s x, thus P_s P_t = P_t P_s. Since the limit induction step is obvious, the proof is complete.

(3)⇒(1) Assume (3) holds. Without loss of generality assume that Γ′ = Γ. We are going to prove that the assertion (6) of Theorem 3.1 holds. This can be shown by repeating the proof of the implication (5)⇒(6) of Theorem 3.1 with a few differences. We use again transfinite induction. The first step, the separable case, is exactly the same. In the induction step we build a PRI (P_α)_{α≤κ} using Proposition 2.7. Observe that any P_α is of the form P_{A_α} for some A_α ⊂ Γ directed. By the assumption the projection P_A is σ(X, D)-to-σ(X, D) continuous. So, by Proposition 2.4(2)⇒(6) and Proposition 2.7(c) the space (P_{α+1} − P_α)X admits a 1-projectional skeleton with induced subspace D_α^{α+1} of the same form as in the proof of Theorem 3.1.
Finally, to be able to use the transfinite induction, it remains to show that the skeleton on (P α+1 −P α )X satisfies the assumption of (3) as well. So, recall that the skeleton is of the form where ∆ is a suitable cofinal σ-closed subset of (A α+1 ) σ . So, fix any directed set B ⊂ ∆. Recall that ∆ is chosen in such a way that (P α+1 − P α )X is invariant for P s for any s ∈ ∆ (see Proposition 2.3(iv)). It follows that (P α+1 − P α )X is invariant for P B as well. Thus, is a projection on (P α+1 − P α )X and it is enough to show that this projection is σ((P α+1 − P α )X, D α+1 α )to-σ((P α+1 − P α )X, D α+1 α ) continuous. To this end we will check the validity of the property (1) of Indeed, the first equality is just the definition of an adjoint mapping; the second one follows from the fact that x ∈ (P α+1 − P α )X; the third one uses the invariance of (P α+1 − P α )X for P B and the choice of y * ; the fourth one is again the use of the definition of an adjoint mapping; and the last one follows from the fact that x ∈ (P α+1 − P α )X. Finally, since P B is σ(X, D)-to-σ(X, D) continuous by the very assumption of (3) and y * ∈ D, Proposition 2.4 yields that P * B y * ∈ D. It follows that which completes the proof of the validity of the condition (1) continuous. This completes the proof. Remark 3.6. An alternative proof of the implication (3)⇒(1) in Theorem 3.4 may be done using [31,Theorem 23]. Indeed, let us use transfinite induction on dens X. The separable case is obvious, so assume that κ is an uncountable cardinal and the statement holds whenever dens X < κ. As above, without loss of generality assume Γ ′ = Γ. We build a PRI (P α ) α≤κ using Proposition 2.7. Observe that any P α is of the form P Aα for some A α ⊂ Γ directed. 
Given α < κ, the family (P s | PαX ) s∈(Aα)σ is a 1-projectional skeleton on P α X (by Proposition 2.3(iii)) and, due to Proposition 2.4(2)⇒(6), the respective induced subspace is Moreover, given any directed B ⊂ (A α ) σ , the projection P B is σ(X, D)-to-σ(X, D) continuous (by the assumptions of (3)), hence P * B (D) ⊂ D by Proposition 2.4. Similarly as in the above proof we show that (P B | Pα X) * D α ⊂ D α and using Proposition 2.4 we deduce that P B | PαX is σ(P α X, D α )-to-σ(P α X, D α ) continuous. Thus, using the induction hypothesis, D α is a Σ-subspace of (P α X) * . By [31, Theorem 23] we deduce that D is contained in a Σ-subspace of X * , thus D itself is a Σ-subspace (as D is weak*-countably closed and any Σ-subspace is weak*-countably tight).

Corollary 3.7. Let X be a Banach space with a full projectional skeleton, i.e. having a projectional skeleton whose induced subspace is X * . Then X * is a Σ-subspace of itself, i.e., X is weakly Lindelöf determined.

Proof. This follows immediately from Theorem 3.4(3)⇒(1), as the topology σ(X, D) now coincides with the weak topology on X and any bounded linear operator is automatically weak-to-weak continuous.

Equivalents of a projectional skeleton

In this section we study characterizations of subspaces induced by a possibly non-commutative projectional skeleton. They are collected in Theorem 4.1 which can be viewed as a non-commutative version of Theorem 3.1. However, as we will see, the analogy is not complete; some problems remain open. Before formulating the theorem we give the definitions of two more notions used in the statement or in the proof. A topological space T is called monotonically retractable if there is an assignment and, moreover, the mapping N is ω-monotone.
Further, a topological space T is called monotonically Sokolov if there is an assignment where F (T ) denotes the family of all the nonempty closed subsets of T , such that for any is a countable family of subsets of T and for any open subset U ⊂ T and any ; and, moreover, the mapping N is ω-monotone. Monotonically retractable spaces were introduced in [39], monotonically Sokolov spaces in [40]. Monotonically retractable spaces are closely related to retractional skeletons [14,7]; monotonically Sokolov spaces can be viewed, in a sense, as a non-commutative version of primarily Lindelöf spaces (cf. the next theorem and the questions in the last section).

Theorem 4.1. Let X be a Banach space and D ⊂ X * a norming subspace. The following assertions are equivalent. (1) D is induced by a projectional skeleton in X. (2) There is an ω-monotone mapping ψ : and for any A ∈ [X] ≤ω the following properties hold:

Proof. Since the assertions are not affected by renorming, we may and shall assume that D is 1-norming.

(1)⇒(2) Let (P s ) s∈Γ be a projectional skeleton on X such that the respective induced subspace is D. By Lemma 1.3 we can assume without loss of generality that it is a 1-projectional skeleton. Let us fix a mapping σ : We continue by choosing for each s ∈ Γ a countable set η(s) ⊂ D ∪ X such that This choice is possible as P s X is separable and P * s X * is weak*-separable for each s ∈ Γ. Further, the set-valued version η : Clearly ψ 0 is an ω-monotone mapping. For n ∈ N and A ∈ [X ∪ D] ≤ω set ψ n (A) = ψ 0 (ψ n−1 (A)) and ψ(A) = ∪ n ψ n (A). It is clear that ψ is ω-monotone. The property (i) is obvious; the property (ii) follows from the fact that D is induced by a skeleton and hence weak*-countably closed. It remains to prove the properties (iii) and (iv). To this end fix A ∈ [X ∪ D] ≤ω and set C = υ(σ(ψ(A))). Then C is a countable up-directed subset of Γ, hence C has a supremum s ∈ Γ. We claim that Next observe that The property (iii) now easily follows.
To prove the property (iv) fix , so the respective assignment is an isometry, thus it is one-to-one. It is also onto, as for any y * ∈ (P s X) * we have x * = y * • P s ∈ X * , P * s x * = x * and x * | PsX = y * .

(2)⇒(3) Let ψ be the mapping provided by (2). Further, for each . Then θ is an ω-monotone map. The properties (i)-(iii) follow immediately from the properties of ψ. It remains to prove the formula for D. To this end set We perform the following inductive construction. We start by setting A 1 = ψ({x * }). Given A n we proceed as follows. Set A = ∪ n A n . Then ψ(A) = A, x * ∈ A and x * n ∈ A for n ∈ N. Further, the construction yields

(3)⇒(1) Let θ be the mapping provided by (3). and X A = span(X ∩ θ(A)). By the property (iv) and Lemma 3.2 there is a bounded linear projection P A : ≤ω is a projectional skeleton on X. Indeed, the properties (i) and (iv) are obvious, the property (ii) has just been proved and the property (iii) follows from ω-monotonicity of θ. Moreover, the subspace induced by this skeleton is exactly D by the property (ii) of θ. This completes the proof.

(1)⇒(4) Firstly, D is weak*-countably closed, being induced by a skeleton. To prove that (X, σ(X, D)) is monotonically Sokolov we shall construct the respective mappings using similar ideas as in the proof of (1)⇒(2). Let (P s ) s∈Γ be a projectional skeleton on X such that the respective induced subspace is D. By Lemma 1.3 we can assume without loss of generality that it is a 1-projectional skeleton. Let φ : Γ × Γ → Γ and υ : [Γ] ≤ω → [Γ] ≤ω be the mappings defined in the proof of (1)⇒(2). For any s ∈ Γ set where U (x, r) denotes the open ball centered at x with radius r (in the norm of X). It is clear that N 0 (s) is a countable family of subsets of X and that the set-valued version of N 0 , considered as a mapping from [Γ] ≤ω to [P(X)] ≤ω , is ω-monotone. Let F (X) denote the family of all the nonempty σ(X, D)-closed subsets of X.
Let us define by induction ω-monotone mappings φ n : It is clear that φ 1 is ω-monotone. Further, given an ω-monotone mapping φ n : Note that the range of each P s is norm-separable, hence the formula makes sense. Moreover, the mapping φ n+1 is ω-monotone. It follows that also the mappings Finally, set N (A) = ∪ s∈Γ(A) N 0 (s). Then N (A) is an outer network for r A X and the assignment N is ω-monotone. This completes the proof.

The class of monotonically Sokolov spaces is stable to taking closed subsets and countable products [40, Theorem 3.4(c,d)]. Further, compact metric spaces are monotonically Sokolov for trivial reasons. It follows that the class of continuous images of monotonically Sokolov spaces is stable to the same operations and, moreover, to taking continuous images and countable unions. Indeed, a countable union is a continuous image of a countable topological sum and monotonically Sokolov spaces are obviously stable to taking countable topological sums. Therefore the proof can be done by copying the proof of the implication (8)⇒(9) of Theorem 3.1.

(6)⇒(1) Fix a monotonically Sokolov space T and a continuous surjection F : T → (X, σ(X, D)). By [40, Theorem 3.5] the space C p (T ) is monotonically retractable. Define the mapping G : it is a closed mapping). Hence D ∩ B X * is monotonically retractable and so it admits a full retractional skeleton by [14, Theorem 1.1] (see [7, Theorem 4.3] for an elementary proof). By [40, Theorem 3

Remark 4.2. (a) Analogues of the assertions (1) and (2) of Theorem 3.1 are missing. Indeed, there is up to now no known analogue of (1). As for (2), the existence of a projectional generator is a sufficient condition for the existence of a projectional skeleton, but it is not clear whether it is necessary. Related problems are discussed in the last section.
(b) Assertions (2) and (3) of the previous theorem can be viewed as non-commutative analogues of the assertions (3) and (4)

(d) The assertions (4)-(6) of the previous theorem can be viewed as a noncommutative analogue of the assertion (9) of Theorem 3.1. Monotonically Sokolov spaces (or, more precisely, their continuous images) serve as a noncommutative analogue of primarily Lindelöf spaces. Some more discussion on the relationship of these two classes is contained in the last section.

(f) As a consequence of Theorem 4.1 we get that Theorem 18 (and hence Corollary 19) of [31] is true in spite of the gap in the proof pointed out in the proof of Lemma 1.4 above. Indeed, assume that D is a subspace of X * induced by a projectional skeleton on X. By Theorem 4.1 we get that (X, σ(X, D)) is monotonically Sokolov, hence C p (X, σ(X, D)) is monotonically retractable by [40, Theorem 3.5], so it has countable tightness by [14, Fact 2.1(g)]. Since (D, w * ) is homeomorphic to a subset of C p (X, σ(X, D)), it has countable tightness as well.

The next corollary is one of the promised results on the relationship of Markushevich bases and projectional skeletons. It is an immediate consequence of the implication (5)⇒(1) of Theorem 4.1. is monotonically Sokolov in the topology σ(X, D), then D is induced by a projectional skeleton on X. Let us point out that it is not clear whether the converse implication holds as well. This problem is discussed in more detail in the last section.

The second result is the following improvement of the assertion (3) of Theorem 4.1. and for any A ∈ [Λ] ≤ω the following properties hold:

Proof. Let θ : [X] ≤ω → [X ∪ D] ≤ω be the mapping from Theorem 4.1(3). We will modify it using the Markushevich basis. To this end we define one more mapping. For any x ∈ X let C(x) ⊂ Λ be a countable set such that x ∈ span{x α ; α ∈ C(x)}.
Further, for any A ∈ [Λ] ≤ω we set Then the mapping It is clear that ϕ is an ω-monotone mapping such that A ⊂ ϕ(A) and To prove the property (iii) of ϕ fix A ∈ [Λ] ≤ω . Then Indeed, the inclusion ⊂ is obvious. To see the converse observe that So, the property (iii) of ϕ follows immediately from the property (iii) of θ. It remains to prove the formula for D. The inclusion ⊃ follows from the properties of θ. One possibility to prove the converse inclusion is to observe that the set on the right-hand side is a weak*-countably closed subspace which separates points of X, hence it is weak*-dense. Since D has countable tightness in the weak* topology by Remark 4.2(f), the conclusion follows.

Remark 4.5. In the previous proposition no special assumption on the Markushevich basis is needed; just the mere existence of some Markushevich basis is used. In fact, a similar statement can be formulated for an arbitrary linearly dense subset of X in place of {x α ; α ∈ Λ}. The proof of such a statement would be essentially the same. However, the fact that we start with a Markushevich basis can be used to construct a simple projectional skeleton (in the sense of [14, Section 4]) by applying the method of the proof of the implication (3)⇒(1) of Theorem 4.1 to the mapping ϕ in place of θ. Again, the only important thing is the existence of some Markushevich basis (this corresponds to the methods of [14]).

Examples of spaces with a noncommutative projectional skeleton

While 1-Plichko spaces, i.e., spaces with a commutative 1-projectional skeleton, appear often and naturally in mathematics (see [27,4,4,6]), the supply of spaces with a non-commutative skeleton is not so large. Up to now they include spaces of continuous functions on ordinal segments, spaces of continuous functions on certain trees equipped with the coarse-wedge topology and duals to Asplund spaces. And, of course, spaces made by certain standard constructions starting from the mentioned examples.
In this section we provide an analysis of the three mentioned classes. We focus on an explicit description of projectional skeletons, Markushevich bases and projectional generators on these spaces. The related open problems are discussed in the last section. We also show the applications of Theorem 3.4 in these cases. Since two of these classes are spaces of continuous functions, in the first subsection we recall some notions and facts on retractions on compact spaces.

5.1. Retractions on compact spaces. If K is a compact Hausdorff space, C(K) denotes the space of (real- or complex-valued) continuous functions on K equipped with the supremum norm. Its dual C(K) * is, by the Riesz representation theorem, canonically isometric to M(K), the space of (real- or complex-valued) Radon measures on K equipped with the total variation norm. In the sequel we will identify C(K) * with M(K).

An analogue of a projectional skeleton in the realm of compact spaces is the notion of a retractional skeleton introduced in [32]. We recall that a retractional skeleton on a compact Hausdorff space K is a family (r s ) s∈Γ of continuous retractions on K satisfying the following conditions. (i) r s (K) is metrizable for each s ∈ Γ; (ii) r s • r t = r t • r s = r s whenever s, t ∈ Γ are such that s ≤ t; (iii) if (s n ) is an increasing sequence in Γ, then s = sup n s n exists in Γ and r s (x) = lim n r sn (x) for x ∈ K; (iv) lim s∈Γ r s (x) = x for x ∈ K. is said to be induced by the skeleton.

A notion related to a Σ-subspace is that of a dense Σ-subset. Recall that A is a Σ-subset of a compact space K if there is a homeomorphic injection h : K → R Γ such that A compact having a dense Σ-subset is called Valdivia. If K is even a Σ-subset of itself, it is called Corson. By [32, Theorem 6.1] (more precisely by its proof) a dense subset of K is a Σ-subset if and only if it is induced by a commutative retractional skeleton.
Next we recall a few facts on the relationship of retractions on K with projections on C(K). We start with the following well-known result.

Lemma 5.1. Let K be a compact Hausdorff space and let r : K → K be a continuous retraction. Define the operator P : C(K) → C(K) by P f = f • r, f ∈ C(K). Then the following assertions hold. (a) P is a linear projection of norm one. (b) is an isometric isomorphism of P (C(K)) onto C(r(K)). I.e., it is a linear onto isometry, which moreover preserves multiplication and in the complex case also complex conjugation. (c) The adjoint projection P * is given by the formula

Proof. The assertions (a) and (b) are well known and obvious. The assertion (c) is also easy and known; let us give a proof for completeness. Fix µ ∈ M(K). Then r(µ) is a well-defined Borel measure on K. Moreover, r(µ) is a Radon measure; this is obvious in case µ is nonnegative, any real-valued measure is a difference of two non-negative ones and any complex-valued measure is a linear combination of four non-negative ones. So, r(µ) ∈ M(K). Moreover, by the rule of integration with respect to the image measure we have, for any f ∈ C(K), Finally, let us show the last equality. Let µ ∈ M(K) and B ⊂ K \ r(K) be a Borel set. Then Conversely, assume that µ belongs to the set on the right-hand side. Then for any B ⊂ K Borel we have r(µ)(B) = µ(r −1 (B)) = µ(r −1 (B ∩ r(K))) = µ(r −1 (B ∩ r(K)) ∩ r(K)) = µ(B ∩ r(K)) = µ(B), so µ = r(µ) = P * (µ).

Lemma 5.2. Let K be a compact Hausdorff space and let (r s ) s∈Γ be a net of continuous retractions on K which pointwise converges to a continuous retraction r and, moreover, Define the projections Then P s P t = P t P s = P s whenever s ≤ t and, moreover, the net (P s ) converges to P in the strong operator topology.

Proof.
The equalities P s P t = P t P s = P s for s ≤ t are obvious. Further, it is clear that for any f ∈ C(K) and any It remains to show that this can be strengthened to the norm convergence. To this end set A = ∪ s∈Γ P s (C(K)). By Lemma 5.1(b) we know that each P s (C(K)) is an algebra containing constant functions and stable to complex conjugation in the complex case. Since A is a directed union of such algebras, it is an algebra with the same properties. Further, it is a subalgebra of P (C(K)) and it separates points of r(K). Indeed, if x, y ∈ r(K) are different, then there is s ∈ Γ with r s (x) ≠ r s (y). By the Urysohn lemma there is g ∈ C(r s (K)) with g(r s (x)) ≠ g(r s (y)). Then g • r s ∈ A and separates x and y. So, by the Stone-Weierstrass theorem (together with Lemma 5.1(b) applied to r) we see that A is norm dense in P (C(K)). To complete the proof fix f ∈ C(K) and ε > 0. By the previous paragraph there is g ∈ A with ‖P f − g‖ < ε. Fix some s ∈ Γ with g = P s (g). Then for each t ∈ Γ, t ≥ s we have where we used the equalities P t g = P t P s g = P s g = g and P t P = P . This completes the proof.

The first part of the assertion (a) of the following proposition is stated in [31, Proposition 28]. It is claimed there that the assertion is clear. We add an easy proof using Lemma 5.2.

Proposition 5.3. Let K be a compact Hausdorff space and let (r s ) s∈Γ be a retractional skeleton on K. Denote by S the respective induced subset of K. Define P s (f ) = f • r s for f ∈ C(K) and s ∈ Γ. Then the following hold. (a) (P s ) s∈Γ is a 1-projectional skeleton on C(K) and the respective induced subspace is D = {µ ∈ M(K); spt µ is a separable subset of S}. (b) If the skeleton (r s ) s∈Γ is commutative, then so is the skeleton (P s ) s∈Γ . (c) If D is a Σ-subspace, then there is a cofinal σ-closed subset Γ ′ ⊂ Γ such that r s • r t = r t • r s for s, t ∈ Γ ′ . So, in particular, S is induced by a commutative retractional skeleton on K, hence it is a Σ-subset of K.
(a) Let us check the properties (i)-(iv) of projectional skeletons. Given s ∈ Γ, P s (C(K)) is isometric to C(r s (K)) (by Lemma 5.1(b)), so it is separable (as r s (K) is metrizable). Hence the property (i) is fulfilled. The properties (ii) and (iii) (in fact (iii')) follow from the respective properties of a retractional skeleton using Lemma 5.2. Further, by the property (iv) of retractional skeletons and Lemma 5.2 it follows that f = lim s∈Γ P s f, f ∈ C(K). So, given f ∈ C(K) one can find an increasing sequence (s n ) in Γ such that ‖P sn f − f ‖ < 1/n. Let s = sup n s n . Since P sn f → P s f , necessarily P s f = f . This completes the proof of the property (iv) of projectional skeletons. By Lemma 5.1(a) it is even a 1-projectional skeleton.

Let D be the subspace of M(K) induced by the skeleton. If µ ∈ D, then there is s ∈ Γ with P * s µ = µ. By Lemma 5.1(c) the support of µ is contained in r s (K), so it is a separable subset of S. Conversely, suppose that spt µ is a separable subset of S. Fix a countable dense set C ⊂ spt µ. Then there is s ∈ Γ such that r s (x) = x for x ∈ C. It follows that spt µ ⊂ r s (K), thus by Lemma 5.1(c) we deduce µ ∈ P * s M(K) ⊂ D. This completes the proof of the formula for D.

Lemma 5.4. Let A ⊂ [0, η] be a closed set containing 0. The following assertions are equivalent. (1) Any isolated point of A is an isolated ordinal. (2) A ∩ I(η) is a dense subset of A. (3) The mapping r A defined by is a continuous retraction of [0, η] onto A.

Proof. Let us first remark that the mapping r A from (3) (1) it is an isolated ordinal. Thus U ∩ A intersects I(η), which completes the proof of (2).

(2)⇒(3) As remarked above, r A is a well-defined retraction of [0, η] onto A. Hence it remains to prove that it is continuous. It is clearly continuous at each isolated ordinal. So, assume x ∈ [0, η] is a limit ordinal and let us show that r A is continuous at x. Let U be any neighborhood of r A (x). By the definition of the order topology there is some y < r A (x) such that (y, r A (x)] ⊂ U .
Since r A (x) ∈ A and (y, is an open neighborhood of x and

(3)⇒(1) Let us proceed by contraposition. Assume (1) fails, hence there is an isolated point x ∈ A which is a limit ordinal. Then y = sup(A ∩ [0, x)) < x and y ∈ A as A is closed. Thus r A (x) = x and r A (z) = y for z ∈ [y, x), which shows that r A is not continuous at x.

The family of subsets of [0, η] satisfying the equivalent conditions of the previous lemma is very important for the study of retractions on [0, η]. Therefore we denote it by A(η). I.e., we set It is clear that the family A(η) is closed under taking finite unions, so it is up-directed by inclusion. We continue by investigating its properties. The following lemma is trivial.

The following lemma establishes a continuity-like property of the family A(η).

Lemma 5.6. Let A ′ ⊂ A(η) be a nonempty subset up-directed by inclusion. Then and, moreover,

Proof. It is clear that B ∈ A(η). It remains to prove the equality. To this end fix any x ∈ [0, η] and any U , a neighborhood of r B (x). By the definition of the order topology there is y < r B (x) such that (y, and hence

Proof. It is clear that A ω (η) is closed under taking finite unions, so it is up-directed by inclusion. Each r A is a continuous retraction by Lemma 5.4. Let us prove the properties (i)-(iv) of a retractional skeleton. We have r A ([0, η]) = A, which is a countable compact, hence metrizable. This proves the property (i). The property (ii) follows from Lemma 5.5, the property (iii) from Lemma 5.6 (using the fact that the closure of a countable set of ordinals is countable). The property (iv) follows from Lemma 5.6 applied to A ′ = A ω (η) as clearly A ω (η) is dense in [0, η] (it contains all the isolated ordinals). Finally, the subset induced by the skeleton is Then S(η) contains no ordinal of uncountable cofinality. Indeed, suppose that there is some A ∈ A ω (η) containing some x of uncountable cofinality.
Since A is countable, x is an isolated point of A, so by the definition of A(η) it must be an isolated ordinal, which is a contradiction. Conversely, if x ∈ [0, η] is an isolated ordinal, then {0, x} ∈ A ω (η), hence x ∈ S(η). Finally, assume that x is a limit ordinal of countable cofinality. Then there is a strictly increasing sequence (x n ) of ordinals with supremum x. Then {0, x} ∪ {x n + 1; n ∈ N} ∈ A ω (η), hence x ∈ S(η).

Let us continue by investigation of the associated projections on C([0, η]). By Lemma 5.1 we know that it is a norm-one projection. (b) Let A ′ ⊂ A ω (η) be up-directed. Then the projection P A ′ defined by (2.1) coincides with the projection P A ′ defined above.

Proof. The assertion (a) follows immediately from Proposition 5.7 and Proposition 5.3; the assertion (b) follows from Lemma 5.6 and Lemma 5.2.

Next we are going to characterize ordinals η for which C([0, η]) is 1-Plichko. This is not a new result (see the comments in the proof) but we wish to provide a proof using Theorem 3.4. To this end we first need to characterize σ(C([0, η]), D(η))-continuity of the projection P * A . This is done in the following easy lemma. (i) Let A 0 ∈ A ′ be arbitrary.

(3)⇒(2) We will again use Theorem 3.4. If η ≤ ω 1 , then the family A ω itself witnesses that D is a Σ-subspace (using Theorem 3.4). If η > ω 1 , then the whole family does not work; we need to restrict to a cofinal σ-closed subfamily. To this end fix a bijection ξ : I(ω 1 ) → I(η) such that ξ(0) = 0 and set Then (A α ) α<ω1 is a strictly increasing transfinite sequence in A ω . Moreover, the family {A α ; α < ω 1 } is clearly a cofinal σ-closed subset of A ω . Since it is linearly ordered, the respective projections commute, hence the assertion (2) of Theorem 3.4 is fulfilled. (Note that the validity of the assertion (3) of Theorem 3.4 is in this case also obvious due to the characterization from Lemma 5.9.)
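The statement of Lemma 5.5 is elided above, but from its use (property (ii) of the retractional skeleton in the proof above) it presumably asserts that r A ◦ r B = r B ◦ r A = r A whenever A ⊂ B in A(η). With the (assumed) formula r A (x) = max(A ∩ [0, x]) from Lemma 5.4, the identity is purely order-theoretic and can be checked on a finite toy segment; the sets A, B and the segment below are our illustrative choices, not the paper's:

```python
def r(A, x):
    """The (assumed) retraction of Lemma 5.4: send x to max(A ∩ [0, x])."""
    return max(a for a in A if a <= x)

# A ⊂ B, both containing 0; the identities below are the presumed content of Lemma 5.5.
A = {0, 2, 5}
B = {0, 1, 2, 5, 7}
segment = range(11)  # a finite stand-in for an ordinal segment [0, 10]

# r_A is a retraction onto A ...
assert all(r(A, x) in A and r(A, r(A, x)) == r(A, x) for x in segment)
# ... and the family is compatible: r_A ∘ r_B = r_B ∘ r_A = r_A.
assert all(r(A, r(B, x)) == r(B, r(A, x)) == r(A, x) for x in segment)
```

The compatibility holds for the simple reason used repeatedly in the text: r A (x) already lies in A ⊂ B, so r B fixes it, and since r B (x) dominates every element of A ∩ [0, x], also r A (r B (x)) = r A (x).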
We continue by describing a canonical Markushevich basis and a projectional generator on C([0, η]). The second property follows from the stronger property defining strong Markushevich bases. Fix f ∈ C([0, η]). Set A = {α ∈ I(η); ν α (f ) ≠ 0}, M = {g α ; α ∈ A}. The proof will be complete if we show that f ∈ span M . This will be done by the Hahn-Banach theorem. Let µ ∈ M([0, η]) be such that µ| M = 0. We will show that µ(f ) = 0 as well. If f = 0, the conclusion is obvious. So, suppose that f is not the constant zero function and set Our aim is to show that η ∈ J. The first step is to show that J ≠ ∅. Note that β is well defined as f is continuous. If f (0) = 0, then β < η and β + 1 ∈ A, so g β+1 ∈ M . Thus β ∈ J.

Let us continue with the proof of the assertion (b). By Lemma 5.12 we can work with the topology τ p (S(η)). First observe that H is τ p (S)-closed. Indeed, a continuous function belongs to H if and only if it is nondecreasing and attains only values 0 and 1. Since S(η) is dense in [0, η] we have This formula obviously implies that H is τ p (S(η))-closed. This completes the proof.

Note that the Markushevich basis from the preceding proposition satisfies the properties from Theorem 3.1 if and only if η ≤ ω 1 . However, D is a Σ-subspace if and only if η < ω 2 . Therefore for η ∈ (ω 1 , ω 2 ) there should be another Markushevich basis satisfying the respective properties. In fact, the Markushevich basis from the previous proposition coincides with the Markushevich basis canonically constructed using Theorem 1.2 if and only if η is a cardinal number. Next we are going to describe such a Markushevich basis for general η. , the family (A α ) α≤κ is strictly increasing, and A λ = ∪ α<λ A α if λ ≤ κ is limit. Moreover, A α ∈ A for each α ≤ κ. Therefore this family generates a PRI on C([0, η]) and we will describe the Markushevich basis provided by this PRI.
To define the basis we will use the following two auxiliary functions: The following lemma summarizes basic properties of this function. x] is nonempty (it contains 0) and it is a closed set. So, it has a maximum. Since x ∉ A α , the maximum is strictly less than x. This shows that p(x, α) is well defined. Let us continue by looking at z(x, α). hence y is an isolated point of A α . Since A α ∈ A, y is an isolated ordinal and z(x, α) = y − 1. The assertion (ii) is obvious. hence by the definition of the function z(·, ·) we deduce z(x, β) ≥ z(x, α). Moreover, p(x, β) ≤ p(x, α) as Using the functions z(·, ·) and p(·, ·) we are going to define a Markushevich basis (f α , µ α ) α∈I(κ) .

Proof. (a) Observe that (P Aα ) α≤κ is a PRI on C([0, η]) (more precisely, it satisfies all the properties of a PRI except that P A0 is not the zero projection but a one-dimensional projection; this difference does not affect the applications). We will show that the family (f α , µ α ) α∈I(κ) is the Markushevich basis resulting from this PRI in the sense of [15, Proposition 6.2.4].

(c) By Lemma 5.12 we may work with the topology τ p (S). Set Then clearly H ⊂ F . Moreover, F is τ p (S(η))-closed. Indeed, as S(η) is dense in [0, η], we have Now let us analyze the τ p (S(η))-accumulation points of H. We start by proving (c-i). Let U be a τ p (S(η))-neighborhood of 0. It follows that there is a finite set C ⊂ S(η) such that {f ∈ C([0, η]); f | C = 0} ⊂ U. If η has uncountable cofinality, then max C < η. So, we can find x ∈ I(η) such that x > max C. Then clearly f ξ −1 (x) ∈ U ∩ (H \ {0}). Next assume that κ has uncountable cofinality. For each x ∈ C limit choose a countable set B(x) ⊂ I(η) with supremum x. Then So, fix some α ∈ I(κ) strictly greater than the left-hand side. This completes the proof of (c-i).

Let us continue by proving (c-ii). By the above any accumulation point of H is a characteristic function of a clopen interval in [0, η].
We distinguish three cases of such intervals. , where a ∈ [0, η] is an ordinal with uncountable cofinality and b ≥ a + 1. Let us define θ and d as in the statement of (c-ii) and set Note that the definition of liminf together with the fact that ordinals are well ordered yields Since ξ −1 is one-to-one, θ is necessarily a limit ordinal. It follows that the following construction can be performed.

The assertion (c) of the previous proposition indicates that the properties of a Markushevich basis constructed from a PRI may depend on the concrete choice of PRI (i.e., on the choice of the mapping ξ). The next example provides strong evidence of this dependence. (b) Assume moreover η ≥ ω 2 . Define a bijection ξ : I(η) → I(η) by the formula

Proof. The assertion (a) is obvious. Let us prove the assertion (b). Observe that in this case we have for other α ∈ I(η). The fact that all the nonzero elements of H are isolated points of H follows from Proposition 5.14(c), but it can be easily seen directly. Indeed, if λ < η has uncountable cofinality, set Moreover, let A ∈ A ω be such that

The assertion (ii) of the previous example shows that the topological properties of a Markushevich basis constructed from a PRI can be very bad. Related problems are discussed in the last section.

Continuous functions on trees. Further examples of Banach spaces with a non-commutative projectional skeleton are spaces of continuous functions on certain trees equipped with the coarse wedge topology, studied for example in [41,42]. Let us start by recalling the basic setting. A tree is a partially ordered set (T, ≤) such that for any t ∈ T the set {s ∈ T ; s < t} is well ordered. A tree (T, ≤) is called rooted if it has a unique minimal element (called the root of T and usually denoted by 0). T is called chain complete if any chain in T (i.e., any totally ordered subset of T ) has a supremum (i.e., the smallest upper bound). By a tree we will always mean a rooted chain-complete tree.
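A concrete tree may help fix the definitions (our illustration, not taken from the paper): the full dyadic tree of height ω + 1,

```latex
T \;=\; 2^{\le\omega} \;=\; \bigl\{\, s : \alpha \to \{0,1\} \;\bigm|\; \alpha \le \omega \,\bigr\},
\qquad s \le t \iff t \text{ extends } s .
```

Here the root is the empty sequence, every chain has a supremum (namely its union), so T is a rooted chain-complete tree; each sequence of finite length has exactly two immediate successors, while the sequences of length ω are the maximal points of T.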
Let (T, ≤) be a tree. For any t ∈ T we set t = {s ∈ T ; s ≤ t}, We will further need the following two important subsets of T .

The coarse wedge topology on a tree T is the topology on T whose subbase is the family It is easy to check that a neighborhood basis of t ∈ T is the family in case ht(t, T ) is an isolated ordinal; and the family in case ht(t, T ) is a limit ordinal. Any tree equipped with the coarse wedge topology is a compact Hausdorff space [34, Corollary 3.5]. This topology is one of many topologies studied on trees; see [34]. It also coincides with the path topology considered in [44, pp. 288-289] or [45]. Let us explain it a bit. If T is a tree (not necessarily rooted or chain complete), we can consider its path space, i.e., the set of all the initial totally ordered segments embedded as characteristic functions into the product space {0, 1} T . In this way we obtain a compact Hausdorff space. Moreover, it is easy to check that the class of path spaces of arbitrary trees canonically coincides with the class of rooted chain complete trees equipped with the coarse wedge topology.

Any ordinal segment [0, η] is a special case of a tree with the coarse wedge topology. Therefore the results of the previous section can be viewed as a special case of the results in the current section. We will see that the situation of trees is more complicated. Let us start with the following analogue of Lemma 5.4.

Lemma 5.16. Let T be a tree equipped with the coarse wedge topology. Let A ⊂ T be a closed set containing 0. The following are equivalent. (1) The mapping r A : T → T defined by is a continuous retraction of T onto A. (2) For each x ∈ A on a limit level (i.e., such that ht(x, T ) is a limit ordinal) we have x = sup{y ∈ A; y < x}.

Proof. First observe that the mapping r A is a well-defined retraction of T onto A; this follows easily from the assumptions that A is closed and contains 0. Hence the point of the assertion (1) is the continuity of r A .
(1)⇒(2) Assume r A is continuous and x ∈ A is on a limit level. Note that x is order isomorphic and homeomorphic to an ordinal segment and r A | x coincides with the mapping r A∩ x from Lemma 5.4. Hence we can conclude by Lemma 5.4(3)⇒(1).

(2)⇒(3) Fix any x ∈ A. If x ∈ I(T ), then the equality trivially holds; x is even the maximum of the set on the right-hand side. So, assume that x is on a limit level. Fix an arbitrary y < x. By (2) we know that the order interval (y, x) intersects A, hence we can define z = min((y, x) ∩ A) (recall that initial segments of T are well ordered). Since z ∈ A and (y, z) ∩ A = ∅, another use of (2) yields that z ∈ I(T ). This completes the proof.

(3)⇒(1) We will show that r A is continuous at each point. So, fix any x ∈ T . There are the following possibilities:

Case 1: x ∉ A. Since A is closed, there is a basic neighborhood W F y of x (recall that y ≤ x is on an isolated level and F ⊂ ims(x) is finite) such that W F y ∩ A = ∅. Then r A is constant on W F y , so it is continuous at x.

Case 2: x ∈ A ∩ I(T ). Then r A (x) = x. So, fix any open set U containing x. By the definition of the topology there is a finite set F ⊂ ims(x) such that W F x ⊂ U . Since r A (W F x ) ⊂ W F x ⊂ U , the proof of continuity at x is complete.

Case 3: x ∈ A, x on a limit level. Again, r A (x) = x. Fix any open set U containing x. By the definition of the topology there is some y < x on an isolated level and a finite set F ⊂ ims(x) such that W F y ⊂ U . By (3) there is some z ∈ (y, x) ∩ A ∩ I(T ). Then W F z is a neighborhood of x and r A (W F z ) ⊂ W F z ⊂ U .

Let A 0 = A 0 (T ) denote the family of all the closed subsets of T containing 0 and satisfying the equivalent assertions of Lemma 5.16. Then we have the following analogue of Lemma 5.5. The proof is easy; either one can copy the argument of Lemma 5.5 or one can apply this lemma to the initial segments of T . Here we reached the limits of easy analogies.
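Lemma 5.16 leaves the defining formula of r A elided; by analogy with Lemma 5.4 it is presumably r A (x) = max{y ∈ A; y ≤ x}, the deepest point of A below x (well defined because A is closed, contains 0, and initial segments are well ordered chains). On a finite rooted tree this can be experimented with directly; the tree and the sets A, B below are our illustrative choices, and the final identity mirrors the elided analogue of Lemma 5.5 for the family A 0 (T ):

```python
# A finite rooted tree given by a parent map; the root is 0, and x ≤ y means
# that x lies on the path from the root to y.
parent = {1: 0, 2: 0, 3: 1, 4: 1, 5: 2}

def ancestors(x):
    """The initial segment of x, listed from x down to the root 0."""
    chain = [x]
    while x in parent:
        x = parent[x]
        chain.append(x)
    return chain

def r(A, x):
    """Assumed formula of Lemma 5.16: the deepest element of A below x.

    Well defined since the ancestors of x form a chain and 0 is in A."""
    return next(y for y in ancestors(x) if y in A)

A = {0, 1}
B = {0, 1, 2, 4}
nodes = range(6)

# r_A is a retraction onto A ...
assert all(r(A, x) in A and r(A, r(A, x)) == r(A, x) for x in nodes)
# ... and for A ⊂ B the retractions are compatible: r_A ∘ r_B = r_B ∘ r_A = r_A.
assert all(r(A, r(B, x)) == r(B, r(A, x)) == r(A, x) for x in nodes)
```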
An analogue of Lemma 5.6 fails for the family A_0. It is witnessed by the following example. So, to get results analogous to those for ordinal segments one should proceed more carefully. Firstly, while any ordinal segment admits a retractional skeleton, for trees this is not always the case. We recall the following result of J. Somaglia (see [41, Theorem 3.1] and [42, Theorem 5.2]).

Proposition 5.19. Let T be a tree. The following assertions are equivalent. (1) T admits a retractional skeleton.

A retractional skeleton constructed in [41] is formed by the retractions r_A where A runs through a carefully chosen subfamily of A_0. Using similar ideas we present a simplified, more canonical approach. We start by restricting ourselves to a special case. The following lemma shows that this can be done without loss of generality.

Lemma 5.20. Let T be a tree such that ims(x) is finite for each x ∈ T with cf(x) uncountable. Then there is a new partial order ⪯ on T satisfying the following properties:
• ⪯ is finer than ≤, i.e., x ⪯ y whenever x, y ∈ T satisfy x ≤ y.
• (T, ⪯) is a rooted chain complete tree in which ims(x) contains at most one point for each x ∈ T with cf(x) uncountable.
• The coarse wedge topologies of (T, ≤) and (T, ⪯) coincide.

Proof. For any z ∈ T with cf(z) uncountable and ims(z) ≠ ∅ set N(z) = card ims(z) and fix an enumeration ims(z) = {z[1], z[2], . . . , z[N(z)]}. Then the new order which does the job may be defined by rearranging each such set ims(z) into a finite chain. In other words, for each z ∈ T with cf(z) uncountable and ims(z) ≠ ∅ we set z ≺ z[1] ≺ z[2] ≺ · · · ≺ z[N(z)], preserving the relations of the remaining points.

In the sequel by an r-tree we will mean a tree satisfying the condition (3) from Proposition 5.19, i.e., a tree having a retractional skeleton. Further, an r_1-tree will be a tree satisfying the stronger condition from Lemma 5.20, i.e., such that ims(x) contains at most one point for each x ∈ T with cf(x) uncountable. It is clear that r_1-trees form a subclass of r-trees. However, Lemma 5.20 says, in particular, that any r-tree is homeomorphic to some r_1-tree. Thus dealing with r_1-trees instead of r-trees does not result in losing generality. We will need certain topological properties of trees. To investigate them we will use the following function.
Let T be a tree. For s, t ∈ T set

s ∧ t = max{u ∈ T ; u ≤ s and u ≤ t}.

This is a well-defined element of T : note that ŝ ∩ t̂ is nonempty (it contains 0), closed and well ordered. The following lemma summarizes several properties of this operation.

Lemma 5.21. Let T be a tree.
(a) The mapping (s, t) → s ∧ t is continuous as a mapping T × T → T .
(b) If A ⊂ T and x ∈ cl A, then for each y < x there are a, b ∈ A with y < a ∧ b ≤ x.
(c) For any A ⊂ T the set {a ∧ b; a, b ∈ A} is invariant for ∧.
(d) If A ⊂ T is invariant for ∧, then cl A is invariant for ∧ as well.

Proof. (a) Fix any pair (s, t) ∈ T × T . We distinguish three possibilities.

Case 1: s and t are incomparable. Then s ∧ t < s and s ∧ t < t. So, we can fix s′, t′ ∈ ims(s ∧ t) such that s′ ≤ s and t′ ≤ t. Then

Case 2: s = t. Let W^F_x be a basic neighborhood of s ∧ t = s = t (i.e., x ≤ s ∧ t is on an isolated level and F ⊂ ims(s ∧ t) is finite). Then W^F_x is also a neighborhood both of s and of t and u ∧ v ∈ W^F_x whenever u, v ∈ W^F_x .

Case 3: s and t are comparable but different. Without loss of generality s < t. Then s ∧ t = s. Let W^F_x be a basic neighborhood of s ∧ t = s (i.e., x ≤ s is on an isolated level and F ⊂ ims(s) is finite). Let y ∈ ims(s) be such that y ≤ t. Then V_y is a neighborhood of t, W^{F∪{y}}_x a neighborhood of s and u ∧ v ∈ W^F_x whenever u ∈ W^{F∪{y}}_x and v ∈ V_y .

(b) Assume x ∈ cl A. If x ∈ A, the conclusion is obvious. Thus suppose x ∈ cl A \ A. We distinguish two cases:

Case 1: x is on an isolated level. Then V_x is a neighborhood of x, so there is some

Case 2: x is on a limit level. Fix any y < x. Let z ∈ ims(y) be such that z ≤ x. Then z < x and V_z is a neighborhood of x, thus we can find some s ∈ V_z ∩ A. If s ≤ x, the proof is complete (as s ∧ s = s ∈ (z, x]). So, assume s ≰ x, i.e., s ∧ x < s. Let u ∈ ims(s ∧ x) be such that u ≤ s.

(c) Denote the set from the statement by B. Assume a, b, c, d ∈ A. We need to show that (a ∧ b) ∧ (c ∧ d) ∈ B. Consider the three elements a ∧ b, a ∧ c, a ∧ d. They are contained in the linearly ordered set â, so one of them is the smallest. If the smallest one is a ∧ b, then a ∧ b ≤ c ∧ d, hence (a ∧ b) ∧ (c ∧ d) = a ∧ b ∈ B. Assume that the smallest one is a ∧ c. Then

Moreover, let x ∈ ims(a ∧ c) be such that x ≤ a. Then x ≤ a ∧ b, thus x ∈ \widehat{a ∧ b}.
On the other hand,

The case when the smallest one is a ∧ d is analogous to the previous one (just interchange the role of c and d).

(d) Assume A is invariant for ∧. Let x, y ∈ cl A. If they are comparable, then x ∧ y ∈ {x, y} ⊂ cl A. So, assume x and y are incomparable, i.e. x ∧ y < x and x ∧ y < y. By (b) there are a, b, c, d ∈ A such that x ∧ y < a ∧ b ≤ x and x ∧ y < c ∧ d ≤ y. It follows that x ∧ y = (a ∧ b) ∧ (c ∧ d) ∈ A ⊂ cl A.

Lemma 5.22. Let T be a tree. Then w(T ) = dens(T ) and card I(T ) ≤ dens(T ).

Proof. Let A ⊂ T be a dense set with card A = dens(T ) and set à = {x ∧ y; x, y ∈ A}. Then à is invariant for ∧ (by Lemma 5.21(c)), clearly it has the same cardinality as A and it is a dense subset of T . It follows from Lemma 5.21(b) that à ⊃ I(T ).

Proposition 5.23. Any tree is a monolithic space, i.e., the weight and density coincide for each of its subsets.

Proof. Let T be a tree and A ⊂ T an arbitrary infinite subset (for finite sets the statement is trivial). Let κ = dens(A). Let à = {x ∧ y; x, y ∈ A}. By Lemma 5.21(a) we see that à is a continuous image of A × A, hence dens à ≤ κ. By Lemma 5.21(c) à is invariant for ∧, hence F = cl(Ã) is also invariant for ∧ by Lemma 5.21(d). Clearly dens F ≤ κ. Further, F with the inherited order is a tree and the subspace topology coincides with the coarse wedge topology of F by [42, Lemma 2.1]. By Lemma 5.22 we get that w(F ) = dens(F ) ≤ κ. Hence w(A) ≤ κ = dens A. Since the converse inequality always holds, this completes the proof.

To present a canonical retractional skeleton on an r_1-tree, we introduce for any such T the following family.

A = A(T ) = {A ∈ A_0(T ); x ∧ y ∈ A whenever x, y ∈ A}

Lemma 5.24. Let T be an r_1-tree. Then the following hold:

Proof. (i) By Proposition 5.23 the weight and density coincide for subsets of trees. So, we can work with densities and, moreover, the density of a subset is not larger than the density of the original set. Similarly as in [41] choose for any x ∈ S(T ) \ I(T ) a countable set φ(x) ⊂ x̂ ∩ I(T ) with supremum x. Now we are ready to provide a proof of (i). Fix any A ∈ A_0 and let κ = max{w(A), ℵ_0}. Let us define by induction the following sequences of sets. Set A_0 = A.
If n ∈ N is given and A_{n−1} is defined we set B_n = {x ∧ y; x, y ∈ A_{n−1}},

It is clear that A_n is closed for each n ∈ N ∪ {0}. Further, B_n is closed as well by Lemma 5.21(a). We continue by showing that all the sets A_n, B_n and C_n have density at most κ. A_0 = A has density at most κ by the definition of κ. So, assume that dens A_{n−1} ≤ κ. It follows from Lemma 5.21(a) that B_n is a closed set of density at most κ. Further, by Lemma 5.21(c) it is invariant for ∧, so by [42, Lemma 2.1] the topology on B_n coincides with the coarse wedge topology generated by the restricted order. Hence, by Lemma 5.22 card I(B_n) ≤ κ. Further, clearly

{x ∈ B_n ∩ (S(T ) \ I(T )); x > sup{y < x; y ∈ B_n}} ⊂ I(B_n),

hence card(C_n \ B_n) ≤ κ. So, dens A_n ≤ κ. We set B = cl(⋃_n A_n). Then B is a closed set of density at most κ. Let us show that B ∈ A. Clearly 0 ∈ B. Further, B is closed under the operation ∧. Indeed, by construction B = cl(⋃_n B_n), each B_n is closed under ∧ and the sequence (B_n) is increasing, so we can use Lemma 5.21(d). It remains to show that B ∈ A_0. To this end fix any x ∈ B \ I(T ) and any y < x. We need to find z ∈ (y, x) ∩ B. Let us distinguish three cases:

Case 1: x ∉ ⋃_n A_n. By Lemma 5.21(b) there are n ∈ N and a, b ∈ A_n such that y < a ∧ b ≤ x. Since x ∉ ⋃_n A_n, we get a ∧ b < x, so z = a ∧ b ∈ (y, x) ∩ B_{n+1} ⊂ (y, x) ∩ B.

Case 2: x ∈ A_n ∩ S(T ) for some n ∈ N. If (y, x) ∩ A_n = ∅, then φ(x) ⊂ C_{n+1} ⊂ B. So, any z ∈ φ(x) ∩ (y, x) does the job.

Case 3: x ∈ A_n \ S(T ) for some n ∈ N. If x ∈ A_0, the conclusion follows from the assumption A_0 = A ∈ A_0. So, assume x ∉ A_0. Then there is some n ∈ N with x ∈ A_n \ A_{n−1}. By Lemma 5.21(b) there are a, b ∈ C_n such that y < a ∧ b ≤ x. Then a ∧ b ∈ B_{n+1} ⊂ B. So, it is enough to show that a ∧ b < x. Assume that a ∧ b = x. Since cf(x) is uncountable, the assumption that T is an r_1-tree implies that a = x or b = x, so x ∈ C_n. But C_n \ B_n ⊂ I(T ), so x ∈ B_n. Hence x = c ∧ d for some c, d ∈ A_{n−1}.
Using again that T is an r_1-tree we deduce that x = c or x = d, thus x ∈ A_{n−1}, a contradiction.

(ii) This assertion follows from (i) as A ∪ B ∈ A_0.

(iii) Let A′ ⊂ A be up-directed by inclusion and B = cl(⋃ A′). Let us show that B ∈ A. Clearly B is closed and 0 ∈ B. Further, each A ∈ A′ is invariant for ∧ (as A′ ⊂ A), hence ⋃ A′ is invariant for ∧ (as A′ is up-directed). So, by Lemma 5.21(d) B is invariant for ∧ as well. It remains to show that B ∈ A_0. So, fix x ∈ B on a limit level and any y < x. We shall prove that there is some z ∈ (y, x) ∩ B.

This completes the proof that B ∈ A. It remains to prove the limit formula for r_B. So, take any x ∈ T . Let us distinguish the following two cases:

. This proves the convergence.

Case 2: y = r_B(x) ∉ ⋃ A′. Then y is on a limit level of T . Indeed, assume that y ∈ I(T ). By Lemma 5.21(b) there are a, b ∈ ⋃ A′ with a ∧ b = y. Since A′ is directed, there is A ∈ A′ with a, b ∈ A. Then y = a ∧ b ∈ A, a contradiction. Let U be an open set containing y. Then there are z < y on an isolated level of T and a finite set F ⊂ ims(y) such that W^F_z ⊂ U . In case y has uncountable cofinality, we may and shall assume that

Let A_ω = A_ω(T ) denote the family of all the separable sets from A. Then we get the following result.

Proposition 5.25. Let T be an r_1-tree. Then (r_A)_{A∈A_ω} is a retractional skeleton on T . The induced subset is S(T ).

Proof. By Lemma 5.24(ii) A_ω is up-directed. If A ∈ A_ω, then r_A(T ) = A, so it is metrizable by Proposition 5.23, hence the property (i) of retractional skeletons is satisfied. The property (ii) follows from Lemma 5.17, the property (iii) from Lemma 5.24(iii). Further,

Indeed, the first equality follows from the fact that r_A(T ) = A for each A ∈ A_ω. Let us prove the second one. ⊂: Let A ∈ A_ω. Assume that there is some x ∈ A with uncountable cofinality. Since A ∈ A, the set x̂ ∩ A is uncountable.
Since this set is well ordered and the inherited topology coincides with the order topology, it is not separable. Since separability is hereditary for subsets of T by Proposition 5.23, A is not separable, which is a contradiction.

In particular, A′ is dense, hence the property (iv) of retractional skeletons follows from Lemma 5.24(iii). Therefore (r_A)_{A∈A_ω} is a retractional skeleton on T . The formula for the induced subset follows from the above argument.

For any

Then we get the following

By [42, Theorem 3.2] any µ ∈ M(T ) has separable support, which proves the first equality. Let us show the second one. The inclusion ⊂ is obvious, let us prove the converse one. I.e., assume that µ ∈ M(T ) is such that µ({x}) = 0 whenever cf(x) is uncountable. Let µ = µ_d + µ_c, where µ_d is a discrete measure and µ_c is a continuous measure. Let C = {x ∈ T ; µ({x}) ≠ 0}. Then C is a countable subset of S(T ), thus cl C ⊂ S(T ). Since spt µ_d = cl C, we deduce spt µ_d ⊂ S(T ). It remains to prove that spt µ_c ⊂ S(T ) as well. Assume that x ∈ T with cf(x) uncountable. We know that µ_c({x}) = 0, hence also |µ_c|({x}) = 0. Since |µ_c| is regular, there is a sequence (y_n) in x̂ ∩ I(T ) such that |µ_c|(W^{F_n}_{y_n}) < 1/n. Let y = sup_n y_n. This supremum exists as (y_n) belongs to the well-ordered set x̂. Moreover, y < x as cf(x) is uncountable. Let z ∈ ims(y) be such that z ≤ x. Then W

Let us now provide a Markushevich basis of C(T ) which is a generalization of the canonical Markushevich basis of C([0, η]). Note that by the following proposition C(T ) admits a strong Markushevich basis for an arbitrary T ; a projectional skeleton is not required. We will further discuss its properties in case T is an r_1-tree. We start by defining the respective basis. For x ∈ I(T ) let g_x = χ_{V_x}. Then g_x ∈ C(T ). Further, set ν_x = δ_x − δ_{x^-} for x ∈ I(T ) \ {0} and ν_0 = δ_0, where x^- denotes the immediate predecessor of x.

Proof.
(a) It is clear that (g_x, ν_x)_{x∈I(T)} is a biorthogonal system, i.e., the first property of Markushevich bases is fulfilled. Let us continue by the third one, i.e., by showing that the measures ν_x, x ∈ I(T ), separate points of C(T ). To this end fix f ∈ C(T ) \ {0}. There is some y ∈ T with f(y) ≠ 0. Recall that ŷ is well ordered, so we can take the smallest x ∈ ŷ with f(x) ≠ 0. Since f is continuous, necessarily x ∈ I(T ). Moreover, clearly ν_x(f ) ≠ 0. To finish the proof we will need the following property of measures on T :

∀ν ∈ M(T ) ∀C ⊂ I(T ) consisting of mutually incomparable elements :

Indeed, since C consists of mutually incomparable elements on isolated levels, the family V_x, x ∈ C, is a disjoint family of open sets. Therefore the equality follows from τ-additivity of Radon measures. The second property of Markushevich bases follows from the stronger property defining strong Markushevich bases. Fix f ∈ C(T ). Set

The proof will be complete if we show f ∈ span M . To this end we will use the Hahn-Banach theorem. So, fix any µ ∈ M(T ) such that µ|_M = 0. We are going to show that µ(f ) = 0. If f = 0, the assertion is trivial, so suppose f ≠ 0. If f is constant, then f ≡ f(0) ≠ 0, thus 0 ∈ A and χ_{V_0} = 1 ∈ M . Therefore

So, assume f is not constant. We will construct by transfinite induction subsets T_α ⊂ T and R_α ⊂ T as follows. Set T_0 = ∅. Assume that α > 0 and that we have constructed T_β for β < α. Assume moreover that (T_β)_{β<α} is an increasing transfinite sequence of closed sets which are also downward closed (i.e., x̂ ⊂ T_β whenever x ∈ T_β for β < α). We define R_α to be the set of all the minimal elements from T \ ⋃_{β<α} T_β. Note that R_α consists of mutually incomparable elements of T and R_1 = {0}. Set

It is clear that T_α ⊃ ⋃_{β<α} T_β ∪ R_α and it is downward closed. Further, it is also a closed subset of T . Indeed, fix any y ∈ T \ T_α. Then, in particular, y ∈ T \ ⋃_{β<α} T_β, thus there is x ∈ R_α with x ≤ y.
Since x ∈ T_α, necessarily x < y.

Indeed, assume that T \ ⋃_{α<ω_1} T_α ≠ ∅. So, fix a minimal x ∈ T \ ⋃_{α<ω_1} T_α. For each α ∈ [1, ω_1) let x_α be the unique element of x̂ ∩ R_α. By construction the net (x_α) is strictly increasing and has supremum x. It follows that cf(x) = ℵ_1. But f , being continuous on x̂, is constant on [y, x] for some y < x. Let α < ω_1 be such that x_α > y.

Indeed, let B denote the set of all the Borel subsets of R_α which have r_H(µ)-measure zero. Observe that V_y ∩ R_α ∈ B for any y ∈ ⋃_{β<α} R_{β+1}. Indeed, let y ∈ R_{β+1} for some β < α. Then

The first two equalities follow from definitions. The third one follows from the equality of the respective sets, which we are going to prove. ⊂: Assume x ∈ V_y ∩ R_α. Let γ ∈ (β, α) be arbitrary. Since y ∈ R_{β+1} ⊂ T_γ and x ∉ T_γ, there is (a unique) z ∈ R_{γ+1} with z ∈ (y, x). Then z ∈ V_y ∩ R_{γ+1} and V_x ⊂ V_z. Further, the sets of the form R_α ∩ V_y, y ∈ ⋃_{β<α} R_{β+1}, form a basis of the topology of R_α. This basis is σ-disjoint and closed under finite intersections. It follows that each open set belongs to B, thus B contains all Borel sets. Thus

hence ∫_{T_α} f dµ = 0, completing the induction argument. Finally, since (T_α)_{α<ω_1} is an increasing transfinite sequence of closed sets covering T and spt µ is separable (see [42, Theorem 3.2]), there is some α < ω_1 such that spt µ ⊂ T_α. It follows that ∫_T f dµ = 0 which completes the proof.

(b) Assume that T is an r_1-tree. To prove (b-i) we observe that

Indeed, the inclusion ⊂ is obvious. To prove the converse one fix any f in the set on the right-hand side. Since S(T ) is dense, f attains only the values 0 and 1, so f = χ_A for a clopen set A ⊂ T . Given x ∈ A, we have f(x) = 1. By continuity of f we can find some y ∈ x̂ ∩ I(T ) with f(y) = 1. So, we get V_y ∩ S(T ) ⊂ A. Since S(T ) is dense, we deduce V_y ⊂ A. It follows that A is covered by sets V_y, y ∈ A ∩ I(T ). By compactness we can find a finite subcover.
Moreover, this subcover can be taken disjoint (as any two sets of the form V_y are either disjoint or one of them contains the other). We claim that this cover contains only one set. Indeed, given any two points y, z ∈ A ∩ I(T ) such that the sets V_y and V_z belong to the subcover and are disjoint, we deduce that y and z are incomparable, thus y ∧ z ∈ S(T ) (as T is an r_1-tree), so y ∧ z ∈ A. It follows that there is some x ∈ I(T ) ∩ A such that V_x belongs to the subcover and y ∧ z ∈ V_x. But then V_y ∪ V_z ⊂ V_x, a contradiction with the assumption that the subcover is disjoint. So, the equality is proved. Finally, it is clear that the set on the right-hand side is τ_p(S(T ))-closed and hence, a fortiori, σ(C(T ), D(T ))-closed.

Let us continue by proving the assertion (b-ii). Let x ∈ I(T ). If x = 0, then g_x is an isolated point of

Finally, assume that x ∈ I(T ) \ {0} and x^- has uncountable cofinality. We are going to prove

To this end take any µ ∈ D(T ). Then spt µ is a compact subset of S(T ). In particular, x^- ∉ spt µ. Thus there is some y_0 ∈ x̂ ∩ I(T ) such that W^{{x}}_{y_0} ∩ spt µ = ∅ (recall that ims(x^-) = {x}). Then for each y ∈ (y_0, x) ∩ I(T ) we have

Hence, the convergence is proved, so g_x is an accumulation point of H and the proof of (b-ii) is completed.

Let us look at (b-iii). Denote by M the set of all the maximal elements of T . If there is some x ∈ M \ S(T ), then in the same way as above we prove that

Next assume that M is infinite. We will construct by induction elements x_n ∈ M and y_n ∈ I(T ) such that the following conditions are fulfilled for each n ∈ N.
• y_n ≤ x_n ;
• y_n > max{y_j ∧ x_n ; 1 ≤ j < n};
• M \ ⋃_{j=1}^n V_{y_j} is infinite.

We start by fixing two distinct points a, b ∈ M . Since they are incomparable, a ∧ b < a and a ∧ b < b. Fix c, d ∈ I(T ) such that a ∧ b < c ≤ a and a ∧ b < d ≤ b. Then V_c and V_d are disjoint, hence at least one of the sets M \ V_c, M \ V_d is infinite. Without loss of generality assume the first case occurs. Then set x_1 = a and y_1 = c and all the conditions are fulfilled for n = 1.
Further, assume that n ∈ N and x_j and y_j are given for j ≤ n such that the conditions are fulfilled for j ≤ n. Fix two distinct points a, b ∈ M \ ⋃_{j=1}^n V_{y_j} (this is possible as the respective set is infinite). Fix c, d ∈ I(T ) such that

max{a ∧ b, a ∧ y_1, . . . , a ∧ y_n} < c ≤ a,  max{a ∧ b, b ∧ y_1, . . . , b ∧ y_n} < d ≤ b.

Then V_c and V_d are disjoint, hence at least one of the sets M \ (⋃_{j=1}^n V_{y_j} ∪ V_c), M \ (⋃_{j=1}^n V_{y_j} ∪ V_d) is infinite. Assume without loss of generality that the first case occurs. Then we can set x_{n+1} = a and y_{n+1} = c. Therefore, the construction can be performed. Note that the sets V_{y_n}, n ∈ N, are pairwise disjoint, hence

g_{y_n} = χ_{V_{y_n}} → 0 pointwise on T,

hence also g_{y_n} → 0 weakly in C(T ) (by the Lebesgue dominated convergence theorem), hence, a fortiori, g_{y_n} → 0 in σ(C(T ), D(T )). It follows that 0 is a σ(C(T ), D(T ))-accumulation point of H. This completes the proof of the 'if part' of (b-iii). To prove the 'only if part' assume that M is finite and M ⊂ S(T ). Then

It remains to prove the assertion (b-iv). So, fix A ∈ A_ω(T ). Then, of course, P_A 0 = 0. Further, clearly P_A g_x = 0 whenever A ∩ V_x = ∅. In the remaining case, since A is closed and stable under the operation ∧, it follows that the set A ∩ V_x admits a minimum, say y. Then P_A g_x = g_y.

The next proposition provides a construction of a projectional generator in the spaces C(T ).

Proposition 5.28. Let T be an r_1-tree.
• For each µ ∈ D(T ) set

Then the pair (D(T ), Φ) is a projectional generator on C(T ).

Proof. Let µ ∈ D(T ). Then spt µ is a separable subset of S(T ). Let C_0(µ) be a countable dense subset of spt µ. Set C_1(µ) = {x ∧ y; x, y ∈ C_0(µ)}. Then C_1(µ) is countable and it is contained in S(T ) (as T is an r_1-tree). So, we can find a countable subset C(µ) ⊂ I(T ) such that

Let us show that C(µ) has the required property. Let x, y ∈ spt µ. We distinguish the following possibilities:

Case 1: x and y are comparable. Without loss of generality x ≤ y, i.e., x ∧ y = x.
This case splits into two subcases:

(a) x ∈ I(T ): By Lemma 5.

Case 2: x and y are incomparable. Then x ∧ y < x and x ∧ y < y. By Lemma 5.

This completes the induction. So, we have constructed in V_x an infinite decreasing sequence, which is impossible. This contradiction completes the proof.

Remark 5.29. (a) In the several preceding statements we deal with r_1-trees, but they admit variants for r-trees. One possibility is to use Lemma 5.20 to transfer the results. Another possibility is to define a more technical variant of the families A(T ) and A_ω(T ).

(b) If T is an r-tree which is not an r_1-tree, then the set H from Proposition 5.27(b) is not σ(C(T ), D(T ))-closed. It can be shown that its nonzero accumulation points are exactly the characteristic functions of the sets

(c) The above-defined Markushevich basis satisfies the properties from Theorem 3.1 (6,7) if and only if ht(T ) ≤ ω_1 + 1 (i.e., Lev_{ω_1+1}(T ) = ∅, in other words ims(x) = ∅ whenever cf(x) is uncountable). However, D(T ) is a Σ-subspace in more cases, see [42, Theorem 4.2]. It follows that, at least in some cases, the canonical Markushevich basis cannot be constructed using a PRI.

Let us now look at the question when C(T ) is 1-Plichko. First observe that the following equivalences follow from the results of [41, 42].

Proposition 5.30. Let T be a tree. The following assertions are equivalent.
(1) T is a Valdivia compact space.

Proof. The implications (4)

A partial characterization of Valdivia trees is given in [42, Theorem 4.2]; a complete characterization is still missing. We will provide an alternative proof of the assertion (1) of the quoted theorem. The original proof is done by a clever transfinite induction. We are going to present a short proof using Theorem 3.4 (the transfinite induction is hidden therein). The statement we are going to prove is the content of the following proposition.

Proposition 5.31. Let T be an r-tree with ht(T ) < ω_2 such that the set R(T ) of all the points of T of uncountable cofinality can be expressed as the union of ω_1-many relatively discrete sets. Then T is Valdivia.
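For orientation, a trivial instance (an added illustration; R(T) is read here as the set of points of uncountable cofinality):

```latex
% T = [0,\omega_1], viewed as a tree, has height \omega_1 + 1 < \omega_2.
% Every ordinal \alpha < \omega_1 has countable cofinality, so the only
% point of uncountable cofinality is \omega_1 itself:
R(T) = \{\omega_1\},
% which is trivially the union of \omega_1-many relatively discrete
% sets (one singleton, the rest empty). The proposition thus recovers
% the classical fact that the ordinal segment [0,\omega_1] is Valdivia.
```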
We will use the following lemma characterizing σ(C(T ), D(T ))-to-σ(C(T ), D(T )) continuity of projections P_A. Assume that T is an r_1-tree. Then for any A′ ⊂ A_ω(T ) up-directed we have A = cl(⋃ A′) ∈ A and the projection P_{A′} from (2.1) coincides with the projection P_A (due to Lemma 5.17 and Lemma 5.2).

Lemma 5.32. Let T be an r_1-tree and A ∈ A(T ). The following are equivalent.

for any x ∈ T . It follows that r_A(S) ⊂ S.

(2)⇒(1) Assume r_A(S) ⊂ S. We claim that P*_A(D) ⊂ D. To show that fix µ ∈ D. Let F = spt µ. Then F is a compact separable subset of S. Thus r_A(F ) is also a compact separable subset of S. Moreover,

Proof of Proposition 5.31. Assume that R(T ) ≠ ∅. Let η = ht(T ). Then η > ω_1 + 1. Moreover, by the assumption η < ω_2, thus card η = ℵ_1. So, we can fix a bijection ξ : I(ω_1) → I(η). By Lemma 5.20 we can assume that T is an r_1-tree. Write R(T ) = ⋃_{α<ω_1} R_α, where each R_α is relatively discrete. For each x ∈ R_α we can choose a basic neighborhood witnessing the relative discreteness, so there is z(x) < x on an isolated level such that W^{ims(x)}_{z(x)} ∩ R_α = {x}. We claim that the family W^{ims(x)}_{z(x)}, x ∈ R_α, is disjoint. Indeed, let x, y ∈ R_α be two distinct points. If the points z(x) and z(y) are incomparable, then even V_{z(x)} and V_{z(y)} are disjoint. Assume that z(x) and z(y) are comparable, without loss of generality z(x) ≤ z(y). Since y ∉ W^{ims(x)}

Let us define a subfamily of A_ω(T ) by the formula

Let us show that A′ is a cofinal and σ-closed subfamily of A_ω(T ). Let (A_n) be an increasing sequence in A′. We will show that A = cl(⋃_n A_n) ∈ A′. Clearly we have A ∈ A_ω(T ). Further, fix any α ∈ I(ω_1) such that A ∩ Lev_{ξ(α)}(T ) ≠ ∅, β ≤ α and x ∈ R_β such that

is an open set, there is some m ∈ N with A_m ∩ W^{ims(x)}_{z(x)} ≠ ∅. Further, choose some y ∈ A ∩ Lev_{ξ(α)}(T ). Since ξ(α) is an isolated ordinal, Lemma 5.21(b) yields a, b ∈ ⋃_n A_n with y = a ∧ b. Since the sequence (A_n) is increasing, there is some n ∈ N with a, b ∈ A_n. Then y = a ∧ b ∈ A_n as well. So, Lev_{ξ(α)}(T ) ∩ A_k ≠ ∅ for k ≥ n. It follows that ims(x) ⊂ A_k for k ≥ max{m, n}. This shows that A ∈ A′ which completes the proof that A′ is σ-closed.
Let us continue by showing that A′ is cofinal. To this end fix any A_0 ∈ A_ω(T ). Given A_{n−1} ∈ A_ω(T ) for some n ∈ N we perform the following construction.

5.4. Duals of Asplund spaces. The third class of spaces having a possibly non-commutative projectional skeleton is the class of duals of Asplund spaces. Asplund spaces can be even characterized in this way. These characterizations are summarized in the following theorem.

Theorem 5.33. Let X be a Banach space. The following assertions are equivalent.
(1) X is an Asplund space.
(2) There is a projectional generator on X* of the form (X, Φ) (i.e., its domain is X).
(3) There is an ω-monotone mapping ψ :

is dense in X* and, moreover, for each C ∈ [X]^{≤ω} we have
• ψ(C) ⊃ C;
• ψ(C) ∩ X and ψ(C) ∩ X* are linear subspaces;
• For each C ∈ [X]^{≤ω} the mapping x* → x*|_{span C} maps G(C) onto (span C)*.
(5) There is a projectional skeleton on X* such that the induced subspace contains X.

Proof. The implication (1)⇒(2) is the deep one and is proved in [15, Proposition 8.2.1]. Let us recall just a sketch of the proof. Let X be an Asplund space. It follows from a selection theorem [15, Theorem 8.1.2] that there is a function g : X → X* with the properties
• ‖g(x)‖ = 1 and g(x)(x) = ‖x‖ for each x ∈ X;
• g is of the first Baire class.
It follows that there is a sequence (g_n) of continuous functions g_n : X → B_{X*} which pointwise converges to g. If we take Φ(x) = {g_n(x); n ∈ N}, x ∈ X, then the pair (X, Φ) is a projectional generator. Indeed, assume M ⊂ X is such that cl M is a linear subspace and that there is some

Since the functions g_n are continuous, we deduce that x** ∈ Φ(M )^⊥, so, without loss of generality M is a closed linear subspace of X. Fix some x* ∈ X* such that x**(x*) ≠ 0. We will construct by induction points x_n ∈ B_X ∩ M and y*_{n,k} ∈ X* for k, n ∈ N such that the following conditions are fulfilled for each n ∈ N.

Let J : V → X be the canonical isometric inclusion.
Then J* : X* → V * is the restriction map and J** :

Further, for each x ∈ V we have ‖g(x)|_V‖ ≤ 1 and g(x)(x) = ‖x‖. It follows that {g(x)|_V ; x ∈ V } is a James boundary for V . Since V * is separable, by Rodé theorem (see [38] or [16, Theorem 5.7

Finally, note that the proof was done for real spaces, but the complex case easily follows. Indeed, if X is a complex Asplund space, its real version is a real Asplund space and the projectional generator for the real version works for the complex case as well.

(2)⇒(3) This implication is rather easy, it follows essentially from the proof of [15, Lemma 6.1.3]. We will provide a proof in the real case. The proof in the complex case is the same, one just needs to replace Q by Q + iQ at the appropriate places. Fix any C ∈ [X]^{≤ω}. Let ψ_0(C) = span_Q C and define ψ_n(C) for n ∈ N by induction. Clearly the mappings ψ_n are ω-monotone, thus the mapping ψ defined by ψ(C) = ⋃_{n∈N∪{0}} ψ_n(C) is ω-monotone as well. We will show that ψ is the sought mapping. Fix any C ∈ [X]^{≤ω}. Then clearly C ⊂ ψ(C) and both ψ(C) ∩ X and ψ(C) ∩ X* are countable Q-linear spaces, hence their closures are linear spaces. Further, for any x* ∈ ψ(C) ∩ X* we have η(x*) ⊂ ψ(C) ∩ X, thus ‖x*‖ = ‖x*|_{ψ(C)∩X}‖. So, it follows that the restriction mapping x* → x*|_{ψ(C)∩X} is an isometry. To complete the proof of the third property it remains to show that it is even onto. To this end denote Y = cl(ψ(C) ∩ X), Z = cl(ψ(C) ∩ X*) and let j be the canonical isometric embedding of Y into X. Then j* : X* → Y * is the restriction mapping. Above we have proved that j*|_Z is an isometry, so it has a closed range. If it is not onto, the Hahn-Banach theorem yields y** ∈ Y ** \ {0} such that y**|_{j*(Z)} = 0. Set x** = j**y**. Since j** is again an isometric embedding, x** ≠ 0. Moreover,

is the range of j**). Further, for any x* ∈ Z we have

a contradiction with the properties of the projectional generator.

(4)⇒(5) Let G be the mapping provided by (4).
The index set for the skeleton will be

Let us show that Γ is a cofinal σ-closed subset of [X]^{≤ω}. Fix an increasing sequence (C_n) in Γ and set C = ⋃_n C_n. Then C ∈ Γ as well. Indeed, let x* ∈ G(C). Then there is some n ∈ N with x* ∈ G(C_n). It follows that

Passing to the closure shows that C ∈ Γ. Having the index set, we will construct the projections. Fix any C ∈ Γ. Let Y = cl span C and Z = cl G(C). Then the mapping x* → x*|_Y is an isometry of Z onto Y *. Let j : Y → X and ι : Z → X* be the canonical isometric inclusions. Since j* : X* → Y * is the restriction mapping, we get that j* ◦ ι is an isometry of Z onto Y *. Since ι*(x**) = x**|_Z, Lemma 3.2 shows that there is a bounded linear projection P_C on X* such that P_C X* = Z and P*_C X** = cl_{w*} Y (in fact, P_C has norm one by the respective proof). Let us show that (P_C)_{C∈Γ} is a projectional skeleton on X*. We already know that each P_C is a bounded linear projection. Since P_C X* = cl G(C) for each C ∈ Γ, it is separable, so the property (i) is fulfilled. The property (iii) follows from the assumption that G is ω-monotone. Let us show the property (ii). Assume C_1, C_2 ∈ Γ are such that C_1 ⊂ C_2. Then P_{C_1} X* = cl G(C_1) ⊂ cl G(C_2) = P_{C_2} X* and P*_{C_1} X** = cl_{w*} span C_1 ⊂ cl_{w*} span C_2 = P*_{C_2} X**, so P_{C_1} P_{C_2} = P_{C_2} P_{C_1} = P_{C_1}. Finally, the property (iv) follows from the first property of G. Indeed, let x* ∈ X*. Then there are sequences (C_n) in [X]^{≤ω} and (x*_n) in X* such that x*_n ∈ G(C_n) and x*_n → x*. Let C ∈ Γ be a set containing each C_n. Then x*_n ∈ G(C) for each n ∈ N, thus x* ∈ cl G(C) = P_C X*. Finally, since P*_C X** = cl_{w*} span C ⊃ C, the induced subspace contains X.

(5)⇒(1) Let (P_s)_{s∈Γ} be a projectional skeleton on X* such that the induced subspace contains X. Since X is 1-norming in X** we may assume without loss of generality that it is a 1-projectional skeleton (by Lemma 1.3). Let Y be a separable subspace of X. Let C ⊂ Y be a countable dense set.
Then there is some s ∈ Γ such that P_s x = x for x ∈ C. It follows that P_s x = x for x ∈ Y . Let y* ∈ Y *. The Hahn-Banach theorem yields x* ∈ X* with x*|_Y = y*. Moreover, for any y ∈ Y we have P_s x*(y) = x*(P*_s y) = x*(y) = y*(y). It follows that the mapping x* → x*|_Y maps P_s X* onto Y *. Since P_s X* is separable, we infer that Y * is separable as well.

Remark 5.34. (1) Let us stress that the characterizing property of Asplund spaces is not the existence of a 1-projectional skeleton on the dual space, but the existence of such a skeleton whose induced subspace contains the original space (canonically embedded in the bidual). Indeed, for example C(K)* is 1-Plichko for any compact space K (see, e.g., [25, Example 4.10(a)] or [27, Theorem 5.5]), but not every C(K) space is Asplund. More generally, the dual of any C*-algebra is 1-Plichko by [4, Corollary 1.3] (for further generalizations see [5, 6]).

(2) Let X be an Asplund space. By the preceding theorem we know that there is a projectional skeleton on X* such that the induced subspace contains X. We point out that the induced subspace is not equal to X (unless X is reflexive), it is larger and equal to

This follows easily from the topological properties of induced subspaces.

(3) The projectional skeleton from the preceding theorem need not be commutative. The class of those spaces X such that X is contained in a Σ-subspace of X** (thus such that D(X) is a Σ-subspace) is called class (T) in [25]. By the above theorem the class (T) is a subclass of Asplund spaces (see also [25,

There are Asplund spaces which do not belong to the class (T) but simultaneously their duals are 1-Plichko. Indeed, if K is any scattered compact space, then C(K) is Asplund and, moreover, C(K)* is canonically isometric to ℓ_1(K) which is 1-Plichko.
If K is uncountable, then there are many 1-norming Σ-subspaces of ℓ_1(K)* = ℓ_∞(K) (see [24, Example 6.9]), but it may happen that none of them contains C(K) (this takes place for example if K = [0, ω_2], see [25, Example 4.10(b)] and its proof). We do not know any nontrivial characterization of the class (T). However, there is a smaller subclass having nice characterizations. They are collected in the following theorem.

Theorem 5.35. Let X be a Banach space. The following assertions are equivalent.

It follows from the properties of θ that M is a norm-dense subset of X*. For each x* ∈ M fix some C(x*) ∈ [X]^{≤ω} with x* ∈ θ(C(x*)). Define a mapping θ̃ :

It is clear that θ̃ is ω-monotone and θ̃(A) ⊃ A for each A ∈ [X ∪ M ]^{≤ω}. Further, since θ̃(A) = θ(C) for some C, θ̃ has the obvious analogues of the properties of θ. Let us continue by modifying ψ. Since M is norm-dense in X*, there is a mapping ζ : X* → [M ]^{≤ω} such that x* ∈ cl ζ(x*) for each x* ∈ X*. Moreover, we can assume that ζ(x*) = {x*} for x* ∈ M . For A ∈ [X ∪ M ]^{≤ω} set ψ_0(A) = A and define by induction

ψ_n(A) = ψ(ψ_{n−1}(A)) ∩ X ∪ ζ(ψ(ψ_{n−1}(A)) ∩ X*) for n ∈ N.

(2)⇒(3) Let ϕ be the mapping from (2). For the index set take [X ∪ M ]^{≤ω}. By Lemma 3.2 for each A ∈ Γ there is a bounded linear projection P_A on X such that P_A X = cl(ϕ(A) ∩ X) and P*_A X* = cl_{w*}(ϕ(A) ∩ M ) = cl(ϕ(A) ∩ M ). Now, as in the proof of Theorem 4.1(3)⇒(1) we see that (P_A)_{A∈Γ} is a projectional skeleton on X with induced subspace X*. Moreover, by the ω-monotonicity of ϕ together with the coincidence of the weak* and norm closures of ϕ(A) ∩ M we infer that (P*_A)_{A∈Γ} is a projectional skeleton on X*. Set P_α = Q*_α|_X. We claim that (P_α)_{α∈Λ} is a projectional skeleton on X and P*_α = Q_α for each α ∈ Λ. It is clear that each P_α is a projection such that ‖P_α‖ ≤ C. Moreover, obviously P*_α = Q_α. It remains to check the properties (i)-(iv) of projectional skeletons.
Firstly, Q α X * = P * α X * is isomorphic to the dual of P α X. So, P α X is separable (as its dual is), which proves the property (i). The property (ii) is obvious. To prove the property (iii) let (α n ) be an increasing sequence in Λ. Since (Q α ) α∈Λ is a projectional skeleton, there is α = sup n α n ∈ Λ and, moreover, Q αn x * → Q α x * for each x * ∈ X * (as the skeleton (Q α ) α∈Λ satisfies the property (iii')). Now it easily follows that Q * αn x * * w * −→ Q * α x * * for each x * * ∈ X. Since the restriction of the weak * topology on X * * to X coincides with the weak topology of X, we deduce P αn x w −→ P α x for each x ∈ X, so P α X equals the weak closure of ∪ n P αn X, which coincides with its norm closure as the union is a linear subspace. This completes the proof of the property (iii). To prove the property (iv) set P x = lim α∈Λ P α x, x ∈ X. Then P is a well-defined projection on X with ‖P ‖ ≤ C (cf. (2.1)). We claim that P is the identity of X. If not, then ker P ≠ {0}, so there is some x ∈ X \ {0} with P x = 0. It means that P α x = 0 for each α ∈ Λ (see Lemma 2.2(a)). This is a contradiction. So, P is the identity mapping and now the property (iv) easily follows from the property (iii). The proof will be done by transfinite induction on the density character of X. First assume that X is separable. Let (P s ) s∈Γ be a shrinking projectional skeleton on X. It follows from the properties of the skeleton that there is some s ∈ Γ such that P s is the identity on X. Thus P * s is the identity on X * . Since the adjoint projections form a projectional skeleton on X * , they have separable ranges. So, X * is separable. Now, it is a classical result that any space with a separable dual admits a shrinking Markushevich basis (see, e.g., [19, Theorem 1.22]). Moreover, the basis is countable, so the weak σ-compactness trivially follows. Further, assume that κ is an uncountable cardinal such that the implication holds for any space of density strictly less than κ.
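The closure identity used in the proof of property (iii) can be displayed explicitly; for an increasing sequence (α_n) with α = sup_n α_n,

```latex
P_\alpha X \;=\; \overline{\bigcup_{n} P_{\alpha_n} X}^{\,w} \;=\; \overline{\bigcup_{n} P_{\alpha_n} X}^{\,\|\cdot\|},
```

the second equality holding because the union is a linear (hence convex) subspace, whose weak and norm closures coincide by Mazur's theorem.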
Let X be a Banach space of density character κ having a shrinking projectional skeleton (P s ) s∈Γ . Since the induced subspace equals whole X * , we can assume that it is a commutative 1-projectional skeleton (by Theorem 3.4 and Lemma 1.3). Let (A α ) α≤κ be a transfinite sequence of subsets of Γ provided by Lemma 2.6. By Proposition 2.7 the transfinite sequence (P Aα ) α≤κ is a PRI on X. Further, denote Q s = P * s for s ∈ Γ. By the assumptions (Q s ) s∈Γ is a projectional skeleton on X * (in fact a commutative 1-projectional skeleton by the above). So, for any directed set A ⊂ Γ we can define the projection Q A on X * by the formula (2.1). We claim that Q A = P * A . Indeed, given x * ∈ X * and x ∈ X we have It follows that (P * Aα ) α≤κ = (Q Aα ) α≤κ is a PRI on X * . Now fix α < κ and let P = P α+1 − P α . Since the skeleton (P s ) s∈Γ is commutative, Lemma 2.2 yields P s P = P P s for each s ∈ Γ. In particular, (P s | P X ) s∈Γ is a projectional skeleton on P X. We will show it. Example 5.36. Let K be a scattered locally compact space. Then X = C 0 (K) is an Asplund space and X * can be canonically identified with ℓ 1 (K). Consider the canonical Markushevich basis of ℓ 1 (K), i.e., (e x , e * x ) x∈K , where e x and e * x are the canonical basic vectors in ℓ 1 (K) and ℓ ∞ (K), respectively. Regardless of the concrete topological structure of K the set H is σ(X * , D(X))-closed in X * . Indeed, it is even weak * -closed: if K is compact, then (H \ {0}, w * ) is homeomorphic to K; if K is not compact, then (H, w * ) is homeomorphic to the one-point compactification of K. Moreover, for the Markushevich basis (e x , e * x ) x∈K defined above the following holds.
• e * x ∈ X for each x ∈ K if and only if K is discrete, i.e., if X = c 0 (K).
• e * x ∈ D(X) for each x ∈ K if and only if each point of K is G δ , i.e., if K is locally countable.
In some cases there is a better Markushevich basis than the one described in the previous example.
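The omitted computation verifying Q_A = P_A^* presumably proceeds along these lines, taking limits over the directed set A as in formula (2.1) and using Q_s = P_s^*:

```latex
(Q_A x^*)(x) \;=\; \lim_{s \in A} (Q_s x^*)(x) \;=\; \lim_{s \in A} x^*(P_s x) \;=\; x^*(P_A x) \;=\; (P_A^* x^*)(x), \qquad x^* \in X^*, \; x \in X.
```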
Some concrete cases are described in the following examples. Example 5.37. Let K be a countable compact space. Then C(K) admits a (countable) shrinking Markushevich basis (as C(K) * is separable), but it must be different from the basis from the previous example unless K is finite. An explicit formula can be given as follows. Firstly, K is homeomorphic to the ordinal segment [0, η] for some η < ω 1 (we assume K is infinite, so η ≥ ω). Fix a bijection ξ : [0, ω) → [0, η] such that ξ(0) = 0. We define a Markushevich bases (f n , µ n ) n<ω in C(K) as follows: It is easy to check that (f n , µ n ) n<ω is a shrinking Markushevich basis of C(K). It is easy to check that (µ α , f α ) α<ω1 is a Markushevich basis of C(K) * . Moreover, f α ∈ D(C(K)) for each α < ω 1 as each f α is a function of the first Baire class, being the characteristic function of a closed G δ set. Open problems In this section we collect several questions on projectional skeletons, projectional generators, Markushevich bases and related topics which remain open. We start by the following question on a possible generalization of Corollary 3.7. Question 6.1. Assume that X is a Banach space and D ⊂ X * is a subspace induced by a projectional skeleton on X which is of finite codimension in X * . Is D necessarily a Σ-subspace? Banach spaces whose duals admit a Σ-subspace of finite codimension have been studied in [29]. It is not clear whether there are non-commutative variants. It is easy to observe that they cannot be found among continuous functions on ordinals or on trees. A natural candidate could be a dual to a quasireflexive space. Indeed, let X be quasireflexive. Then X is Asplund, so there is a projectional skeleton on X * such that the induced subspace contains X. Since X is of finite codimension in X * * , a fortiori D(X) is of finite codimension in X * * . However, any quasireflexive space is weakly compactly generated by [46], so it belongs to the class (T). 
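The Markushevich bases appearing throughout (e.g. (f_n, μ_n) in Example 5.37) are, for reference, biorthogonal systems (x_α, x*_α)_{α∈Λ} in X × X* satisfying

```latex
x^*_\beta(x_\alpha) = \delta_{\alpha\beta}, \qquad \overline{\operatorname{span}}\,\{x_\alpha : \alpha \in \Lambda\} = X, \qquad \{x^*_\alpha : \alpha \in \Lambda\} \text{ separates the points of } X,
```

and such a basis is shrinking when, in addition, span{x*_α : α ∈ Λ} is norm-dense in X*.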
In fact, since the dual of a quasireflexive space is again quasireflexive, hence weakly compactly generated, necessarily D(X) = X * * . But it seems that the following problem is open. Question 6.2. Let X be an Asplund space such that D(X) has finite codimension in X * * . Does X belong to the class (T)? A further question is connected to the existence of a nice Markushevich basis. We know that any Banach space admitting a projectional skeleton has a Markushevich basis (by Theorem 1.2). However, the following natural question is open. Question 6.3. Assume that a Banach space X admits a projectional skeleton (P s ) s∈Γ and D ⊂ X * is the induced subspace. Does there exist a Markushevich basis (x α , x * α ) α∈Λ such that the set H = {x α ; α ∈ Λ} ∪ {0} satisfies
• H is σ(X, D)-closed in X; or
• P s (H) ⊂ H for s ∈ Γ ′ for some cofinal σ-closed Γ ′ ⊂ Γ; or at least
• (H, σ(X, D)) is monotonically Sokolov?
Observe that the positive answer to the first question implies the positive answer to the second one (by Lemma 2.1(b)). Further, the positive answer to the second question implies the positive answer to the third one. This follows from the following lemma. Lemma 6.4. Let X be a Banach space with a projectional skeleton (P s ) s∈Γ . Let D denote the respective induced subspace. Let H ⊂ X. Assume that P s (H) ⊂ H for s ∈ Γ ′ for some cofinal σ-closed Γ ′ ⊂ Γ. Then (H, σ(X, D)) is monotonically Sokolov. Proof. Since (P s ) s∈Γ ′ is a projectional skeleton on X with induced subspace D, without loss of generality we assume Γ ′ = Γ. Let A → (r A , N (A)) be the assignment constructed in the proof of the implication. The answer to the first question is positive in case the skeleton is commutative and it is witnessed by the Markushevich basis constructed using a PRI (see Theorem 3.1 and Remark 3.3(d,e)). The noncommutative case seems to be more complicated.
The answer is positive for spaces of continuous functions on ordinals (by Proposition 5.11), for continuous functions on trees (by Proposition 5.27) and for duals to Asplund C(K) spaces (by Example 5.36). Let us point out that the Markushevich basis witnessing the positive answer is in all the cases in a sense 'canonical', but it need not come from a PRI (see the comments after the quoted results). Moreover, a Markushevich basis constructed using a PRI may Question 6.11. Are monotonically Sokolov spaces stable to continuous images? Note that primarily Lindelöf spaces are stable to continuous images by the very definition, monotonically Sokolov spaces are stable to R-quotient images by [40,Theorem 3.4(g)]. The stability to continuous images is not discussed in [40]. We conjecture that the stability fails but we do not know any counterexample. Assuming the answer is negative, the following question is natural. Question 6.12. Assume that T is simultaneously primarily Lindelöf and monotonically Sokolov. Is T an R-quotient image of a closed subset of (L Γ ) N ? Another question is inspired by the fact that primarily Lindelöf spaces are defined by an explicit representation, while monotonically Sokolov are defined by existence of a certain family of retractions. So, we can ask the following general question. Question 6.13. Is it possible to characterize monotonically Sokolov space by an explicit representation (similar to that of primarily Lindelöf spaces)? Note that this is related to a similar problem of the existence of an explicit representation of compact spaces with a retractional skeleton (similar to that of Valdivia compacta) or of Banach spaces with a projectional skeleton (similar to that of Plichko spaces). It seems to be related also to the problem of a relationship of a Markushevich basis to the subspace induced by a projectional skeleton discussed above.
2018-10-02T13:58:29.000Z
2018-05-30T00:00:00.000
{ "year": 2018, "sha1": "7db4665c4d0ad1fdba920b3bde2ed59e0b54bb4c", "oa_license": null, "oa_url": "http://arxiv.org/pdf/1805.11901", "oa_status": "GREEN", "pdf_src": "Arxiv", "pdf_hash": "7db4665c4d0ad1fdba920b3bde2ed59e0b54bb4c", "s2fieldsofstudy": [ "Mathematics" ], "extfieldsofstudy": [ "Mathematics" ] }
250669046
pes2o/s2orc
v3-fos-license
Multiple attractors in the response of a flexible rotor in active magnetic bearings with geometric coupling Numerical results on the response of a flexible rotor supported by nonlinear active magnetic bearings are presented. Nonlinearity arising from the magnetic actuator forces that are nonlinear functions of the coil current and the air gap between the rotor and the stator, and from the geometric coupling of the magnetic actuators is incorporated into the mathematical model of the flexible rotor - active magnetic bearing system. For relatively large values of the geometric coupling parameter, the response of the rotor with the variation of the speed parameter within the range 0.05 ≤ Ω ≤ 5.0 displayed a rich variety of nonlinear dynamical phenomena including sub-synchronous vibrations of periods-2, -3, -6, -9, and -17, quasi-periodicity and chaos. Numerical results also reveal the occurrence of bi-stable operation within certain ranges of the speed parameter where multiple attractors may co-exist at the same speed parameter value depending on the operating speed of the rotor. Introduction Active magnetic bearings are increasingly being favored over the conventional fluid-film and rolling-element bearing types in rotating machinery applications. This is mainly due to their higher mechanical efficiency since the absence of contact between the rotor and the stator during operation of the machine reduces the losses due to friction. The magnetic bearings are, however, highly nonlinear and their interaction with the rotor that they support can lead to various nonlinear phenomena in the rotor's response. The main source of nonlinearity in active magnetic bearings is the relationship between the forces generated in the electromagnetic actuator and the coil current and the air gap between the rotor and the stator. The force is proportional to the current squared and inversely proportional to the gap squared.
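The current-squared / inverse-gap-squared actuator law described above can be illustrated with a minimal numerical sketch. This is a generic textbook model, not the paper's: the horseshoe-magnet formula F = μ0 N² A i² / (4 g²), the differential driving scheme, and the turn count and pole area are illustrative assumptions.

```python
import math

MU0 = 4e-7 * math.pi  # vacuum permeability (H/m)

def magnet_force(i, g, n_turns=300, pole_area=6e-4):
    """On-axis attractive force of one horseshoe electromagnet,
    F = mu0 * N**2 * A * i**2 / (4 * g**2) (textbook form):
    proportional to current squared, inverse to gap squared."""
    return MU0 * n_turns ** 2 * pole_area * i ** 2 / (4.0 * g ** 2)

def bearing_axis_force(i_bias, i_ctrl, g0, x):
    """Net force along one axis for an opposed pair of magnets driven
    differentially: the upper magnet sees gap g0 - x and current
    i_bias + i_ctrl, the lower one gap g0 + x and current i_bias - i_ctrl."""
    return (magnet_force(i_bias + i_ctrl, g0 - x)
            - magnet_force(i_bias - i_ctrl, g0 + x))
```

Near the centred state (x = 0, i_ctrl = 0) the net force vanishes and the law linearizes; away from the centre the 1/g² terms dominate, which is precisely the nonlinearity retained in the paper's model.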
Cross-coupling between the electromagnetic forces acting in two orthogonal directions is also a source of nonlinearity in magnetic bearing systems. One of the main causes of the cross-coupling effect is attributed to the geometry of the actuators. The air gap at a point on a magnetic pole is actually not constant over the entire pole area due to the geometrical curvature of the pole. This results in a normal force, which is perpendicular to the principal force, which in turn causes geometric coupling between these forces. Other causes of cross-coupling are attributed to gyroscopic and eddy-current effects. The effect of nonlinearity arising from cross-coupling due to gyroscopic motion on the response of a rigid rotor in magnetic bearings examined in [1] showed the occurrence of Hopf bifurcation at certain values of operating speed. Multiple co-existing solutions were found at primary resonance of a rigid rotor response in magnetic bearings incorporating nonlinearity due to geometric coupling of the actuators [2]. The effects of geometric coupling on the response of a rigid rotor in magnetic bearings investigated in [3] and [4] revealed the existence of quasi-periodic and period-2 vibrations, as well as jump phenomena. Numerical integration and numerical continuation methods were used to investigate the unbalance response of a rigid rotor in magnetic bearings [5]. This work showed the occurrence of symmetry-breaking and period-doubling bifurcations. The response of a flexible rotor supported by magnetic and auxiliary bearings investigated numerically in [6] revealed the occurrences of subsynchronous vibrations of periods-2, -4 and -8, and quasi-periodic and chaotic vibrations. The stability and bifurcations of a flexible rotor supported by radial and thrust magnetic bearings were examined using the Floquet theory in [7].
This work showed the importance of incorporating thrust magnetic bearings into the mathematical model of the rotor-bearing system, as they significantly influence the nonlinear dynamics of the system. The effect of geometric coupling parameter on the response of a flexible rotor in radial active magnetic bearings is numerically investigated in this work. Nonlinearity arising from cross-coupling due to the actuators' geometry, as well as from the magnetic actuator forces that are nonlinear functions of the coil current and the air gap between the rotor and the stator is incorporated into the mathematical model of the rotor-bearing system. Governing Equations The governing equations of a flexible rotor in active magnetic bearings are derived with the following assumptions being valid: (i) rotor is symmetric with part of its mass lumped at the rotor mid-span and the remainder at the bearing stations, (ii) rotor speed is constant, (iii) rotor and support stiffness are radially symmetric, (iv) damping force acting on the disc at rotor mid-span due to air dynamics is viscous, (v) rotor imbalance is defined in a single plane on the disc at the rotor mid-span, (vi) rotor motion in the axial direction is neglected, (vii) gyroscopic effect is neglected, (viii) flux leakage is neglected, i.e., the flux runs entirely through the iron except in the air gap, (ix) fringing effect, i.e., the spreading of flux in the air gap, is neglected, (x) magnetic iron operates below saturation level and well within the linear range of the iron magnetization curve, which implies constant permeability of the iron, and (xi) flux is homogeneous both in the iron and in the air gap and runs entirely within the magnetic loop, and the cross-section of the iron that is assumed constant along the entire loop is equal to that of the air gap. 
Accounting for the external forces acting on the rotor mid-span and bearing journal that include the rotor imbalance force, shaft elastic force, viscous damping force, magnetic bearing forces, and gravity, the governing equations can be expressed in non-dimensional form by equation (1). The motion of the system can be described by the non-dimensional displacements and of the geometric center of the rotor mid-span, and the displacements and of the geometric center of the journal. U , the unbalance parameter, which is a measure of the rotor imbalance, is defined as the ratio of the eccentricity of the rotor center of mass from its geometric center of rotation, to the nominal air gap of the magnetic bearing. Ω , the speed parameter, is the ratio of the rotor operating speed, ω , to the linear natural frequency of the magnetic bearing system, ω n . τ is the non-dimensional time. W , the gravity parameter, represents the unidirectional static force acting on the disc at the rotor mid-span, and at the bearing stations. α is the geometric coupling parameter, which is the ratio of the attractive, on-axis force between each magnet and the bearing journal to the normal, off-axis force. γ , the mass ratio, is the ratio of the journal mass, m J , to the half-mass of the disc at the rotor mid-span, m D . P and D are respectively the non-dimensional proportional and derivative feedback gains of the controller. The magnetic bearing forces and their derivation can be found in [8]. The variation of the journal displacement sampled at times nT (the Poincaré map) with Ω is then plotted to form the bifurcation diagram. The power spectrum, which exhibits the frequency contents of the rotor response at the bearing station, is determined from the Fourier transformation of the time series of the journal response in the X -direction. The bifurcation diagram for the rotor response with increasing Ω is shown in figure 1. For the range , the response of the rotor was synchronous, i.e., period-1.
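The stroboscopic (once-per-forcing-period) sampling behind the Poincaré maps and bifurcation diagrams above can be sketched generically. This is not the paper's rotor model: a simple damped, periodically forced oscillator stands in for the rotor equations, and the fixed-step RK4 integrator is an illustrative choice.

```python
import math

def rk4_step(f, t, y, h):
    """One classical Runge-Kutta step for y' = f(t, y), y a list of floats."""
    k1 = f(t, y)
    k2 = f(t + h / 2, [yi + h / 2 * ki for yi, ki in zip(y, k1)])
    k3 = f(t + h / 2, [yi + h / 2 * ki for yi, ki in zip(y, k2)])
    k4 = f(t + h, [yi + h * ki for yi, ki in zip(y, k3)])
    return [yi + h / 6 * (a + 2 * b + 2 * c + d)
            for yi, a, b, c, d in zip(y, k1, k2, k3, k4)]

def poincare_points(f, period, n_cycles, n_transient, y0, steps_per_cycle=200):
    """Integrate a periodically forced system and sample the state once per
    forcing period (a stroboscopic Poincare map), discarding transients."""
    h = period / steps_per_cycle
    t, y, pts = 0.0, list(y0), []
    for cycle in range(n_cycles):
        for _ in range(steps_per_cycle):
            y = rk4_step(f, t, y, h)
            t += h
        if cycle >= n_transient:
            pts.append(tuple(y))
    return pts

# Illustration: a damped, forced *linear* oscillator settles onto a period-1
# orbit, so all its Poincare points coincide; sweeping a system parameter and
# plotting these points against it is what builds a bifurcation diagram.
w = 1.3
f = lambda t, y: [y[1], -0.2 * y[1] - y[0] + math.cos(w * t)]
pts = poincare_points(f, 2 * math.pi / w, 200, 150, [0.1, 0.0])
```

For a period-k sub-synchronous response the sampled points would instead cycle through k distinct values, and for quasi-periodic or chaotic motion they fill a closed curve or a fractal set, which is how the attractor types reported here are distinguished.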
Chaotic motion of the rotor was observed in the range . Sub-synchronous rotor response of period-6 was found to exist for Ω = 0.81. The response of the rotor was found to be synchronous for The response of the rotor with decreasing Ω is shown in the bifurcation diagram of figure 3. For , the response of the rotor was synchronous. Quasi-periodic vibration was seen in the rotor's response for the ranges and . For the range , the rotor response was chaotic and for 10 . ≥ Ω ≥ Ω = 2.09, a period-17 attractor was observed. With further decrease in Ω , synchronous vibration response which was seen to exist in the range eventually underwent a period-doubling bifurcation resulting in periodic response of period-2 for the range . Chaotic vibration was largely seen to dominate the rotor's response for the range , except at specific frequencies where periodic vibrations were observed; period-1 at Ω = 0.27, period-3 at Ω = 0.5 and period-9 at . Attractors of period-1 and period-3 were seen to co-exist for the range . Period-1 attractors were also seen to co-exist with period-6 attractors for Ω = 0.81. For the range and for Ω = 2.29, quasi-periodic attractors co-existed with period-1 attractors. A quasi-periodic attractor was seen to co-exist with a period-17 attractor for Ω = 2.09, figure 5. Concluding Remarks A rich variety of nonlinear dynamical phenomena were observed in the response of a flexible rotor supported by active magnetic bearings for relatively large values of geometric coupling parameter α . In particular, sub-synchronous vibrations of periods-2, -3, -6, -9 and -17, as well as quasi-periodic and chaotic vibrations were seen to exist in the rotor's response within the speed parameter range . For certain speed parameter ranges, bi-stable operation was found to occur where multiple attractors may co-exist at the same speed parameter value depending on whether the operating speed of the rotor is increasing or decreasing. 
In practical rotating machinery supported by active magnetic bearings, one should not discard the possibility of synchronous rotor vibration to become non-synchronous or even chaotic when subjected to external excitations due to preloads or fluid forces that may cause the rotor's initial conditions to move from one basin of attraction to another. Non-synchronous and chaotic vibrations should be avoided in the operation of rotating machinery as they cause fluctuating stresses, which in turn may rapidly induce fatigue failure of its main components.
2022-06-28T06:10:38.070Z
2008-01-01T00:00:00.000
{ "year": 2008, "sha1": "5cb33ee36036994e878de80b5779d0a0442fd54a", "oa_license": null, "oa_url": "https://doi.org/10.1088/1742-6596/96/1/012032", "oa_status": "GOLD", "pdf_src": "IOP", "pdf_hash": "5cb33ee36036994e878de80b5779d0a0442fd54a", "s2fieldsofstudy": [ "Engineering" ], "extfieldsofstudy": [ "Physics" ] }
209961858
pes2o/s2orc
v3-fos-license
OXIDATIVE STRESS AND GENETIC INSTABILITIES IN POLYCYSTIC OVARIAN SYNDROME. The polycystic ovary syndrome (PCOS) is a hyperandrogenic disorder associated with chronic oligo-anovulation and polycystic ovarian morphology. Women with PCOS have an increased risk of miscarriage, gestational diabetes, preeclampsia and preterm labour. The pathogenesis of PCOS is however unclear but is thought to be multifactorial, consisting of endocrine, metabolic, genetic and environmental factors. The present study consists of 40 study subjects with PCOS and 35 healthy control subjects. MDA was estimated by the TBA method, and the Cytokinesis-block Micronucleus (CBMN) assay was performed using Cytochalasin-B to quantify the extent of somatic DNA damage. This study proved significant evidence of oxidative stress that might play a crucial role in the pathogenesis of PCOS and hence, oxidative stress parameters could be suggested as diagnostic markers for early diagnosis of high-risk groups. Lifestyle modifications and proper dietary management to decrease overweight and obesity would be able to reduce the unwanted clinical symptoms and hirsutism in females with polycystic ovary syndrome. Maintaining healthy lifestyle factors, regular exercise and a good dietary pattern also helps to reduce DNA damage. The polycystic ovary syndrome (PCOS) is a hyperandrogenic disorder associated with chronic oligo-anovulation and polycystic ovarian morphology. Women with PCOS have an increased risk of miscarriage, gestational diabetes, preeclampsia and preterm labour. The pathogenesis of PCOS is however unclear but is thought to be multifactorial, consisting of endocrine, metabolic, genetic and environmental factors. The present study consists of 40 study subjects with PCOS and 35 healthy control subjects. Detailed demographic, clinical and lifestyle characteristics were recorded and compared with other clinical parameters.
The association of various physiological and lifestyle factors which lead to oxidative stress was analysed by evaluating MDA concentration, and subsequent DNA damage, if any, was quantified by the Cytokinesis-block Micronucleus (CBMN) assay. Study subjects demonstrated a statistically significantly higher serum MDA level and mean CBMN frequency than the control subjects. Subjects with abnormal biochemical, physiological and endocrinological characters showed increased mean CBMN frequency. These findings denote that there is strong evidence of genetic instability among subjects with PCOS. Hence it can be concluded that oxidative stress and somatic DNA damage may play a major role in the risk of PCOS and further complications. Optimal metabolic control and lifestyle modification can reduce the unwanted clinical symptoms of infertility. Healthy lifestyle factors, including exercise, are associated significantly with reduced DNA damage. The close association between oxidative stress and lifestyle-related diseases has become well known. Oxidative stress has been defined as harmful because oxygen free radicals attack biological molecules such as lipids, proteins and DNA. Lipid degradation occurs, forming products such as malondialdehyde (MDA) and ethane that are commonly measured as end products of lipid peroxidation. Human DNA is continuously exposed to free-radical attack. The majority of DNA damage occurs in human beings in response to oxidative stress (OS). The pathogenesis of PCOS is however unclear but is thought to be multifactorial, consisting of endocrine, metabolic, genetic and environmental factors. Even though the root cause behind PCOS is unknown, further research is needed to evaluate the predisposing factors, particularly the genetic background and environmental factors, such as endocrine disruptors and lifestyle, that increase the risk of PCOS.
Hence the present study was undertaken to evaluate the association of various physiological and lifestyle factors which leads to oxidative stress and subsequent DNA damages in women with PCOS. Materials and Methods:- The study subjects comprised of forty women in the age group of 20 to 36 years with a clinical diagnosis of PCOS referred from various gynecology and infertility centers of Kerala to Genetika, Centre for advanced Genetic studies, Trivandrum. Twenty healthy study subjects having regular menstruation and without any chronic illness were selected as control for this study. Various demographical, physiological, life style, clinical and biochemical characteristics of the subjects were analysed. Venous blood was collected aseptically from all the subjects. 5 ml of venous blood was collected from all the subjects by venipuncture. From that, 2 ml blood was transferred to a sterile vacuutainer and was used for CBMN assay and remaining 3 ml blood was transferred in to a plain tube and allowed to clot. The biochemical parameters such as Fasting Blood Sugar, Total cholesterol, Triglyceride, HDL-C and LDL-C were estimated by enzymatic method. Hormones viz. Luteinizing hormone (LH), follicle stimulating hormone (FSH), prolactin and estradiol were also performed. MDA was performed by TBA method to evaluate the oxidative stress. Cytokinesis-block Micronuclei Assay (CBMN) assay was performed by using Cytochalasin-B for quantitating the extent of somatic DNA damages. Observations and Results:- The demographic and physiological findings of 40 study subjects were compared with 35 control subjects. The age of the study subjects ranged from 20 to 36 years with a mean age of 28.2 years and age of control subjects ranged from 17 to 35 years with mean age of 27.05 years. Family history of PCOS was reported in 8 out of 40 study subjects. Family history of infertility/sub-fertility was reported in 3 study subjects. 
The history of chronic illness was reported among 2 out of 40 study subjects. The majority of the study subjects (n=38) attained menarche on or before 16 years of age and the remaining two had menarche at an age above 16 years. Out of the 40 study subjects, irregular menstruation was reported in 9 subjects. Five study subjects had endometriosis. The study subjects showed a statistically significantly higher MDA level than the control subjects (t = 5.477; p < 0.00001). The CBMN analysis revealed that the study subjects showed a mean CBMN frequency of 13.18 ± 0.783 and control subjects showed a mean CBMN frequency of 10.88 ± 0.384 (t = 13.032; p < 0.00001). The study revealed that the mean CBMN frequency increases with increased age. Among the 40 study subjects, those aged between 32 and 37 years showed the highest mean CBMN frequency (13.31). Study subjects belonging to birth order >5 showed an increased mean CBMN frequency (13.3). Based on residence of these study subjects, the highest mean CBMN frequency was observed among subjects who belonged to a coastal area (13.34). Moreover, increased mean CBMN frequency was observed among subjects with a sedentary type of occupation and subjects with a higher level of socio-economic status. Significantly increased mean CBMN frequency was observed in subjects with a family history of (FH/o) PCOS compared to that of subjects without FH/o PCOS. Similarly, increased mean CBMN frequency was observed in subjects with a history of (H/o) chronic illness. Study subjects with the FH/o infertility/sub-fertility showed a high mean CBMN frequency of 13.41. The mean CBMN frequency of subjects with endometriosis was higher than that of the subjects without endometriosis. Among the study subjects, 31 subjects reported regular menstrual periods and 9 reported irregular menstrual periods. Subjects with irregular menstrual periods showed a higher MDA level than the subjects with regular menstrual periods.
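The group comparisons quoted above (e.g. t = 5.477 for MDA, t = 13.032 for CBMN frequency) are two-sample Student's t-tests. A minimal sketch of the pooled-variance statistic, run on hypothetical illustrative values (not the study's raw data):

```python
import math

def two_sample_t(a, b):
    """Student's two-sample t statistic with pooled variance, as used for
    comparing a study group mean against a control group mean."""
    na, nb = len(a), len(b)
    ma, mb = sum(a) / na, sum(b) / nb
    va = sum((x - ma) ** 2 for x in a) / (na - 1)   # sample variances
    vb = sum((x - mb) ** 2 for x in b) / (nb - 1)
    sp2 = ((na - 1) * va + (nb - 1) * vb) / (na + nb - 2)  # pooled variance
    return (ma - mb) / math.sqrt(sp2 * (1 / na + 1 / nb))

# Hypothetical illustrative numbers only -- not the study's measurements.
mda_pcos = [4.1, 4.6, 3.9, 4.4, 4.8, 4.2, 4.5, 4.0]
mda_ctrl = [3.0, 3.3, 2.9, 3.2, 3.1, 3.4, 2.8, 3.0]
t = two_sample_t(mda_pcos, mda_ctrl)   # large positive t => higher mean MDA
```

The p-value is then read from the t distribution with n_a + n_b − 2 degrees of freedom; a large |t|, as reported in the study, corresponds to p far below 0.00001.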
CBMN analysis revealed that subjects with irregular menstrual periods also showed a higher mean CBMN frequency than the rest. The subjects who had menarche at age >16 years had a higher mean CBMN frequency (13.19) than subjects with menarche at the age ≤16 years. Moreover, subjects with these risk factors showed a higher MDA level than the subjects without these risk factors. A positive correlation was also observed between mean CBMN frequency and MDA level. Fourteen out of forty subjects reported obesity, and these subjects showed increased mean CBMN frequency and MDA level (Table: 1). The study clearly demonstrated that the mean CBMN frequency was higher among subjects who had increased FBS, Total Cholesterol, Triglyceride and LDL-C, while subjects with a low level of HDL-C showed increased mean CBMN frequency (Table: 2). Hormonal analysis revealed that subjects with increased FSH and LH showed higher mean CBMN frequency. Moreover, subjects with increased FBS, Total Cholesterol, Triglyceride and LDL-C and decreased HDL-C level showed increased MDA values. Increased MDA level was also observed among subjects with increased FSH and LH levels. The present study also observed that subjects who are obese showed increased MDA concentrations. Patients with PCOS have higher gonadotropin releasing hormone (GnRH), which in turn results in an increase in the LH/FSH ratio in females with PCOS. The majority of patients with PCOS have insulin resistance and/or obesity (Kabel, 2016). The present study observed that, among 40 study subjects, 29 subjects showed increased FSH level and 3 subjects showed increased LH level. Moreover, increased mean CBMN frequency was observed in subjects with elevated levels of LH and FSH. According to Josey et al. (2017), the highest mean CBMN frequency of 13.38 was shown by 35 subjects (46.6%) of age between 29 and 36 years.
According to Tehrani et al. (2015), PCOS is a problem with hormones that affects women during their childbearing years (ages 15 to 44). The present study is in agreement with the above mentioned studies. Moreover, the present study also revealed that the mean CBMN frequency increased with increase in age. A study done by Arun et al. (2016) observed a relationship between birth order and the extent of somatic DNA damage in PCOS and reported that the highest mean CBMN frequency (13.8) was demonstrated by subjects with birth order >6. According to Josey et al. (2017), subjects with increased birth order showed an increased mean CBMN frequency (13.7). Similarly, this study also observed that mean CBMN frequency increased with increase in birth order. The highest mean CBMN frequency (13.3) was shown by subjects with >6 birth order. Mokhtar et al. (2006) revealed that females with an age of menarche of more than 15 years were more likely to develop infertility than those with an age of menarche of less than 15 years. In the present study, the subjects who had delayed menarche showed increased mean CBMN frequency. Conclusions:- In short, the majority of the PCOS subjects showed abnormal biochemical, physiological and hormonal parameters. The study observed a statistically significantly increased serum MDA concentration and mean CBMN frequency. Moreover, subjects with abnormal biochemical, physiological and endocrinological characters showed increased MDA concentration and mean CBMN frequency. There is a positive correlation between these risk factors and MDA concentration and mean CBMN frequency. The findings imply that there is strong evidence of genetic instability. DNA damage may play a major role in the risk of PCOS and further complications, especially cardiovascular risks. The present study highlights the molecular and physiological association of oxidative stress among women with PCOS.
This study provided significant evidence that oxidative stress might play a crucial role in the pathogenesis of PCOS; hence, oxidative stress parameters could be suggested as diagnostic markers for early identification of high-risk groups. Lifestyle modifications and proper dietary management to reduce overweight and obesity would help reduce the unwanted clinical symptoms and hirsutism in females with polycystic ovary syndrome. Maintaining healthy lifestyle factors, regular exercise and a good dietary pattern would also help to reduce DNA damage.
Electrosynthesis of Al(OH)3 by Al(s)|KCl(aq)||KCl(s)|C(s) system Electrosynthesis of Al(OH)3 using an Al(s)|KCl(aq)||KCl(aq)|C(s) system was carried out. The presence of electrolyte strongly influenced the electrolysis result. The purpose of this work was to investigate the purity of the Al(OH)3 product as a function of KCl concentration using XRD characterization. Electrolysis was performed in a 2-compartment system, using an aluminium plate electrode as anode, a carbon electrode as cathode, and KCl electrolyte solutions at concentrations of 0.25, 0.30, 0.35, 0.40 and 0.45 M. The electrolysis was done at room temperature and a potential of 12 V for 6 h. The electrolysis products were characterized using XRD, and the thermal properties of the best product were determined using TGA-DSC. The results showed that electrolysis of the Al(s)|KCl(aq)||KCl(aq)|C(s) system with varied electrolyte concentrations gave white precipitates with masses of 50.6, 51.8, 56.1, 64.9 and 97.2 mg, respectively. XRD characterization of the precipitates showed that the purest Al(OH)3 was obtained at a KCl concentration of 0.3 M, while a KCl concentration of 0.45 M yielded KAlOCl2, and the other KCl concentrations gave mixtures of Al(OH)3 and KAlOCl2. Thermal analysis of the best electrolysis product using TGA-DSC showed a total sample mass loss of 45.47% and two endothermic peaks: at 270.08°C, the transformation of Al(OH)3 to AlOOH (boehmite) with an absorbed energy of 2.47 kJ/mol, and at 660.28°C, the phase change of AlOOH (boehmite) to γ-Al2O3 with an absorbed energy of 0.23 kJ/mol. At temperatures above 800°C there was no further mass loss, indicating the formation of a stable Al2O3 phase. In sum, the purity of the Al(OH)3 product was strongly influenced by the KCl concentration, and increasing temperature transformed the Al(OH)3 sample to AlOOH (boehmite) and γ-Al2O3.
Introduction
Al(OH)3 is a metal hydroxide widely used as a flame-retardant additive for polymers because its thermal degradation absorbs heat (endothermic), releases water, and forms an oxide film of Al2O3 at the polymer surface [1]. Electrosynthesis is an electrolysis method that uses electricity to drive chemical reactions, in this case redox reactions, such as the electrosynthesis of Al(OH)3. The method has advantages such as using small amounts of reactant and a simple, fast process [2]. Synthesis of Al(OH)3 was done by Tchomgui-Kamga et al. [3] with an electrolysis method using a 1-compartment system. An aluminium metal electrode was used with successive electrolyte solutions of Al2(SO4)3, Al(NO3)3, AlCl3, (NH4)2SO4, NH4Cl, (NH4)HCO2, Na2SO4, NaNO3, NaCl, NaClO4, Na2C2O4 and NaCH3CO2. Bayerite, Al(OH)3, was found in all solutions, with some exceptions. [4] did an electrolysis in a 1-compartment system using an aluminium anode in electrolyte solutions of NaCl, KCl, NaNO3 and NaNO2 with concentration variations of 0.5 to 5 g/L. The result was Al(OH)3, and a higher electrolyte concentration increased the efficiency of the resulting coagulation. In this work we used a different system from previous research: since in a 1-compartment cell the main and side products mix together and make product characterization difficult, a 2-compartment system was used [5,6]. It contained aluminium as anode in one compartment and graphite as cathode in the other, connected by a salt bridge. Yan et al. [7] carried out a synthesis using a 2-compartment system, namely the separation of Al(OH)3 and NaOH from NaAl(OH)4 solution with ruthenium-coated titanium electrodes. Graphite has the advantage of inertness, so it is not easily oxidized or reduced, while the aluminium anode serves as the Al3+ ion source through its oxidation reaction.
Electrolyte solutions of KCl were used, as in the 1-compartment studies, since KCl has good conductivity; here the KCl concentration was varied (0.25, 0.30, 0.35, 0.40, 0.45 M). The purpose of this work was to obtain Al(OH)3 by electrosynthesis, to characterize the purity of the products using XRD, and to determine the thermal properties of pure Al(OH)3 using TGA-DSC.
Experiment methods
200 mL of KCl solution (concentrations of 0.25, 0.30, 0.35, 0.40, 0.45 M) was poured into the 2-compartment system, with an aluminium electrode (9 cm x 4 cm) in the anode compartment and 8 graphite rod electrodes (from batteries) in the cathode compartment; the distance between the electrodes was 2.5 cm. The two compartments were connected with a salt bridge. The electrolysis was run for 6 h at a potential of 12 V, and the product obtained was Al(OH)3. The product was then filtered, dried and characterized using XRD (Shimadzu Maxima 7000) to identify the crystalline phases in the Al(OH)3 sample. TGA-DSC (Perkin Elmer 6000) was used to investigate the decomposition temperature and thermal properties of the product.
Results and Discussion
In the electrolysis process the aluminium metal anode dissolves to form Al3+ [4], while graphite is inert and remains stable; therefore, in the cathode compartment water is reduced to hydrogen gas. The reduction and oxidation reactions that occur in the electrolysis process are as follows [4]:
Al³⁺(aq) + H₂O(l) → Al(OH)²⁺(aq) + H⁺(aq)
Al(OH)²⁺(aq) + H₂O(l) → Al(OH)₂⁺(aq) + H⁺(aq)
Al(OH)₂⁺(aq) + H₂O(l) → Al(OH)₃(s) + H⁺(aq)
Overall reaction:
Al³⁺(aq) + 3H₂O(l) → Al(OH)₃(s) + 3H⁺(aq)
The K⁺ cations flow to the cathode and react with OH⁻ from the water reduction reaction to form KOH, as follows [8]:
K⁺(aq) + OH⁻(aq) → KOH(aq)
The formation of KOH was proven by the increase of the solution pH in the cathode compartment.
Effect of KCl concentration on electrolysis products
Increasing the KCl concentration increases the current that flows through the electrolysis cell [9] and, as a consequence, the mass of precipitate obtained, as shown in Fig. 1.
TGA-DSC
The electrosynthesis product from 0.3 M KCl, identified as Al(OH)3 by the XRD analysis above, was characterized using a TGA-DSC instrument with heating from 50°C to 950°C at a rate of 10°C/min. The thermogram of the sample is depicted in Fig. 3: the DSC curve shows two endothermic peaks and the TGA curve shows the mass loss of the sample. The total sample mass loss is 45.47%. The hydroxyl release process occurs at 100-350°C [10], with the first endothermic peak at 270.08°C, an absorbed energy of 31.74 J/g or 2.47 kJ/mol, and a total absorbed energy of 231.79 mJ. This peak corresponds to the transformation from Al(OH)3 (bayerite) to AlOOH (boehmite) [11]. The transformation from AlOOH (boehmite) to γ-Al2O3 occurs at 550-800°C [11], with the second endothermic peak at 660.28°C, an absorbed energy of 2.99 J/g or 0.23 kJ/mol, and a total absorbed energy of 21.89 mJ. At temperatures above 800°C there is no further mass loss, which indicates the formation of the stable Al2O3 phase. Theoretically, the thermal decomposition of Al(OH)3 to Al2O3 absorbs 173.61 kJ/mol, while the value from characterization is 2.71 kJ/mol; this difference is attributed to impurities in the synthesized Al(OH)3. The combustion of methane gas releases 802.36 kJ/mol, so some moles of Al(OH)3 can absorb the released energy and retard methane combustion. Thermal decomposition of Al(OH)3, as analyzed by TGA-DSC, proceeds endothermically; the energy it absorbs increases fire retardation, since the water released from Al(OH)3 dilutes the flammable gas. The decomposition reaction of Al(OH)3 is as follows [12]:
2Al(OH)3 → Al2O3 + 3H2O (6)
Besides that, a thermally stable Al2O3 layer also forms on the surface of the burning material.
Conclusion
White precipitates with different masses were formed in the electrolysis processes using an aluminium anode with varied KCl solutions.
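The conversions above from the instrument's specific energies (J/g) to molar energies (kJ/mol) follow directly from the molar mass of Al(OH)3. A quick arithmetic check, using standard atomic weights (an assumption, since the text does not state the molar mass used):

```python
# Molar mass of Al(OH)3 from standard atomic weights (assumed values)
M_ALOH3 = 26.98 + 3 * (16.00 + 1.01)  # about 78.0 g/mol

def specific_to_molar(e_j_per_g, molar_mass):
    """Convert a specific absorbed energy (J/g) into kJ/mol."""
    return e_j_per_g * molar_mass / 1000.0

peak1 = specific_to_molar(31.74, M_ALOH3)  # first endothermic peak (270.08 C)
peak2 = specific_to_molar(2.99, M_ALOH3)   # second endothermic peak (660.28 C)
print(round(peak1, 2), round(peak2, 2))    # 2.48 0.23 (the text rounds the first to 2.47)
```

Both values reproduce the figures quoted in the text to within rounding.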
Purity analysis using XRD found that the Al(OH)3 precipitate was obtained in electrolysis using 0.3 M KCl, while KAlOCl2 was formed in the 0.45 M KCl solution; mixtures of Al(OH)3 and KAlOCl2 were formed at the other KCl concentrations. Thermal analysis using TGA-DSC of the Al(OH)3 from electrolysis in 0.3 M KCl shows a total mass loss of 45.49% and two endothermic peaks: at 270.08°C, the transformation of Al(OH)3 to AlOOH (boehmite) with an absorbed energy of 2.47 kJ/mol, and at 660.28°C, the phase change of AlOOH (boehmite) to γ-Al2O3 with an absorbed energy of 0.23 kJ/mol. At temperatures above 800°C there is no mass loss of the Al(OH)3 sample, due to the formation of the thermally stable Al2O3 phase.
Semantic clustering of Russian web search results: possibilities and problems The paper deals with word sense induction from lexical co-occurrence graphs. We construct such graphs on large Russian corpora and then apply this data to cluster Mail.ru Search results according to the meanings of the query. We compare different methods of performing such clustering and different source corpora. Models of applying distributional semantics to big linguistic data are described.
Introduction
The presented paper deals with the problem of semantic clustering of the search engine results page (SERP). The problem arises from the obvious fact that many user queries are ambiguous in some way. Thus, search engines strive to diversify their results and to present results that are related to as many query interpretations as possible. For example, a Google search for the Russian word 'максим' returns: 1. five results related to a popular singer, 2. two results for a magazine, 3. one result for http://lib.ru, Maxim Moshkow's electronic library, 4. one result for a proper name. However, these results are not sorted by their meaning and are returned simply according to their relevance ranking, which for many of them seems to be almost equal. The obvious way to cluster the results is by the words their snippets share. Unfortunately, snippets for results belonging to one query sense often do not have a single content word in common (except for the query itself, which is useless). Cf. two snippets for the first query meaning from the example above: 1. 'МакSим начинает самостоятельно заниматься своей карьерой, пишет новые песни. В этот период певица выступает как малобюджетный проект, ...' 2. 'МакSим презентовала видеоклип «Я буду жить», получивший широкую огласку еще до момента появления видео в сети.' They do not have a single common word, but still belong to one meaning (popular singer). Moreover, snippets for different query senses can share some words. Cf.
two snippets from the same search engine results page. They share the word 'автор' ('author'); however, the first snippet relates to the first meaning, while the second shows the third one: 1. 'МакSим (Марина Абросимова) -одна из самых популярных и коммерчески успешных певиц в России, являющаяся автором и исполнителем...' 2. 'Работает с 1994 года. Книги и тексты, разбитые по жанрам и авторам.' This means that a more sophisticated way to cluster search results is needed. We should somehow learn which senses the query has and with which words these senses are (probabilistically) associated. One of the possible ways to solve this problem is by extracting co-occurrence statistics from large corpora. The idea behind this is that word meaning is in fact the sum (or the average) of its uses, so meaning is a function of distribution (cf. [1]). Thus, if we know with which words the query typically co-occurs and how these neighbors are related to each other, then we know the 'sense set' of the query. After that we can somehow measure the semantic similarity of each search snippet on the SERP to each of the senses and map them to each other. This information can then be used either to rank the results or to mark them with appropriate labels. The structure of the paper is as follows. In Section 2 we briefly overview work previously done on the subject. Section 3 describes the process of building co-occurrence graphs from large Russian corpora. In Sections 4 and 5 we conduct an experiment on clustering SERPs with ambiguous queries from the Mail.ru search engine with the help of the methods described before. The results are evaluated in Section 6. Section 7 concludes and provides suggestions for further research.
Related Work
As stated in the previous Section, we are inspired by the fundamental hypothesis that meaning depends on distribution [1] and that the frequency of linguistic phenomena (in our case, word co-occurrence) is important for determining these phenomena's place in the system of language [2]. Our work is also based on the idea that the senses of ambiguous lexical units should be induced from the data itself, not from a dictionary. No dictionary is perfect or comprehensive, because 'senses as identified in the dictionary identify points on a continuum of possibilities for how the word is used' [5]. The only robust source of words' meanings in the text is the text itself. That is why we shift our focus away from selecting the most suitable senses from a pre-defined inventory towards discovering senses automatically from the raw data, which is natural text. One of the first notes on the practical application of this idea to word sense disambiguation and word sense induction is found in [3], where vector representations of word similarity derived from co-occurrence data are used. A broad review of the state of the field (as of 2012) is provided in [4]. The main source of methods for our present research is [6], which describes a workflow for clustering web search results using graph analysis over co-occurrence networks. Specifically, we use the notion of the query graph, consisting of query terms and words from the search engine results page, augmented with nearest neighbors and relations from a reference corpus. For partitioning the query graph and clustering query senses we employed the Curvature algorithm [6] and the Hyperlex algorithm proposed in [7].
Building Co-Occurrence Graph
The first thing we had to do was to select a text corpus to build the graph upon. It is well known that the larger the corpus, the more co-occurrence information it contains. However, increasing corpus size also leads to exponentially growing computation time.
Thus, for the sake of time and because of the preliminary nature of our research, we restricted ourselves to three Russian corpora of smaller but still decent size: 1. Open Corpora 1 (1 million tokens), further 'OC'; 2. A disambiguated fragment of the Russian National Corpus 2 (1 million tokens), further 'RNC'; 3. A corpus of random search queries from the Mail.ru search engine 3 (2 million tokens), further 'QC'. The first two items are academic corpora of Russian texts, supposedly representing (written) language in general. They differ in that the first one consists of full texts published under various free and open licenses, while the second one is a random sample of sentences from the larger Russian National Corpus. Both of them come with morphological annotation. The third corpus was taken for comparison. It is important in view of the aim of our research (to test semantic SERP clustering): our intuition was that perhaps a query corpus provides a more 'real-life' sense inventory. It is twice as big as its counterparts because the 'connectivity' between its members is lower (see Table 2) and we had to compensate for this. At the same time, it turned out that the first two corpora mixed into one give better results, so below we will often refer to this 'meta-corpus' as the 'Mix corpus'. Before constructing the graph itself, we preprocessed the corpora, namely: 1. Removed from QC all queries which did not contain Cyrillic characters (as apparently they are not Russian), 2. Processed QC with the Freeling analyzer [8] to extract lemmas and morphological information for all tokens, 3. Removed stop words, 4. Removed all tokens except nouns, as we restrict ourselves to inducing only nominal senses (the same strategy was applied in [6]). Sizes of the preprocessed corpora are given in Table 1. The average query length in QC is 2.47 noun tokens per query.
After the corpus has been built, the process of constructing the co-occurrence graph is rather straightforward: we create an empty graph and then populate it with vertexes denoting word types in the text (lemmas). After that, for each lemma we find all its immediate neighbors in the corpus, that is, words to the left and to the right (sentence boundaries are not crossed, and queries are considered to be 'sentences' as well). If two lemmas were neighbors at least once, we draw an edge between the corresponding vertexes. Finally, we have an undirected graph in which noun lemmas are vertexes and co-occurrence relations are edges. For each edge we also calculate the Dice coefficient [9]. It measures the 'strength' of the collocation, based on the absolute frequencies c(w) and c(w') of both words and the frequency c(w,w') of the collocation:
Dice(w,w') = 2 * c(w,w') / (c(w) + c(w'))
One can also think about the graph as a matrix of Dice coefficient values for all possible pairs of lemmas in the corpus. Table 2 gives an overview of the basic features of the graphs. One can see that the average degree of QC is lower in comparison with the other corpora (because queries are typically shorter than sentences in natural texts). That is one of the reasons for our decision to use a larger query corpus. It should also be noted that all corpora comply with the 'small world' definition [10], because their average path length is approximately the same as in a random graph with the same number of vertexes (N) and average degree (k), while their clustering coefficient is significantly higher than it would be in such a random graph. For example, if the Mix corpus were a random graph, its average path length would be equal to 3.24 (≈ ln(N) / ln(k)), very close to the actual value. However, in this case its clustering coefficient should be about 0.0015 (≈ k / N), which is significantly lower than the actual value. The same is true for all other corpora. The 'small world' nature of our graphs means that vertexes in them tend to bundle into clusters, which is typical of many real-world networks.
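The graph construction just described (immediate-neighbor edges weighted by the Dice coefficient) can be sketched in a few lines; the toy 'corpus' below is purely illustrative:

```python
from collections import Counter

def cooccurrence_graph(sentences):
    """Build an undirected co-occurrence graph from lemmatized sentences.

    Vertexes are lemmas; an edge links two lemmas that were immediate
    neighbors at least once, weighted by the Dice coefficient
    2*c(w,w') / (c(w) + c(w')).
    """
    word_freq, pair_freq = Counter(), Counter()
    for sent in sentences:
        word_freq.update(sent)
        for w, w2 in zip(sent, sent[1:]):  # immediate neighbors only
            if w != w2:                    # ignore repeated lemmas
                pair_freq[frozenset((w, w2))] += 1
    edges = {pair: 2 * c / sum(word_freq[w] for w in pair)
             for pair, c in pair_freq.items()}
    return word_freq, edges

# Toy lemmatized 'corpus' (queries are treated as sentences too)
corpus = [["amur", "river"], ["amur", "river", "fish"], ["amur", "movie"]]
freq, edges = cooccurrence_graph(corpus)
print(edges[frozenset(("amur", "river"))])  # 2*2 / (3 + 2) = 0.8
```

The `frozenset` keys make the edge set symmetric, matching the undirected graph in the text.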
This finding supports the idea of extracting senses from such clusters. It also additionally proves the applicability of graph-based sense induction methods to our corpora, as the English-language graphs in the related publications also showed such properties.
Building Query Graph
We experimented with clustering search engine results pages on a set of sixty ambiguous one-word Russian queries, taken from the Analyzethis homonymous queries analyzer 4 . Analyzethis is a search engine evaluation initiative, offering various search performance analyzers, including one for ambiguous or homonymous queries. We crawled Mail.ru search for these queries, getting titles and snippets (10 for each result). The procedure of semantic clustering starts with building the so-called query graph. Here we closely follow [6]. First, we lemmatize all snippets and titles and remove stop words and the query word itself. Then we construct a graph with all nouns from snippets and titles as vertexes. Then we use one of the large corpus graphs (those that we built in Section 3) to find words strongly connected to the query word and add these words to the query graph. We consider a connection 'strong' if it falls under certain frequency and Dice-coefficient constraints, where c is absolute frequency in the corpus, q is the query and w is the word under analysis. The thresholds 0.01 and 0.005 were determined empirically while experimenting on the above-mentioned ambiguous queries set; they produced the most convincing sense clustering. However, the issue of choosing the thresholds is a subject for thorough evaluation in the future. Thus, we now have a graph with no edges and a vertex set consisting of words from the search results and strong neighbors of the query word. After that, for each pair of words (w,w') in the graph we check whether they co-occur in the large corpus. If they do and Dice(w,w') ≥ 0.005, we connect these words with an edge of weight Dice(w,w'). Finally, we delete disconnected vertexes (those with degree equal to 0).
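A sketch of the query-graph assembly described above. Treating the 'strong neighbor' test as a plain Dice cutoff is our assumption, since the exact constraint formula is given in the text only via its thresholds:

```python
def build_query_graph(snippet_lemmas, corpus_dice, query,
                      neighbor_thr=0.01, edge_thr=0.005):
    """corpus_dice maps frozenset({w, w'}) -> Dice value from the big corpus."""
    # Strong corpus neighbors of the query (assumption: a plain Dice cutoff)
    neighbors = {w for pair, d in corpus_dice.items()
                 if query in pair and d >= neighbor_thr
                 for w in pair if w != query}
    vertices = (set(snippet_lemmas) | neighbors) - {query}
    # Edge between any vertex pair whose corpus Dice clears the threshold
    edges = {pair: d for pair, d in corpus_dice.items()
             if d >= edge_thr and pair <= vertices}
    # Keep only vertexes that ended up with at least one edge
    connected = {w for pair in edges for w in pair}
    return connected, edges

corpus_dice = {frozenset(("amur", "river")): 0.8,
               frozenset(("amur", "movie")): 0.5,
               frozenset(("river", "fish")): 0.4,
               frozenset(("movie", "actor")): 0.02}
verts, qedges = build_query_graph(["movie", "actor", "fish"], corpus_dice, "amur")
print(sorted(verts))  # ['actor', 'fish', 'movie', 'river']
```

Note that the query vertex itself is excluded, exactly as in the paper's procedure.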
With the query graph at hand, we are ready to find which senses the query has. What we need is an optimal partition of the query graph, in which words related to different senses end up in different parts of the graph. We apply two techniques for that, namely Curvature from [6] and Hyperlex from [7]. The Curvature algorithm aims at finding vertexes of the query graph with a low local clustering coefficient. Our hypothesis is that these are words which serve as 'links' between different senses or 'uses' of the query. We then remove vertexes with a clustering coefficient below a certain threshold. This leads to the graph disjointing into several components related to different senses. Vertexes in these components represent the lexical inventory of each sense. Disconnected vertexes are removed from the final graph.
Curvature
Let us illustrate the process with the example of the 'амур' ('Amur') query. Figure 1 shows its query graph. It is already disconnected into two components, and the meaning of love god (associated with the words 'лук', 'стрела' and 'юноша') is separated. However, other 'senses' of the query remain hidden in the giant component. Vertexes shown as triangles have a low clustering coefficient and are thus marked for deletion. So, we delete the 'triangular' vertexes. Note that we chose the threshold 0.3: all vertexes with a clustering coefficient below it are removed. It is also important that we do not delete vertexes with clustering coefficient = 0. This is because the neighbors of such a vertex are not connected to anything except this vertex: if we removed it, a lot of disconnected vertexes would appear, and such one-word clusters do not make much sense. For example, the word 'лук' in Figure 1 has clustering coefficient = 0; if we removed it, the whole component representing the 'love god' meaning would disappear. Figure 2 shows the query graph after removing vertexes with a low clustering coefficient.
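The Curvature removal step can be sketched in pure Python. This is an illustrative implementation under our own data-structure assumptions (the authors do not publish code); it keeps coefficient-0 vertexes and drops vertexes left isolated, as the text specifies:

```python
from itertools import combinations

def clustering_coeff(adj, v):
    """Local clustering coefficient: the fraction of pairs of v's
    neighbors that are themselves connected."""
    nbrs = adj[v]
    if len(nbrs) < 2:
        return 0.0
    links = sum(1 for a, b in combinations(nbrs, 2) if b in adj[a])
    return links / (len(nbrs) * (len(nbrs) - 1) / 2)

def curvature_clusters(adj, threshold=0.3):
    """adj maps vertex -> set of neighbors (undirected). Remove 'link'
    vertexes with a low but nonzero clustering coefficient, drop the
    vertexes left isolated, and return the connected components."""
    doomed = {v for v in adj if 0 < clustering_coeff(adj, v) < threshold}
    kept = {v: adj[v] - doomed for v in adj if v not in doomed}
    kept = {v: n for v, n in kept.items() if n}  # drop isolated vertexes
    seen, components = set(), []
    for v in kept:
        if v in seen:
            continue
        comp, stack = set(), [v]
        while stack:
            u = stack.pop()
            if u not in comp:
                comp.add(u)
                stack.extend(kept[u] - comp)
        seen |= comp
        components.append(comp)
    return components

# Two triangles {a,b,c} and {d,e,f} joined by a low-curvature vertex x
adj = {"a": {"b", "c", "x"}, "b": {"a", "c"}, "c": {"a", "b"},
       "d": {"e", "f", "x"}, "e": {"d", "f", "x"}, "f": {"d", "e"},
       "x": {"a", "d", "e", "p"}, "p": {"x"}}
print(sorted(sorted(c) for c in curvature_clusters(adj)))
# -> [['a', 'b', 'c'], ['d', 'e', 'f']]
```

Here x has coefficient 1/6 and is removed, splitting the graph into two sense components, while the pendant p is dropped as disconnected.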
We now have 6 components (note that the labels for these clusters are introduced by us, not by the algorithm): 1. River (all vertexes except those enumerated below) 2. Love god ('юноша, лук, стрела') 3. Hockey club ('клуб, болельщик') 4. Movie ('любовь, фильм') 5. Dictionary-1 ('календарь, словарь, википедия, энциклопедия, академик, сюжет') 6. Dictionary-2 ('значение, описание, характеристика') The first 4 components clearly represent different meanings of the word 'амур'. The last two are rather 'uses', typical contexts. However, they can still be useful in clustering, as they allow encyclopedic results to be kept together.
Hyperlex
The Hyperlex algorithm described in [7] introduces the notion of 'hubs' within the graph, meaning the most inter-connected vertexes, and employs the graph's maximum spanning tree. Just like the previous algorithm, it takes as input the query graph we prepared in Section 4 and the query itself. First we create a list L with all vertexes from the query graph sorted in decreasing order by their absolute frequency in the large corpus. Then for each item of this list we check whether the corresponding vertex complies with the following constraints: 1. The vertex's normalized degree is greater than or equal to 0.05, 2. The average Dice coefficient of the vertex's edges is greater than or equal to 0.007. If the constraints are met, we add this word to the hub list, considering it to be a kind of connector. Simultaneously, we remove this vertex and its neighbors from the list L and continue iterating. In case we meet a word which does not satisfy the requirements above, we check whether the list of hubs has at least two elements. If it does, we stop iterating; if not, we continue to the next item. Note that this differs from the original Hyperlex algorithm, where one should stop no matter how long the hub list is. On our Russian material the original rule sometimes caused the hub list to remain empty or contain only one item, which is useless.
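The hub-selection loop just described might look as follows. This is a sketch: the toy graph is ours, and we assume 'normalized degree' means degree / (N - 1), which the text does not state explicitly:

```python
def select_hubs(graph_adj, edge_dice, freq, deg_thr=0.05, dice_thr=0.007):
    """Hyperlex hub selection: walk vertexes in decreasing corpus
    frequency; a vertex becomes a hub if its normalized degree and the
    average Dice weight of its edges clear the thresholds. A hub's
    neighbors are skipped. Once a candidate fails and we already have
    at least two hubs, we stop (the paper's modification)."""
    n = len(graph_adj)
    order = sorted(graph_adj, key=lambda v: -freq.get(v, 0))
    skipped, hubs = set(), []
    for v in order:
        if v in skipped:
            continue
        nbrs = graph_adj[v]
        norm_degree = len(nbrs) / (n - 1)          # assumption: degree / (N-1)
        avg_dice = (sum(edge_dice[frozenset((v, u))] for u in nbrs) / len(nbrs)
                    if nbrs else 0.0)
        if norm_degree >= deg_thr and avg_dice >= dice_thr:
            hubs.append(v)
            skipped |= {v} | nbrs
        elif len(hubs) >= 2:
            break
    return hubs

adj = {"a": {"b", "c", "d"}, "b": {"a"}, "c": {"a"}, "d": {"a"},
       "e": {"f"}, "f": {"e"}}
dice = {frozenset(("a", "b")): 0.01, frozenset(("a", "c")): 0.01,
        frozenset(("a", "d")): 0.01, frozenset(("e", "f")): 0.02}
freq = {"a": 100, "b": 90, "e": 80, "f": 70, "c": 60, "d": 50}
print(select_hubs(adj, dice, freq))  # ['a', 'e']
```

The selected hubs are then linked to the query vertex with effectively infinite weight before the maximum spanning tree is built, as the next paragraphs explain.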
After we have the list of hubs, we augment the query graph with the query vertex and connect this vertex to all hubs, putting an infinite (or very high) Dice coefficient on the corresponding edges. Then we produce a maximum spanning tree from this graph. The maximum spanning tree is an attempt to keep all the vertexes connected while eliminating cycles and using as few edges as possible, with weights (in our case, Dice coefficients) as high as possible. In the spanning tree there is only one path between any two vertexes, and this path lies through edges with maximum Dice. Because the query vertex and the hubs are connected by edges with infinite Dice, they are sure to be the center of the spanning tree and directly linked. At last we remove the query vertex from the spanning tree, producing disjointed subtrees with hubs as roots. These subtrees represent query meanings. Note that we also delete all disconnected vertexes (those with degree = 0). Let us present an example of Hyperlex at work with the same query 'амур'. Our corpus is Mix. The initial state of the query graph is the same as in Figure 1. We add the word 'амур' to the graph and connect it to the vertexes selected as hubs: 'область, фильм, команда'. The result is presented in Figure 3, with the query vertex drawn as a diamond. For reference, vertexes which were introduced from the corpus and not from the search results ('сюжет', 'океан', etc.) are drawn as triangles. Now we produce the maximum spanning tree with the Dice coefficient as the weight measure. The tree is visualized in Figure 4. Note that it has much fewer edges than the initial graph. Finally, we remove the query vertex and all vertexes that become disconnected after this removal. As a result, we have a disjointed graph, shown in Figure 5. The number of components has grown from 2 to 4 (once again, the labels are assigned by us): 1. Love god ('юноша, лук, стрела') 2. Movie ('любовь, фильм') 3. Hockey club ('клуб, игра, болельщик, команда, цвет') 4.
River (all the remaining vertexes) One can see that Hyperlex successfully extracted the same four important meanings as the previous algorithm. At the same time, unlike Curvature, it managed to avoid the two 'encyclopedic' clusters (obviously common to too many queries) and left their vertexes in the 'river' cluster. Hyperlex is also better in that it describes the 'hockey club' cluster in a richer way, using 5 relevant words instead of 2. One can again note that what we call 'senses' are in fact not senses like the meanings in dictionaries. We agree with Jean Véronis, who argues that co-occurrence networks reflect 'uses' rather than senses. So what we have are typical environments where the word is used, and these environments are only loosely connected to what a lexicographer would call 'senses' or 'meanings'. However, we are fine with that, as we assume that clustering a SERP according to 'typical uses' is at least as important as clustering according to 'proper senses'. Perhaps these dictionary senses are in fact less related to real life, as even linguists sometimes have trouble matching the 'senses' found in a dictionary with the occurrences found in a corpus [7]. Additionally, as has already been stated, dictionary senses are always limited and by design cannot quickly cover the new semantic trends and subtle meanings appearing and disappearing in the modern world. Thus, theoretically, typical uses are more relevant for clustering than academic dictionary senses. To strictly prove this for Russian material one needs a manually clustered data set (see Section 6), and we leave it for further research.
Mapping Results to Senses
Once we possess the sense inventory for the query, we can combine it with bags-of-words for each search result to finally perform SERP clustering. We do that in a rather straightforward way.
Given a set of senses, each represented by a lemma set, and a set of results (snippet and title) also represented by lemma sets, for each pair of result R_i and sense S_j we calculate a similarity measure. It is simply the number of lemmas common to both sets divided by the number of lemmas in the result:
sim(R_i, S_j) = |R_i ∩ S_j| / |R_i|
Then we choose the sense with maximum similarity and link this sense to the result. Thus, each result receives some sense and is 'understood'. In the future we plan to explore other ways of calculating the similarity measure as well, for example counting tokens rather than types, or considering the weights on edges in the intersection. Generally, evaluation of clustering is a rather hard task. Perhaps the best way to do it is to employ human assessment, but for the time being we limited ourselves to a simple evaluation of the correctness of the cluster number (that is, the number of meanings).
Evaluation of SERP clustering
The Analyzethis service provides data about how many senses of an ambiguous query are present in the SERP. Thus we consider it an expert opinion and check how strongly we deviate from this 'gold standard'. For example, if Analyzethis believes that there are three senses present on the SERP and our clustering algorithm puts all the results into one cluster, this signals that the algorithm is not optimal. The same is true if the number of clusters is, for example, eight. The less our deviation from the Analyzethis assessment, the better. So, in fact, we check that the employed algorithms do not produce senseless results (too many or too few meanings). We note once again that in order to evaluate the contents of the clusters themselves, one needs manually clustered SERPs for ambiguous queries. To our knowledge, there is no such data set for Russian; we are working on creating one. For the time being, we compared the number of clusters for each of the ambiguous queries in four different settings (two corpora and two word sense induction methods).
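The sense-mapping step described earlier in this section (lemma overlap normalized by result size) reduces to one comparison per result. A minimal sketch with a hypothetical sense inventory:

```python
def map_results_to_senses(results, senses):
    """results: list of lemma sets (snippet + title); senses: dict
    sense_label -> lemma set. Each result is linked to the sense with
    the largest lemma overlap, normalized by the result's size."""
    mapping = []
    for res in results:
        best = max(senses, key=lambda s: len(res & senses[s]) / len(res))
        mapping.append(best)
    return mapping

# Hypothetical sense inventory and two result bags-of-words
senses = {"river": {"вода", "рыба", "берег"}, "movie": {"фильм", "любовь"}}
results = [{"фильм", "любовь", "сюжет"}, {"рыба", "берег", "вода", "мост"}]
print(map_results_to_senses(results, senses))  # ['movie', 'river']
```

Ties go to whichever sense `max` encounters first; a production version would need an explicit tie-breaking rule.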
Then we calculated the average deviation of our cluster numbers from those of Analyzethis. Table 3 provides the results of this comparison. Note that the average number of senses per query in the Analyzethis data set was 2.65. It is clear that Hyperlex consistently outperforms Curvature, and that the Mix corpus likewise outperforms the query corpus. Hyperlex's victory comes as no surprise, as it uses the maximum spanning tree notion, which seems to allow a deeper grasp of the graph structure. The victory of the Mix corpus (which is smaller than the query corpus) is much less expected. We believe that there are two reasons for this: 1. As we have already mentioned, the query corpus is less 'dense' because of the low length of queries. Thus, there are fewer edges and less data for the algorithms. 2. The query corpus was lemmatized with Freeling, while the Mix corpus consists of manually annotated corpora. Glitches and outright errors of Freeling could impact graph quality. This can be fixed in the future either by improving Freeling or by using another lemmatizer. Thus, at the moment, using the Mix corpus and the Hyperlex word sense induction algorithm seems to be the best option. However, things could well be different if we employed larger corpora (which we plan to do in the future).
Conclusion and future work
We showed that state-of-the-art methods of word sense induction and search results clustering based on semantic graphs do work for Russian data. Application of such methods can bring search engine results presentation closer to the actual semantics of the results, rather than simple term-frequency ranking. For a user, it would mean the possibility to immediately grasp which results in the SERP are actually related to the intended query sense, and which other senses exist. The power of this approach can be increased by wider adoption of the Semantic Web paradigm: semantically marked-up web pages are represented by generally better and clearer snippets.
Such snippets, in turn, should provide better data for graph-based word sense induction algorithms. We plan to experiment with more types of query graph processing and to launch a full-scale human evaluation of the results. It also seems profitable to use not only separate words but also compound phrases, and to construct graphs not only with immediate neighbors but also with second-order co-occurrences (neighbors of neighbors). Additionally, experiments with larger query corpora may lead to new and inspiring insights in this field.
Recover Iron from Bauxite Residue (Red Mud)

Red mud is a hazardous waste generated by alumina refining industries. Unless managed properly, red mud poses significant risks to local safety and the environment. Bayer red mud can be considered a low-grade iron ore with a grade of 5 wt% to 20 wt% iron. We adopted a reduction roasting-magnetic separation process to recover iron from red mud using an electromagnetic induction furnace. The effects of different parameters on the recovery rate of iron were studied in depth. The optimum reduction reaction conditions were obtained: 1 wt% of carbon in red mud, roasting at 1450℃ for 60 min, and a magnetic field intensity of about 0.19 T. The experimental results indicated that the grade of total iron and the iron recovery were 66.50% and 65.4%, respectively. The results prove that the electromagnetic induction furnace is more beneficial to iron recovery.

Introduction

Red mud is a kind of alkaline solid waste generated from the alumina refining of bauxite ore, which contains a certain amount of alumina, caustic soda, ferrotitanium oxides and a small amount of rare metals. The output, composition and characteristics of red mud vary with the type of bauxite and the production process, and red mud is generally divided into two types: Bayer red mud and sintering red mud. The production of 1 tonne of alumina generates about 1.5 tonnes of red mud [1]. With the increasing demand for alumina worldwide, the cumulative generation of red mud is estimated to be over 4 billion tonnes [2]. At present, the comprehensive utilization and management of red mud is still a worldwide problem. In the past, large amounts of red mud were discharged into the sea [3]. Today, red mud is mainly disposed of by stockpiling [4]. We have to recognize that this method not only allows the alkaline material of red mud to infiltrate the soil and groundwater, but also occupies a large amount of land resources. On the one hand, it pollutes the environment.
On the other hand, it poses a big safety risk [5]. We therefore hope to find a better way to efficiently recycle the valuable metals in red mud and to provide options for its management after disposal. Many attempts have been made to find an environmentally friendly and cost-effective method to dispose of or utilize red mud. Red mud can be used in building materials, valuable metal recovery and fillers [6,7,8]. Red mud can be considered a secondary raw material for the recovery of valuable substances. For instance, the metals in red mud have recently attracted research interest due to the increasing demand for and value of iron, aluminum, titanium, rare earth elements and other materials [9]. Recently, iron recovery has attracted major attention. However, it is difficult to recover metals from red mud because they are locked in complex mineral phases. Minerals in red mud include boehmite, hematite, aluminosilicates, sodalite, quartz, goethite, perovskite and cancrinite [10]. In spite of these issues, many researchers have carried out studies on the recovery of valuable materials from red mud. Although the high-pressure hydro-chemistry method is one of the most effective methods to recover caustic soda and alumina, the iron in red mud cannot be recovered with it [11]. The two main methods of recovering iron from red mud are based on a reduction roasting-magnetic separation process, and on smelting in an electric shaft furnace to produce pig iron. The development of suitable metallurgical processes for iron oxide recovery from red mud is important. In this study, we recover iron values using a reduction roasting process in an electromagnetic induction furnace, followed by magnetic separation. In this paper, the effects of different parameters on the recovery rate of iron were studied and optimized. Under the same conditions, experiments were carried out in an electric resistance furnace and in an electromagnetic induction furnace, respectively.
It is further examined whether the electromagnetic induction furnace is beneficial to the recovery of iron from red mud.

Materials

The Bayer red mud used in the experiments was obtained from an aluminum company in Henan Province, China. The sample was analyzed by XRF (X-ray fluorescence), and the result is listed in Table 1. The X-ray diffraction (XRD) pattern of the Bayer red mud is shown in Fig.1. Table 1 shows the chemical composition of the sample: 27.36% Al2O3, 23.21% SiO2, 14.90% CaO, 11.08% Fe2O3, 8.63% Na2O and 6.25% TiO2. According to the XRD pattern shown in Fig.1, the main phases of red mud are katoite (Ca3Al2(SiO4)(OH)8), andradite, calcite, hematite, muscovite and diaspore.

Experimental method

Five groups of experiments were designed to investigate the effects of different parameters on iron recovery from the red mud: magnetic field intensity, roasting time, roasting temperature, the ratio of carbon to red mud, and furnace type. The red mud was mixed with carbon and additive in specific proportions. The mixtures were pressed into pellets of 20 mm diameter and 20 mm thickness under 6 MPa pressure. The pellets were put in crucibles and roasted at high temperature in an electromagnetic induction furnace. After a given time, the roasted samples were cooled to room temperature. The products were ground and separated by a magnetic separator at a given magnetic field intensity. The experimental process is shown in Fig.2. The grade of total iron (TFe) was analyzed by a chemical method. The recovery rate of iron was then calculated according to the mass balance of the magnetic separation operation.

Experimental facilities

While most researchers have performed such experiments in a resistance furnace, the roasting process in this paper was performed in an electromagnetic induction furnace. The equipment is shown in Fig.3.
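The recovery rate calculated from the mass balance of the magnetic separation can be illustrated as follows; this is a generic sketch of the standard mass-balance formula, with variable names and example figures of our own choosing:

```python
def iron_recovery(m_feed, tfe_feed, m_conc, tfe_conc):
    """Iron recovery (%) from the magnetic separation mass balance:
    iron contained in the concentrate divided by iron in the feed.
    Masses in grams, TFe grades in percent."""
    return 100.0 * (m_conc * tfe_conc) / (m_feed * tfe_feed)


# e.g. 100 g of feed at 10% TFe yielding 10 g of concentrate at 65% TFe:
iron_recovery(100, 10, 10, 65)  # → 65.0 (% of the feed iron reports to the concentrate)
```

Note that recovery and concentrate grade pull in opposite directions: pulling more mass into the concentrate raises recovery but dilutes TFe, which is exactly the trade-off discussed for the magnetic field intensity below.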
Effect of magnetic field intensity on the recovery of iron

The magnetic field intensity has an important influence on the recovery rate of iron during separation by the magnetic separator. Samples consisting of red mud, active carbon and additive at a proportion of 100:18:6 were roasted at 1250℃ for 120 min, and the products were ground and separated by the magnetic separator. As shown in Fig.4, the recovery rate of iron rises rapidly with increasing magnetic field intensity, but the TFe in the concentrate drops. As the magnetic field intensity increases, some of the less magnetic iron minerals and impurity minerals in the material are also separated into the concentrate. As a result, the recovery rate of iron increases while the TFe in the concentrate decreases. Balancing a high recovery rate against a good TFe in the concentrate, the magnetic field intensity was generally set to about 0.19 T.

Effect of temperature on the recovery of iron

The roasting temperature has an important influence on the recovery rate of iron during the reduction roasting process. Samples consisting of red mud, carbon and additive at a proportion of 100:18:6 were roasted for 120 min at 1050, 1150, 1250, 1350 and 1450℃, respectively. The experimental results are shown in Fig.5. Fig.5 shows that the effect of roasting temperature on the recovery rate of iron was apparent. The recovery rate of iron rose with increasing sintering temperature, while the TFe in the concentrate showed no obvious change. Although a higher roasting temperature is beneficial to iron recovery, for industrial applications the optimum roasting temperature was taken to be about 1450℃.

Effect of roasting time on the recovery of iron

The effect of roasting time on the recovery of iron is indicated in Fig.6.
The materials consisted of red mud, carbon and additive at a proportion of 100:18:6 and were smelted in the electromagnetic induction furnace, with the roasting time varied from 40 to 140 min. It can be seen from Fig.6 that the TFe in the concentrate and the recovery rate show no obvious dependence on roasting time. Because of the low iron content in the raw material, the reaction does not take much time. Therefore, for industrial applications, it was inferred that the optimum roasting time was about 60 min, during which the deoxidization reaction of the ferrous oxides was mostly completed.

Effect of the ratio of carbon to red mud on the recovery of iron

From current research, carbon is a good reducing agent. In order to determine the optimum dosage of carbon, different carbon additions were studied, with the other experimental parameters held constant. Samples consisted of red mud and additive at a proportion of 100:6 and were roasted for 60 min at 1450℃. The results are shown in Fig.7. As shown in Fig.7, the recovery rate of iron and the TFe in the concentrate decreased rapidly with an increasing ratio of carbon to red mud. When the ratio of carbon to red mud was over 12.5 wt%, the TFe and the recovery of iron remained stable. This indicates that increasing the amount of carbon added cannot promote iron recovery or the TFe in the concentrate. Because of the low iron content in the raw material, too much carbon is bad for iron reduction. Based on an overall consideration of these factors, the optimum percentage of carbon in red mud was 1 wt%.

Effect of furnace on the recovery of iron

In order to show that the electromagnetic induction furnace is more beneficial to iron recovery, the two furnaces were studied with the other experimental parameters kept the same. The effect of the furnace on the recovery of iron is indicated in Table 2, under the conditions of a roasting temperature of 1450℃, a reaction time of 1 h, and red mud, carbon and additive at a proportion of 100:1:6.
Table 2 shows that the effect of the furnace on the recovery rate of iron was apparent. The experimental result of the electromagnetic induction furnace is better than that of the resistance furnace. It is preliminarily concluded that magnetic field stirring is conducive to iron accumulation.

Conclusion

1) The major chemical components of red mud were Al2O3, SiO2, CaO, Fe2O3, Na2O and TiO2. Katoite, andradite, calcite, hematite, muscovite and diaspore existed in red mud as the main mineral phases, and most of the iron existed as hematite, accompanied by some andradite.

2) The magnetic field intensity, roasting temperature and time, the ratio of carbon to red mud, and the furnace all play an important role in the recovery of iron. The optimum reduction reaction conditions were obtained: 1 wt% of carbon in red mud, roasting at 1450℃ for 60 min, and a magnetic field intensity of about 0.19 T. Under these optimum reaction conditions, a magnetic separation concentrate can be obtained with a grade of 65.4% TFe and 66.50% recovery of iron.

3) Compared with the resistance furnace, the electromagnetic induction furnace is more beneficial to iron recovery.
The introduction of a value‐based reimbursement programme—Alignment and resistance among healthcare providers

Abstract

Reimbursement programmes are used to manage care through financial incentives. However, their effects are mixed, and the programmes can motivate behaviour that goes against professional values. Value‐based reimbursement programmes may better align professional values with financial incentives. The aim of this study is to analyse if and how healthcare providers adapt their practices to a value‐based reimbursement programme that combines bundled payment with performance‐based payment. Forty‐one semi‐structured interviews were conducted with representatives from healthcare providers within spine surgery in Sweden. Data were analysed using thematic analysis with an abductive approach and a conceptual framework based on neo‐institutional theory. Healthcare providers were positive to the idea of a value‐based reimbursement programme. However, during its introduction it became evident that some aspects were easier to adapt to than others. The bundled payment provided a more comprehensive picture of the patients' needs, but at the price of an increased administrative burden. Due to the financial impact of the bundled payment, healthcare providers tried to decrease the amount of post‐discharge care. The performance‐based payment was appreciated. However, the lack of financial impact and of transparency in how the payment was calculated caused providers to neglect it. Healthcare providers adapted their practices to, but also resisted, aspects of the value‐based reimbursement programme. Resistance was mainly caused by a lack of understanding of how to interpret and act on new information. Providers had to face unfamiliar situations, which they did not know how to handle. Better IT facilitation and a clearer definition of related care are needed to strengthen the value‐based reimbursement programme among healthcare providers.
A value‐based reimbursement programme seems to better align professional values with financial incentives.

| INTRODUCTION

Value-based reimbursement programmes (VBRP) focus on activities that generate value through quality-enhancing, but also cost-constraining, incentives. 17 Surgical procedures have been considered suitable for value-based reimbursement because of the discrete beginning and end of a care episode. Further, the variation in the recommendations of clinical guidelines 18 for spine surgery makes it suitable for value-based reimbursement, since the reimbursement level is conditioned on the outcome of the surgery. Hence, the provider must assess whether the patient will improve enough to outweigh the cost of performing the surgery. The purchaser puts the financial responsibility on the provider instead of paying for the service regardless of quality. The regional public health authority, Region Stockholm, is responsible for providing healthcare to 2.4 million people. 19 Region Stockholm introduced a value-based reimbursement programme (STHLM-VBRP) within elective spine surgery in 2013. The design of the programme is based upon the ideas of value-based healthcare (VBHC), first outlined by Porter and Teisberg in 2006, 20 and combines bundled payment with performance-based payment (also known as pay-for-performance, P4P) in a unique design. The bundled payment extends the clinical episode to 1 year after surgery, a longer period compared to other programmes 21,22 ; and the measure used for the performance-based payment is how much pain the patient experiences 1 year after surgery. The bundled payment extends the providers' financial responsibility to incentivise coordination of care and to avoid overuse. The performance-based payment conditions the reimbursement on the outcome of the surgery and aims to enhance and sustain quality by rewarding high-performing providers.
In this paper, we sought to contribute to the empirical value of neo-institutional theory by interviewing representatives from healthcare providers on how regulative changes affect daily operations within elective spine surgery in Stockholm, Sweden. The aim of this study is to analyse if and how healthcare providers adapt their practices to a value-based reimbursement programme that combines bundled payment with performance-based payment. In particular, we investigate the following research questions: How do healthcare staff experience and respond to the financial incentives the STHLM-VBRP entails? How can these experiences be understood from the perspective of neo-institutional theory in terms of alignment and resistance to the contractual changes the STHLM-VBRP imposed?

| UNDERSTANDING REGULATIVE CHANGES FROM THE PERSPECTIVE OF INSTITUTIONAL THEORY

The implementation of VBRP varies between, but also within, different healthcare systems. 23 In the US, the focus has been on moving away from fee-for-service, 24 whereas publicly financed healthcare systems in Europe have mostly focused on coordinating care among providers. 25 Thus, the introduction of a value-based reimbursement programme (VBRP) does not happen in a vacuum. This may seem obvious, but many evaluations overlook contextual factors when assessing an intervention. 5,26,27 Organisations cannot be fully understood in isolation from the external influences that arise from a wider contextual perspective. 28 Consequently, institutional theory provides suitable frameworks for examining the nature of external demands and the behaviour of organisations. We use an approach to neo-institutional theory based on Scott's conceptual framework. 28 According to Scott, institutions consist of regulative, normative and cultural-cognitive pillars that in relation to each other have stabilising and meaning-making properties. The regulative pillar refers to the practice of rule-setting, monitoring, sanctioning and incentivising.
It comprises formal legislation but also less formal rule-making. Instrumentalism is central within the regulative pillar; that is, individuals conform to laws and rules because they seek rewards or wish to avoid sanctions. Hence, a focus on the regulative pillar sheds light on the more formalised control systems. The normative pillar encompasses values and norms. Values refer to conceptions of the preferred or the desirable, whereas norms refer to the scripts for how to reach the desirable goals and what means are legitimate in attaining them. 28 A focus on the normative pillar emphasises the stabilising influence of social beliefs and norms, both internalised and imposed by others, and highlights the 'moral roots' of behaviour and institutions. The cultural-cognitive pillar refers to the processes and frameworks of shared perception, which enable sense-making when meeting the 'external world of stimuli'. 28 The cultural-cognitive pillar emphasises features of shared understanding, professional ideologies, cognitive frames and sets of collective meanings. These aspects condition how actors interpret and respond to the world around them. A focus on the cultural-cognitive pillar sheds light on how knowledge is constructed and codified in models, assumptions and schemas, and to what extent it informs and constrains behaviour. Confusion among actors usually indicates a lack of support from the cultural-cognitive pillar, because they do not know how to process or interpret information. Institutions are more robust when the regulative, normative and cultural-cognitive pillars are aligned and reinforce each other. Changes in one of the pillars may cause misalignment and resistance, thus weakening the institution if the pillars motivate different behaviours. However, institutions tend to converge over time, and institutional theory considers misalignment of the pillars a catalyst for change.
28 The framework by Scott provides a structure to our analysis by focussing on the respective pillars, but also on their interrelationship. The introduction of the STHLM-VBRP imposed contractual changes regarding the provision of elective spine surgery in Region Stockholm and can thus be regarded as regulative. This change may work as a catalyst for institutional change if the introduction causes a misalignment between the pillars. If healthcare providers resist institutional elements imposed by the VBRP, these elements will not be institutionalised. If healthcare providers align with the institutional elements imposed by the VBRP, the elements will be institutionalised. By analysing all three pillars, we intend to provide a deeper understanding of how and why certain aspects of the new reimbursement programme are institutionalised or not.

| THE CASE OF THE VALUE-BASED REIMBURSEMENT PROGRAMME

The Swedish healthcare system is publicly financed with universal coverage. In Sweden there are 21 regions that are responsible for the provision and financing of healthcare, mainly through tax revenues. Since the regions are responsible for both provision and financing, they can be considered both commissioning and purchasing organisations. As a commissioner, the region decides under what conditions healthcare organisations may provide care in the region. As a purchaser, the region pays for the healthcare consumed by the inhabitants of the region. In our article, Region Stockholm will synonymously be referred to as the purchaser. To receive public funding, private healthcare providers need accreditation by establishing a commissioning contract with the region in which they wish to deliver care. This is done either through the Public Procurement Act 29 or through the Freedom of Choice Act 30 (known as Patient Choice within healthcare), two different market-oriented solutions.
Under the Public Procurement Act, healthcare providers are permitted a certain volume each year at a negotiated price, specific to each healthcare provider. Patient Choice, in contrast, is a contract that usually has no restriction on volume but a set price, making providers compete on quality and, ultimately, the patients' choice, a requirement for VBHC. 17 Patient Choice entails a continuous commissioning contract between the purchaser and healthcare providers, instead of a recurring procurement process. In 2013, Region Stockholm transitioned to accrediting healthcare providers through Patient Choice instead of Public Procurement within elective spine surgery. An elective surgery is scheduled in advance and does not involve an emergency. It was also decided that Patient Choice for elective spine surgery should entail a value-based reimbursement programme. This reimbursement programme will be referred to as the Stockholm value-based reimbursement programme (STHLM-VBRP). Private healthcare providers in Region Stockholm performed most of the elective surgeries, both before and after the introduction of Patient Choice with the STHLM-VBRP. The STHLM-VBRP combines bundled payment with performance-based payment, adjusted for patient characteristics. When the surgical procedure is registered, the healthcare provider receives a prospective payment, which includes the bundled payment and the expected performance-based payment. The prospective payment is adjusted for age, gender, comorbidity level, and surgery that covers more than two levels of the spine. The idea is to limit differences in financial risk between patients to promote need-based healthcare. Failing to adjust for case-mix when designing a reimbursement programme leads to an increased risk of 'cherry picking', that is, providers avoiding clinically complicated patients to the benefit of healthier patients.
However, patients with a high potential risk of needing intensive care are not covered by the commissioning contract and must be surgically treated at a hospital with access to an intensive care unit. The bundled payment should cover the individual patient's healthcare utilisation related to the spine surgery (e.g., potential complications, reoperation, rehabilitation) for the full care episode of 1 year. That means that the bundled payment includes the patient's rehabilitation, primary care, speciality care and hospital care provided by external healthcare providers (i.e., not the provider that performed the surgery). The provider that performs the surgery receives an invoice from the purchaser if an external healthcare provider treats their patient after the surgery. Hence, the bundled payment is a multi-organisational bundle of services. The bundled payment should stimulate an effective and integrated care chain by using a fixed payment to the provider for all services provided during the entire care episode. The performance-based payment is based on the outcome measure Global Assessment (GA). The measure is a retrospective transition question asked 1 year after surgery ('How is your back/leg pain today compared to before the surgery?'). 31 The patient can choose between six response options (pain free, much better, somewhat better, unchanged, worse, did not have pain before the surgery). The registration of GA is administered and managed by the national quality registry for spine surgery in Sweden, Swespine. 32,33 The expected performance-based payment that is included in the prospective payment is based on historical outcomes of GA, adjusted for patient characteristics. If the patient's level of pain turns out better than expected 1 year after surgery, the healthcare provider receives an additional payment. If the level of pain turns out worse than expected, the healthcare provider has to repay money to Region Stockholm.
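To make the direction of the performance adjustment concrete, the mechanism can be sketched schematically. The ordinal coding of the GA responses and the payment step below are our own illustrative assumptions; the actual risk-adjustment model and payment levels of the STHLM-VBRP are not reproduced here:

```python
# Hypothetical ordinal coding of the Global Assessment responses
# ("did not have pain before the surgery" is left out for simplicity).
GA_SCALE = {"worse": 0, "unchanged": 1, "somewhat better": 2,
            "much better": 3, "pain free": 4}


def performance_adjustment(expected, actual, step_value=1000):
    """Positive adjustment (extra payment) if the patient's pain turned out
    better than the risk-adjusted expectation; negative (repayment) if worse.
    `step_value` is an invented amount per scale step, not a programme figure."""
    return (GA_SCALE[actual] - GA_SCALE[expected]) * step_value


performance_adjustment("somewhat better", "pain free")  # extra payment
performance_adjustment("much better", "unchanged")      # repayment
performance_adjustment("pain free", "pain free")        # no upside possible
```

The last call illustrates the point made in the text: a patient who is already expected to be pain free cannot generate a positive adjustment, since the actual outcome cannot exceed the expectation.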
Hence, the size of the adjustment depends on the discrepancy between the actual and the expected outcome. A provider cannot receive a positive performance adjustment by performing surgery on a patient that is expected to be pain free 1 year after surgery, simply because that patient cannot get any better. On the other hand, a patient that according to historical outcomes is expected to experience somewhat better pain will generate a positive adjustment if the actual outcome is pain free. The idea is that the performance-based payment should give financial incentives to investigate further what can be done to improve the pain 1 year after surgery. Thus, the performance-based payment is a complement to the bundled payment, to avoid healthcare providers stinting on necessary care, since that may negatively affect the pain patients experience. However, healthcare providers cannot perceive the full size of the performance-based payment since it is included in the prospective payment; they only perceive the adjustment if the actual outcome deviates from the expected one.

| STUDY DESIGN AND METHODS

We conducted a systematic comparison of case studies, using respondents' own reports, since neo-institutional analysis assumes that institutions are, in effect, manifested through individuals' attitudes, beliefs and motivation. The case studies of interest were healthcare providers accredited within elective spine surgery and reimbursed based on the STHLM-VBRP. At the time of the introduction of the STHLM-VBRP, there were three accredited healthcare providers in Region Stockholm. A fourth provider was accredited in 2017. All of the providers were private and for-profit; one clinic was a professional partnership, whereas the other three clinics were part of a larger healthcare organisation (however, some had a history of being a cooperative/professional partnership).
Two providers were located in Stockholm city, one in a Stockholm suburb and the fourth in a neighbouring region (Region Sörmland). Despite being located in different regions, all healthcare providers were providing care under the same contractual conditions after the introduction of the STHLM-VBRP. An interview guide was designed based on the structure of the reimbursement programme. The interview guide was designed as an aide-memoire, 34 to ensure that all aspects were covered but still allowing for the respondent to talk freely about the topics. To recruit respondents for interviews, we used a purposive sampling approach 35 in dialogue with the respective managers at the four clinics. We wanted the respondents to reflect the heterogeneity among staff, thus both clinically active and administrative staff were included from different professions, to attain a more comprehensive perspective. By interviewing both staff and clinicians we could reflect the potential different contextual factors and consequences of the reimbursement programme. All staff were employed by the healthcare provider and their salary was not affected by the new reimbursement programme. Before commencing the fieldwork, we obtained ethical approval (2015/94-31) from the regional board of ethics in Linköping, as well as a signed consent to participate from each respondent. The respondents were also informed that each interview was estimated to last between 30 and 60 min. We conducted semi-structured face-to-face interviews with representatives from all four accredited healthcare providers at respective spine surgery clinics. The interviews were carried out in two waves, May 2015-May 2016 and June-September 2017. The interviews were carried out in two waves to cover any potential time factor affecting how respondents experienced the reimbursement programme. 
For the second wave, our first option was to interview the same respondents, but because of misaligned schedules and some staff turnover this was not always possible. In total, 41 respondents were interviewed, see Table 1. Seven respondents were interviewed in both the first and the second wave; thus 34 unique respondents were interviewed. Three interviews were conducted with two respondents at the same time, at the request of the respondents. Two interviews were conducted over the telephone, both in the second wave, with respondents who had already been interviewed face-to-face during the first wave. To make the respondents feel comfortable in the situation, each interview started with more general questions about the respondent's profession and responsibilities. 35 Each interview lasted between 20 and 60 min. There was some variation in the length of the interviews because respondents had been involved in and affected by the reimbursement programme to varying degrees. However, each interview started by checking the available time in order to adjust the disposition of the topics. The interviews were carried out in Swedish. All but one interview was audio recorded and transcribed verbatim in Swedish. The interviews were analysed using thematic analysis. 36 We adopted an abductive approach that allows for interaction between previous and newly discovered knowledge, thus combining an inductive and a deductive approach. 37 Accordingly, the interview guide provided a helpful structure for beginning the analysis, but the themes were adapted when new aspects were discovered. The first step in the analysis was to identify what aspects of the new regulative framework healthcare providers experienced, how these aspects were perceived and whether they had any effect on their daily operations.
Emerging themes were later sorted into the neo-institutional framework by Scott, 28 to connect the empirical findings to the conceptual framework based on institutional pillars. An iterative process followed, where identified themes were classified, grouped and regrouped.

TABLE 1. Represented professions and functions among the respondents

Only the quotes used in the article were translated from Swedish to English. The originators of the quotes used in 'Findings' have been coded to ensure that individual respondents cannot be identified. The healthcare providers will be denoted A-D, followed by a number indicating the respondent. Further, in the following text, post-discharge care will be denoted external care when provided by a provider other than the initial spine surgery clinic.

| FINDINGS

The main themes correspond to the aspects of the STHLM-VBRP that the healthcare providers experienced as most important: the bundled payment, the performance-based payment, and the continuous contract. Each theme is followed by subthemes that were generated with the inductive approach.

| A more holistic perspective

All healthcare providers were strongly affected financially by the increased cost responsibility. Hence, all healthcare providers experienced a strong incentive to take care of complications related to the surgery, to avoid paying other healthcare providers. It further stimulated the providers to discuss how to decrease the amount of external care at other hospitals/providers. What I think is the difference, when I look at other areas of patient choice, we have a more complete picture of the patient. Otherwise you only see the surgical procedure. So I think that is the major difference, that you have a lot more patient responsibility for much longer. It also creates other routines; it creates another way of working. C2 Yes, it actually drives private healthcare providers to take responsibility.
Not just under the knife, but that you actually own the process for a while longer. B8

The invoices for external care created a new flow of information to the providers when their patients were treated elsewhere. Healthcare providers expressed that this procedure gave them a more comprehensive perspective of the care chain. They realised that not all patients contacted them if they experienced complications after the surgery.

That there is a slightly clearer follow-up on how the patients turned out in the end. Because clearly patients may be a bit different, some may not always call you if they don't feel well. They think like 'nah but I guess this is normal', now we can see it in a different way - more clearly. D4

I think it's good that you have to take that responsibility because it also means that you are more active in taking care of your own complications. You have to because of financial reasons. If one of our patients ends up at a university hospital or something like that, then we get a huge invoice and it's not - it has happened - it's not funny. Like you see half a week's production just disappears. Yes, it's pretty tough but I still think it's good. C1

The financial responsibility for post-discharge care affected healthcare providers. In particular, the cost of treating infections was extremely high and affected them greatly. Some respondents also said that the clinics in Stockholm all had very low infection rates. Thus, infections were more a matter of bad luck than something preventable, and hence impossible to reduce further. Therefore, it seemed unfair to have providers pay for necessary care for these patients.

It is more that it feels unfair, when it comes down to a complication that you reasonably couldn't have avoided in any way, that you later receive a bill of half a million, it doesn't feel fair. It is more this sense of justice, you have done your absolute best and sometimes you get hit, this penalty approach is not good.
| Post-discharge care - an administrative nightmare

Another aspect of the increased financial responsibility was that invoices often seemed to include care that was not related to the spine surgery. This resulted in additional administrative work for administrators at healthcare providers, but also for physicians, since auditing the health records of the patients required medical knowledge. Respondents said that it would be impossible to keep auditing health records manually in the long term and emphasised the need for better IT support to make the system sustainable.

And should something happen after surgery that is related to the surgery - then of course, you have to take responsibility. The only thing that I find tiresome… is that they shift a lot of costs onto the healthcare provider that aren't related to the surgery. It's just that parenthesis, otherwise I definitely think you should be responsible for what you do, absolutely. D6

The unrelated care was experienced as an 'unfair part of the contract' (B7), a way for the purchaser to pass on costs to the provider. Another problem with unrelated care was that providers had to spend time and resources on writing appeals to argue why they should not pay the bill, thus increasing the administration even further.

| A more prominent role for physiotherapy

Due to the bundled payment, the cost of physiotherapy became salient to the spine surgery clinics and acquired a far more central position compared to before the introduction of the VBRP model. The close relationship between spine surgery and physiotherapy became more evident and was discussed at a management level.

Now management can try to concentrate on how… before we've ignored physiotherapy. Now, we must include physiotherapy - what physiotherapy do we really need? And how should we work with it and to put those thoughts into practice.
A1

Two of the healthcare providers perceived the more prominent role of physiotherapy as a natural step in improving spine surgery care. Healthcare professionals argued that it was good to assess patients from different perspectives, and that physiotherapists can better assess the physiotherapy the patient has received previously.

We have a holistic perspective all the time because we have the reception, we have the surgery, we have the physiotherapists and in the city we provide - we are accredited within Patient Choice rehabilitation. So I think we have a holistic approach regarding the spine. A3

At the other two clinics it was experienced as a big transition from being a spine surgery clinic responsible for the patient from the surgery until discharge, to suddenly being responsible for post-discharge care and rehabilitation. From one day to the next, physiotherapists suddenly had a great responsibility. One of the providers began to change their patient flow drastically by involving physiotherapists in the assessment, but also by opening a centrally located outpatient clinic. Another provider had a more reserved attitude to the new responsibility, making no major changes.

We have started up a completely new flow prior to surgery that we are implementing at the moment and slowly getting used to. B5

Yes, a little understanding would feel good because, as I said, referring patients to avoid things they actually need. […] I feel really sad about it and I think it's pretty difficult to work that way. It would be nice to have an understanding of why I say it … I do as I'm told but that doesn't feel that good at all times. D8

Another problem was that it was not possible to assess what patients had been treated for by external physiotherapists after surgery. This caused frustration and the spine surgery clinics felt a lack of control. Respondents expressed the need for better registration of the kind of treatment patients received when consulting a physiotherapist.
We've experienced that patients who have had minor surgery, who may not really be in need of much physiotherapy, go to a physiotherapist in town and then loads of invoices drop into our mailbox and we can't figure out how that happened. We don't really have any control of the situation. But the patients do have the right to consult a physiotherapist, and then the physiotherapist can bill us. C1

The spine surgery clinics wanted to be able to assess whether the treatment was related to the spine surgery or not, and whether it was necessary. Three out of the four clinics offered the patients regular return visits to the clinic for physiotherapy, in an attempt to avoid being billed by external physiotherapists. This was logistically problematic when patients lived far from the clinic and/or already had an established relationship with another physiotherapist. To increase the likelihood of the patient returning to the clinic for physiotherapy after surgery, one healthcare provider started physiotherapy before the surgery to establish a relationship with an internal physiotherapist.

| Does increased accessibility and freedom of choice require a patient contract?

With an increased cost responsibility for post-discharge care, respondents recognised that it had become more important to establish a good relationship with the patient: if the patient returned to their practice, they would not have to pay other healthcare providers. To increase the chance of the patient returning for post-surgery care, all healthcare providers had increased their accessibility by extending opening hours, and two providers had also opened a more centrally located outpatient clinic.

We have better control and the patient has better access to us I think. So for the patient I think it's an advantage actually. And the most positive effect is probably that we feel that we must hold on to our patients, we lose out by not caring about them, I must say.
A6

It was, however, logistically problematic for the providers to make patients come back in the case of complications, or for physiotherapy, when the easiest option for the patient was to turn to their primary care centre, emergency department or a physiotherapist with whom they already had an established relationship. Providers described it as unfair to place this coordinating and integrating responsibility on them alone. They further expressed difficulties in scheduling staff; there is no point in having a physiotherapist at the clinic if the patients never use the service.

The damned patients who don't come here. D7

As a consequence, physiotherapy could be perceived as something that should not be recommended by healthcare professionals. They could not send patients in need of physiotherapy to other physiotherapists because of the financial responsibility.

But what we see is that the costs for physiotherapy are quite high because the patient still has a choice. We have no choice; we have signed this contract. The patient has a choice not to come here, where we already have the staff for that kind of activity. B8

But … we can't control patients, we can only ask them not to go, and if they should go, we want them to go to this physiotherapist with whom we have an agreement. But if they don't go there, we can't force them either. Because they have a free choice. D2

Many experienced it as frustrating not to have any tools to handle the free will of the patient, and wished for better support from Region Stockholm. Because of Patient Choice they could not limit the care the patient sought elsewhere. Some respondents suggested a patient contract as a solution: a contract that guaranteed the patient a certain amount of physiotherapy related to the surgery, while the patient would be charged for any additional care exceeding the agreed amount. However, that kind of action needs regulative support from Region Stockholm.
| Incentive for cooperation with other healthcare providers

Communication with external healthcare providers became important, but it was difficult to make them cooperate. Providers within STHLM-VBRP acknowledged that there was no direct incentive for external healthcare providers to put time and resources into contacting them. From a wider perspective, however, cooperation would be more efficient, since the spine surgery clinic has the clinical history of the patient. Respondents argued that it would be more efficient if they got the chance to take care of their own patients instead of using the resources of hospitals or other facilities dedicated to more serious or complex conditions. Respondents said that they had realised that they must work actively for more integrated care: to contact external healthcare providers, set up a dialogue, and together design optimal post-surgery care.

In a way I think it's good. On the other hand, I think it's bad because you can't control if a patient chooses a provider other than yourself. Which makes it important to speak the same language with the external providers. You cannot force people to come back after discharge, they may think it's better to go to their own physiotherapist. And we can sometimes differ in the way we see things, how much care the patient needs. C3

The bundled payment made it important to build a network of external physiotherapists (and other healthcare providers) with whom the spine surgery clinics could cooperate and agree upon an adequate level of care. Another problem respondents described was how to obtain information on whether their patients sought care elsewhere, and how to offer them the possibility to come back. Once the invoice arrives, it is too late to react, and the providers never get the chance to affect the healthcare provided.
Respondents said that it felt like a punch in the stomach to receive an invoice from another hospital that had treated a patient they could have treated themselves, had they been offered the opportunity.

And it can be frustrating if it is abused out there, that someone gets loads of mediocre unnecessary care, I don't think that's okay. So we can try to have a good dialogue with them, to write referrals and write some guidelines and such. C3

Patients usually seek care at an emergency department and are hospitalised. And then all possible things can be done without anyone contacting us. They can perform surgery and they can do - we don't even know that the patient is there. And then the patient is hospitalised for two weeks or something, being treated, and different things are done, and then suddenly out of nowhere we receive a bill … it can cost a month's budget, for care we don't have any opportunity to influence. D1

Respondents said that some kind of automatic notification when their patient was registered elsewhere would be helpful, to give them a fair chance to offer their services. However, they also acknowledged the importance of respecting the choice of the patient and their option to say no and stay with the external healthcare provider.

| The lack of financial impact

Healthcare providers experienced the performance-based payment as something positive. The idea of being reimbursed based on results instead of activity was encouraging, and it aligned with the professional value of ensuring that the patient is free from pain 1 year after surgery. Respondents also expressed that they did not feel constrained by the reimbursement model or the performance measure used.

We have our own goals, but they coincide with the goal that we want to maximise for the patient - that they should be as well as possible. So I don't think we have different goals, rather that the financial goals are in line with the goal you have with the patient, so to speak.
C2

Quality improvement was considered important and something they had continuously worked with before the introduction of STHLM-VBRP. All providers appreciated the idea of a performance-based payment as a complement to the bundled payment. But it had no financial impact: the share of the performance-based adjustment was too small in comparison to the prospective payment and the invoices for external care. Respondents argued that a larger proportion of the payment had to be tied to performance to generate an effect.

The performance-based payment isn't anything we look at every month, in that way, unless it diverges a lot. It goes alright, and as long as our patients feel good we don't really follow up on it that intensely. D4

There is a certain idea behind it, so you have to make sure that the bonus, that the quality bonus works. Because otherwise, if you only take a part of it and don't care about the other part - then some can make as much money as they want while others can't. It's completely wrong! Why then would you do all this, without doing this part right? There must be an incentive there. A10

Some argued that the level or structure of the performance-based payment had to be adjusted, otherwise the whole model would lose its purpose and might even have negative effects on quality. Furthermore, the healthcare providers were not able to verify the performance-based payment they received from Region Stockholm.

| The lack of transparency in how the payment was calculated

Respondents were positive about the idea of the performance-based payment but did not understand how they could make use of it. The lack of transparency in how the performance-based payment was calculated made the model more difficult to understand. Respondents said that they had been promised a demo showing how different variables affected the performance-based payment.
They had not yet received any demo, but said that it would most probably help them understand which aspects affect the reimbursement level.

I agree with the idea - yes. But, then there's a lot that we don't really know so much about when it comes down to what matters when this performance-based payment is calculated. A2

Because we want to understand, why did we get minus 15,000 and why did we get 10 for that? But then, it's so big that you cannot handle it. So you would want something just like this [snaps fingers], some kind of search engine. D2

The complication responsibility of post-discharge care feels much more concrete, that there we can do something and we know what we're doing. Whereas this performance-based payment, there we feel like we're just groping in the dark. A6

Compared to the bundled payment, respondents experienced the performance-based payment as vaguer and more complex. Respondents said that the performance-based payment was so low that it had no financial impact; it was not worth monitoring and assessing outcomes measured with GA as part of their daily operations.

| Concerns regarding potential shifts in case-mix

Within elective spine surgery, respondents experienced that it can be difficult to tell whether a patient will benefit from surgery or not. When relating the reimbursement to how much pain the patient experiences after surgery, some respondents expressed concern about potential shifts in case-mix. The lack of effect of the performance-based payment raised concerns regarding cherry picking: in the worst-case scenario, surgeons would only operate on patients they knew for sure would benefit, thus not 'taking a chance' with patients suffering from comorbidities. At the same time, they argued that they should not perform surgery on patients who would not benefit from it, thus recognising the complexity of the problem.
Obviously, when you do more risky things, you know that you take a greater medical and financial risk. Because if the surgery fails we must face the consequences. And of course, in the worst case scenario Patient Choice could lead to some patients being excluded. Then personally, I don't think I actually do that, but theoretically it could absolutely be possible. Especially if you have a lot of patients, but if you only have a few patients obviously you can't turn them away. If you have a lot of patients, then cherry-picking may be a problem. C1

However, when the fourth provider entered the market the competition increased, and no provider could afford to say no to patients. Providers focused on how to build processes that decreased the need for post-discharge care and physiotherapy. Respondents said that, due to the extended cost responsibility, the reimbursement level was relatively lower than before, which gave an incentive to increase production. Thus, according to respondents, the decreased reimbursement level in combination with increased competition prevented cherry picking.

| A decreased level of uncertainty allows for long-term planning but with a diminishing reimbursement level

The continuous commissioning contract allowed accredited healthcare providers to make long-term plans because they did not have to fear losing the contract in a competitive procurement process.

Because of Patient Choice, we got more freedom to be able to decide ourselves what we think will be the best for the patient. That made us realise immediately that we must do something preoperatively and postoperatively. A7

Providers expressed frustration with the inability of the purchaser to monitor and assess the quality of the healthcare provided. Respondents argued that, under procurement, this inability led to competition based on price regardless of quality.

I've been involved in quite a few procurements and in the end it's only the price that matters.
And there are many actors who are not serious, who don't take on the most difficult or weighty procedures, more difficult patients, and put in really low bids during the procurement process. A serious clinic can't practice under those circumstances. A1

All providers had a positive attitude to competition based on quality instead of price. However, they raised concerns regarding the lack of adjustment of the price level in line with inflation. Region Stockholm did not adjust the reimbursement level during the first 4 years, and respondents doubted that this was going to happen at any time in the near future.

It's the price erosion that I'm worried about. When there is no adjustment to inflation, and the margins get to the level that we must begin to reduce the quality. So that is the most important question right now. A1

Even though all healthcare providers agreed that the reimbursement level was reasonable for the spine surgery procedures themselves, it was relatively low in relation to the cost of post-discharge care.

We might be more effective because we have had to make some cuts. We have had to assess our working routines, and that's not only negative. The coin always has two sides, so it can actually be positive as well. But it has been difficult. B9

One healthcare provider expressed concern regarding the new level, fearing that it would eventually affect patients negatively. They admitted that the new level had forced them to make changes for the better, but feared that this would not be enough, especially without future adjustment of the reimbursement to inflation.

| THE ALIGNMENT AND MISALIGNMENT BETWEEN REGULATIVE, NORMATIVE, AND CULTURAL-COGNITIVE ELEMENTS

In Table 2, we summarise the alignment and misalignment between the regulative, normative, and cultural-cognitive elements among healthcare providers. The bundled payment implied a new way of thinking about elective spine surgery.
The idea of taking greater responsibility for the care chain and increasing cooperation among providers was supported by all three pillars. However, how to act on these ideas was not obvious to healthcare providers; as mentioned in section two, such confusion usually indicates a lack of support from the cultural-cognitive pillar. This applied especially to how to better integrate physiotherapy and increase cooperation with external healthcare providers. To know how to act they 'must learn about the new contract and gain an understanding of it' (B7). Without a shared understanding they cannot find appropriate means of adapting their practice. Despite efforts to cooperate with external healthcare providers, it was impossible to reach all of them and establish consensus regarding optimal medical practice. Respondents experienced the situation as rather hopeless, having no authority to impose sanctions if care, in their opinion, deviated from optimal medical practice. Since respondents experienced cooperation with other healthcare providers as difficult, they discussed a patient contract as a potential solution, holding patients accountable for excessive rehabilitation. Hence, this hopelessness resulted in new normative and cultural-cognitive values that lacked regulative support. Such new normative and cultural-cognitive values should be taken into consideration by the purchaser when updating the reimbursement programme. The definition of related care is important since it determines the range of the healthcare providers' responsibility. Because of the vague definition, the responsibility could be perceived as either narrow or wide. Healthcare providers that adopted the narrower definition experienced invoices pertaining to unrelated care as unfair. It affected their relations with the purchaser because they felt used and experienced it as an unfair strategy.
On the other hand, healthcare providers that adopted the wider definition of related care had a more neutral perspective on the invoices from Region Stockholm. However, the experienced ambiguity resulted in healthcare providers not knowing how to design their processes, since they did not know to what extent they were responsible. The vague definition in the contract thus weakened the regulative pillar. Because providers knew neither how to assess related care nor how to act on it, there was no support from the cultural-cognitive or the normative pillar to strengthen the regulative pillar. Hence, the vague definition weakened the institution by motivating different behaviours.

TABLE 2 The findings in relation to the neo-institutional pillars; (+) indicates that the aspect aligns with the STHLM-VBRP, whereas (−) indicates resistance to the STHLM-VBRP

The bundled payment
- Invoices for external care. (+) High financial impact, incentive to coordinate post-discharge care and discuss physiotherapy at a managerial level. (−) Difficulties cooperating with other providers; it is beyond their conceptual world and they do not have the right tools. (−) Reviewing and disputing invoices is time consuming for both administrative and clinical staff.
- Information about post-discharge care. (+) Increased focus on the relation between provider and patient resulted in better accessibility. (−) The logic 'the customer is always right' is not aligned with the professional logic and agency.
- No authority to impose sanctions on other healthcare providers. (−) Spine surgery clinics have no authority to sanction undesired behaviour. (−) No constitutive schema.
- Vague definition of related care in the responsibility for post-discharge care.

The performance-based payment
- (+) Perceived as an innovative step by the purchaser, proof that quality matters.
- Healthcare providers only perceived the performance-based adjustment, not the full payment. (−) The adjustment had no financial impact; instead, providers focussed on decreasing post-discharge care, which had a more direct impact. (−) No difference with or without the performance-based payment, they were already working towards good outcomes.
- Healthcare providers did not receive information on how the payment was calculated for each patient. (−) The performance-based payment was perceived as too complex; it was not worth the effort to understand it better. (−) Confusion caused by the lack of transparency in how the payment was calculated.

The continuous contract
- A continuous contract between provider and purchaser. (+) The decreased uncertainty allows healthcare providers to make plans for the long term. (+) Increased autonomy; providers can design their own processes.
- A set price without adjusting the price level in line with inflation. (+/−) Quality enhancing, but the lack of adjustment makes providers uncertain about the purchaser's intentions. (−) Healthcare providers still perceive Region Stockholm to put price before quality.
- No restrictions on volume. (+) The providers can assess patients without any restrictions; everyone in need should be treated. (+) More information on which patients benefit from surgery is needed.
- The importance of support and communication with the purchaser. (+) More information gives providers the control needed to follow up on care. (−) No constitutive schema to handle information.

Providers appreciated the performance-based payment and perceived it as a quality statement by the purchaser. To reduce pain further than expected, it is crucial for healthcare providers to come to an understanding of what else can be done in addition to surgery. For a spine surgery clinic this can be challenging, since its specialisation is, after all, spine surgery. Hence, the payment incentivises interprofessional collaboration and holistic healthcare, since surgery alone may not be enough to reduce the pain any further.
However, healthcare providers received no information about the expected performance-based payment or the patient's expected pain reduction. The lack of transparency in how the performance-based payment was calculated made the payment too complex to understand, and because of the lack of financial impact, it was not worth trying to understand. The intended incentives to improve quality thus failed to affect behaviour, since they lacked support from the cultural-cognitive pillar (providers did not understand them) and the normative pillar (they were not worth understanding). Because of the strong financial impact of the bundled payment, healthcare providers focussed on minimising costs rather than maximising their outcome on the performance measure. The continuous contract with no restriction on volume was supported by all three pillars; it decreased uncertainty for healthcare providers and increased their autonomy. Respondents did, however, express the need for more research to better identify which patients ultimately benefit from surgery and what kind of physiotherapy is warranted. The idea of a set price to promote competition based on quality was aligned with the normative and cultural-cognitive pillars of healthcare providers. However, respondents experienced that the lack of adjustment of the price level in line with inflation undermined quality and threatened the existence of their practice. As one respondent (B7) put it, 'they squeeze every penny out of us making it really hard for us to survive as a business'. This caused normative resistance because respondents experienced it as unfair, and it seemed like Region Stockholm still put price before quality. Our analysis showed that the value-based reimbursement programme caused misalignment between the institutional pillars among healthcare providers.
Even though providers supported the general idea of the value-based reimbursement programme, acting on these ideas and adapting their practice required a comprehensive understanding of the programme.

| DISCUSSION

The bundled payment imposed strong financial incentives, making it crucial for all healthcare providers to assess the whole care chain of their patients. Because of this responsibility, healthcare providers became more prone to strive for optimal healthcare consumption. Similar findings have been reported in other studies, where bundled payment reduced healthcare use and costs. 21,38 The different perceptions of the legitimacy of increasing healthcare providers' responsibility for post-discharge care are in line with studies showing that spine surgeons allocate major responsibility to healthcare systems to manage the cost of healthcare. 39 With STHLM-VBRP, Region Stockholm moves towards integrated care with patient-centredness and a greater responsibility for healthcare providers, in line with VBHC. 40 Participants in our study experienced a need for better cooperation between healthcare providers. Similar findings were reported in a study that investigated the effects of introducing VBHC in another Swedish region. 41 That study found that the introduction of VBHC raised awareness about cooperation being necessary to create value for patients: the introduction of VBHC increased cooperation within the hospital, but it was difficult to establish cooperation with external healthcare providers. 41 In the case of STHLM-VBRP, the private healthcare providers experienced difficulties in making other providers cooperate when there was no corresponding incentive for external providers. Thus, an insufficient organisational structure of healthcare hinders structural change.
42 The difficulty in establishing cooperation reflected that healthcare providers did not have cognitive schemas for how to coordinate post-discharge care, nor for how to act on the information they received from the invoices for external care and rehabilitation. Insufficient support from purchaser bodies (governmental or private) has been identified as a barrier to the institutionalisation of elements of VBHC (such as VBRP) across different healthcare systems. 23 Thus, better dialogue between purchaser and providers, in combination with compatible and agile IT-systems, may enhance cooperation. The purchaser has to facilitate cooperation between healthcare providers to make the cost of cooperation as low as possible. The struggle of the healthcare providers does, on the other hand, show that patients have a free choice to choose whichever provider they prefer. Performance-based payment has been criticised for having a negative impact on some aspects of medical professionalism. 43 Participants were positive about the performance-based payment because it did not 'force' them to focus on irrelevant outcome measures. However, the lack of transparency and financial impact made the performance-based payment too complex to understand and follow up. Quality improvement requires ongoing feedback at all stages, and that everyone involved is aware of the complexity of changing the culture of an organisation. 42,44 The slow feedback from Region Stockholm caused frustration among healthcare providers and made it more difficult for them to adjust to the new structure. Participants in our study found that their focus on financial aspects had increased with the STHLM-VBRP, compared to before. A potential explanation is the experienced imbalance in incentive structure between the bundled payment and the performance-based payment.
Thus, healthcare providers focussed on minimising post-discharge care instead of quality improvement, contrary to other studies where VBHC decreased the focus on financial aspects. 41 It has been argued that financial incentives are not sufficient to affect daily operations in a setting where governance and management philosophies are firmly grounded within the New Public Management paradigm. 16 We argue that financial incentives are, if introduced in the right institutional context, an effective tool to manage care. This highlights the importance of acknowledging the institutional context when designing, implementing, and evaluating reimbursement programmes. Another important aspect was the transition from periodic re-contracting to a continuous commissioning contract, which according to respondents decreased uncertainty about the future and allowed them to make long-term plans. However, it was also clear that a continuous contract requires relevant feedback from the purchaser at the right time. Once again, this highlights the importance of good communication and supporting IT-systems. Especially since the text in the commissioning contract can be interpreted in different ways (as the vague definition of related care proved), the relationship with the purchaser organisation is more than just a commissioning contract. Even though this study was performed in the context of Swedish healthcare, it investigates management practices that are globally diffused and should therefore be of relevance for other healthcare contexts as well. Despite differences in healthcare organisation and funding between healthcare systems across the world, there seem to be universal enablers of and barriers to implementation of VBRP. Based on a comparison of four different healthcare systems, Mjåset et al.
23 found that three aspects were universal for a successful implementation: (1) strong support from governmental/purchaser bodies, (2) IT-systems that allow seamless system integration and up-to-date outcome measurement across the full cycle of care, and (3) involvement of the medical community to make sure that the intrinsic values of working in healthcare are aligned with management strategies. These aspects are further supported by our findings from studying the introduction of the STHLM-VBRP. Because of contextual differences between and within healthcare systems, it is important to use a theoretical framework to structure findings and enable comparison. 27 We included all four healthcare providers that were reimbursed based on the STHLM-VBRP. Despite the limited number of providers, they performed a majority of the elective spine surgeries in Sweden. However, given that limited number, we cannot claim to cover all experiences. To cover other perspectives on the same reimbursement programme, future studies could focus on how the purchaser organisation acts and adapts its practice to a value-based reimbursement programme. It would also be of great interest to study the reimbursement programme in a context that involves more providers. The introduction of STHLM-VBRP had three defining features: the bundled payment, the performance-based payment and the continuous contract between provider and purchaser. The general perception among providers about these features was positive. However, our analysis showed that the resistance to STHLM-VBRP was mainly caused by confusion about how to interpret and act on the information they received, that is, misalignment with the cultural-cognitive pillar. The misalignment between the institutional pillars in healthcare-providing organisations can be seen as a catalyst for change because of unstable institutions.
Whether this change will lead to more robust institutions depends on whether healthcare providers can come to an understanding of how to coordinate post-discharge care, have access to sufficient IT-systems, and whether the purchaser is able to support the healthcare providers when taking these steps towards integrated healthcare.
Molecular Profiling and the Interaction of Somatic Mutations with Transcriptomic Profiles in Non-Melanoma Skin Cancer (NMSC) in a Population Exposed to Arsenic Exposure to inorganic arsenic (As) is recognized as a risk factor for non-melanoma skin cancer (NMSC). We followed 7000 adults exposed to As for 6 years. During follow-up, 2.2% of the males and 1.3% of the females developed basal cell carcinoma (BCC), while 0.4% of the male and 0.2% of the female participants developed squamous cell carcinoma (SCC). Using a panel of more than 400 cancer-related genes, we detected somatic mutations (SMs) in the first 32 NMSC samples (BCC = 26 and SCC = 6) by comparing paired (tissue–blood) samples from the same individual and then comparing them to the SMs in healthy skin tissue from 16 participants. We identified (a) a list of NMSC-associated SMs, (b) SMs present in both NMSC and healthy skin, and (c) SMs found only in healthy skin. We also demonstrate that the presence of non-synonymous SMs in the top mutated genes (like PTCH1, NOTCH1, SYNE1, PKHD1 in BCC and TP53 in SCC) significantly affects the magnitude of differential expression of major genes and gene pathways (basal cell carcinoma pathways, NOTCH signaling, IL-17 signaling, p53 signaling, Wnt signaling pathway). These findings may help select groups of patients for targeted therapy, like hedgehog signaling inhibitors, IL-17 inhibitors, etc., in the future.
Introduction The human epidermis is composed of keratinocytes, which exhibit cuboidal (basaloid) cytology at the lowest (basal) layer and squamous cytology at the suprabasal layers. It also contains Merkel cells and pigment-producing melanocytes. Tumors originating from keratinocytes or Merkel cells are grouped into non-melanoma skin cancer (NMSC). NMSC includes basal cell carcinoma (BCC) and squamous cell carcinoma (SCC). Bowen disease (BD) is a clinical term for squamous cell carcinoma in situ. In 2019, there were 4.0 million cases of BCC and 2.4 million cases of SCC worldwide [1]. In 2012, in the United States (US), 56,987 patients were identified with BCC (39,035 incident and 17,952 prevalent) [2]. NMSC is the most prevalent malignancy in the US, exceeding all other cancers combined with an estimated 2 million new diagnoses each year [3,4]. Both BCC and SCC have low mortality but can have a high recurrence rate and can cause significant disfiguration, particularly in the head and neck regions where they commonly occur [4,5]. BCC of skin is the most common type and may account for about 90% of all skin cancers [6][7][8].
The incidence of BCC shows a strong inverse correlation with geographic latitude combined with the pigment status of its inhabitants [9]. The highest rates are seen in Australia, where over one in two inhabitants will be diagnosed with BCC by the time they reach 70 years of age [10]. The incidence rates in Asia and South America are ten- to hundred-fold lower [11][12][13]. Patients with a BCC have a seventeen-fold increased risk of subsequent BCC compared with the general population, as well as a three-fold increased risk of subsequent SCC and a two-fold increased risk of melanoma [14,15]. The mortality of BCC is extremely low. However, the healthcare cost for NMSC is quite high. A US Medicare expenditure study showed that NMSC was the fifth most costly cancer between 1992 and 1995 [16]. A report estimated the average annual cost of treating NMSC in the US to be USD 4.8 billion from 2007 to 2011, a substantial increase compared with the 2002 to 2006 estimate of USD 2.7 billion [17]. The risk factors for both BCC and SCC include ultraviolet radiation (UVR) and chronic immunosuppression [18][19][20]. UVR is the major known environmental risk factor. Thus, the prevention strategy is photoprotection, which can be both topical and systemic [21]. The ability to repair UV-induced DNA damage reduces with age. Increasing age and the male sex (at older age) are well-known factors for an increased risk of BCC. Molecular biomarkers of NMSC, including genomics, transcriptomics, proteomics, and metabolomics, have recently been reviewed extensively [22,23]. Arsenic (As) is a known carcinogen that appears in groundwater and is associated with skin cancer [24]. Chronic exposure to As may induce BCC [25][26][27][28]. One study suggested that miR-155-5p regulates the NF-AT1-mediated immunological dysfunction that is involved in the pathogenesis and carcinogenesis of As [29]. Some studies showed that in the presence of As exposure, decreased telomere length predisposes individuals to an increased risk
of BCC [30]. As generates reactive oxygen species that cause oxidative stress, leading to DNA damage. Concurrently, As inhibits DNA repair, modifies the epigenetic regulation of gene expression, and targets protein function due to its ability to replace zinc in select proteins [31]. In a recent study, we have shown that high As exposure was associated with impaired DNA replication pathways, cellular response to different DNA damage repair mechanisms, and immune response [32]. In Bangladesh, between 2000 and 2002, 11,746 participants (5042 men and 6704 women) were recruited for the Health Effects of Arsenic Longitudinal Study (HEALS) and were exposed to As through the consumption of As-enriched groundwater. The study found 714 confirmed cases of premalignant skin lesions [33]. Our group conducted a double-blind, placebo-controlled study, the Bangladesh Vitamin E and Selenium Trial (BEST), to evaluate the effect of vitamin E and selenium supplementation in the prevention of NMSC in a population exposed to As who had a clinical manifestation of As toxicity in the form of As-related non-malignant skin lesions [39]. Seven thousand subjects were followed up for 6 years for the development of NMSC. Of them, 1.7% developed BCC (males = 2.2%, females = 1.3%) [32]. Our previous study showed interactions of As exposure and gene expression profiling in BCC in the study group who were exposed to inorganic As through drinking As-contaminated well water [32].
Studies on NMSC have been mainly performed in Caucasian populations. To our knowledge, no study addresses the molecular profiling of NMSC in a "non-Caucasian population" exposed to As. Moreover, no study in NMSC has yet addressed the fact that sunlight exposure is associated with a large number of somatic mutations in different genes in non-lesional, apparently healthy skin. Therefore, identifying a somatic mutation in NMSC samples does not necessarily establish the association between that mutation and NMSC pathogenesis. In this study, we (a) looked for somatic mutations in NMSC tissue by scanning more than 400 cancer-related genes to identify NMSC-associated somatic mutations in a Bangladeshi population exposed to As through drinking As-contaminated water and (b) examined if such mutation(s) were associated with differential expression of gene(s) or pathway(s). Study Population For this study, we selected the first 32 subjects from the BEST study who developed histopathologically confirmed NMSC and had the tumor tissue properly preserved in RNAlater, an RNA-stabilizing buffer (ThermoFisher Scientific, Waltham, MA, USA). The BEST study included 7000 men and women (m = 2840, f = 4160) who were known to be exposed to As through consuming well water containing As [32,39]. This study included all subjects with clinically visible non-malignant skin lesions (melanosis, leukomelanosis, or keratosis), a known manifestation of As toxicity. We also collected "non-lesional" or apparently healthy skin tissue surrounding the margin of the arsenical keratosis lesion from 16 independent patients and preserved it in the same way. Throughout the manuscript, we have used the term "healthy skin tissue" for these non-lesional skin tissues. Patient characteristics are shown in Supplementary Table S1. All these patients were followed up every 2 years for a total of 6 years to check for the development of NMSC. The urinary As creatinine ratio (UACR) was also recorded at baseline and
follow-up visits. A skin biopsy was performed on patients who had reasonable clinical suspicion of BD or NMSC, including SCC and BCC. All these patients consented to a biopsy. During this follow-up period, 14.7% of the males and 7.5% of the females (p < 0.0001, chi-square test) had a skin biopsy performed. Histopathological examination was performed by two pathologists independently. For the pathological diagnosis, a structured reporting form was used (see Supplementary Form S1). A skin biopsy was performed on a total of 727 participants (m = 417, f = 310). Among them, 37.7% of the biopsies showed BD, 2.9% had invasive SCC, 16.1% showed BCC, and the rest (43.3%) showed arsenical keratosis or other skin conditions [32]. Thus, among the As-exposed study population, 2.2% of the male and 1.3% of the female participants developed BCC, while 0.4% of the male and 0.2% of the female participants developed SCC over the six-year follow-up. Arsenic Exposure Measurement We measured the UACR at baseline and at the 2-year, 4-year, and 6-year follow-ups as a measure of As exposure. The urinary total As concentration was measured by inductively coupled plasma mass spectrometry [52]. Urinary creatinine was measured by a colorimetric method based on the Jaffe reaction described by Heinegard and Tiderstrom [53]. The urinary As was measured from a spot urine collection. To take the hydration status into account, we used the UACR as a measure of As exposure. The log2-transformed UACR showed a strong correlation with the log2-transformed well water As concentration (r = 0.66) [54].
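The UACR normalisation described above is simple arithmetic: divide urinary arsenic (µg/L) by urinary creatinine (g/L) to obtain µg As per g creatinine, then log2-transform for correlation analyses. A minimal sketch, using illustrative input values that are not study data:

```python
import math

def uacr(urinary_as_ug_per_l: float, creatinine_g_per_l: float) -> float:
    """Urinary arsenic-creatinine ratio in ug As per g creatinine."""
    return urinary_as_ug_per_l / creatinine_g_per_l

# Illustrative (hypothetical) spot-urine values: 96 ug/L As, 0.5 g/L creatinine
ratio = uacr(96.0, 0.5)        # 192 ug/g creatinine, the cut-off used later in the paper
log2_ratio = math.log2(ratio)  # log2-transformed value used for correlation with well water As
print(ratio, round(log2_ratio, 2))
```

Dividing by creatinine adjusts for hydration status, since a spot urine sample's arsenic concentration varies with urine dilution.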
Nucleic Acid Extraction DNA was extracted from these RNAlater-preserved tissues using a Quick-DNA/RNA Microprep Plus kit (Zymo Research, Irvine, CA, USA) following the manufacturer's protocol. After taking the samples out of the RNAlater, the tissue was washed with a 1x PBS buffer and then submerged into the DNA/RNA shield before simultaneous DNA and RNA extraction. RNA and DNA quantification and the 260/280 ratio were checked by NanoDrop 1000. Somatic Mutation Assay For the detection of somatic mutations, we used the AmpliSeq for Illumina Comprehensive Cancer Panel (Illumina Inc., San Diego, CA, USA). The total gene list is presented in Supplementary Table S2. There were 4 plates with 4 different sets of primer pools. We made 4 identical plates of DNA with 10 ng input DNA each for 1 pool plate. DNA target regions were amplified in all 4 plates, and then the amplified DNA was pooled together in corresponding wells on one plate. In the next step, primer dimers and unused amplicons were digested. After that, i7 and i5 adapters were ligated to the amplicon ends. These products were cleaned up by magnetic beads and then amplified a second time. The library was then cleaned up to obtain the final library for sequencing. After pooling, the final library was measured with a fluorometer. Library size and quantity were also measured by a fragment analyzer. Sequencing was performed on the Illumina HiSeq platform (San Diego, CA, USA). Gene Expression Assay For RNA sequencing on the Illumina platform, we used Lexogen's QuantSeq 3′ mRNA-Seq kit (Vienna, Austria) for library preparation as described previously [32]. The final library was measured by a fluorometer, and after pooling, qPCR was performed to quantify the input library for sequencing on the Illumina HiSeq platform (San Diego, CA, USA). This study was approved by the Institutional Review Board of The University of Chicago Medicine (protocol code IRB19-0724, approved on 24 September 2019).
Statistical Method Mutation detection: The FASTQ Illumina sequencing data were initially processed by CLC Genomics Workbench 23 (https://digitalinsights.qiagen.com/ (accessed on 26 April 2023)). After adapter sequence trimming, default parameters were used for QC. The minimum length was kept at 40. A Targeted Amplicon Sequencing (TAS) module for paired samples was used, where we used the tissue sample and the corresponding blood DNA sample as a pair. In this module, the reads were initially mapped to the Homo sapiens reference sequence hg19, and the variants were detected using structural variant caller v1.2 (Biomedical Genomics Analysis 23.1). Variants found in normal samples (in our case, the blood) were removed from the variants detected in the tissue sample. The in-built workflow removed the germline variants found in the public databases (dbSNP, 1000 Genomes Project, dbSNP common, and HapMap) that were found in the mapped reads. Also, variants outside the target region were removed, as they are likely to be false positives due to non-specific mapping of sequencing reads. The parameters for low-frequency variant detection were set at a minimum coverage of 10, a minimum count of 2, and a minimum frequency of 2%. Next, the remaining variants (the "somatic variants") were annotated with gene names, amino acid changes, conservation scores, and information from ClinVar (variants with clinically relevant associations). We used a variant calling quality score of Q60 as the cut-off for the list of somatic mutations.
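The filtering thresholds above (minimum coverage 10, minimum count 2, minimum frequency 2%, quality cut-off Q60) can be sketched as a simple predicate. This is not the CLC Genomics Workbench workflow itself; the record field names and variant values below are hypothetical illustrations of the same logic:

```python
# Hypothetical variant records; field names are illustrative, not CLC's schema.
variants = [
    {"gene": "PTCH1",  "coverage": 250, "alt_count": 12, "qual": 200},
    {"gene": "TP53",   "coverage": 8,   "alt_count": 3,  "qual": 150},  # fails: coverage < 10
    {"gene": "NOTCH1", "coverage": 400, "alt_count": 5,  "qual": 45},   # fails: freq < 2%, qual < Q60
    {"gene": "SYNE1",  "coverage": 300, "alt_count": 1,  "qual": 120},  # fails: count < 2
]

def passes_filters(v, min_cov=10, min_count=2, min_freq=0.02, min_qual=60):
    """Apply the low-frequency variant detection thresholds from the text."""
    freq = v["alt_count"] / v["coverage"]
    return (v["coverage"] >= min_cov and v["alt_count"] >= min_count
            and freq >= min_freq and v["qual"] >= min_qual)

kept = [v["gene"] for v in variants if passes_filters(v)]
print(kept)  # only the PTCH1 record clears all four thresholds
```

In the actual pipeline these filters are applied after germline subtraction (blood as the paired normal), so what survives is the candidate somatic-mutation list.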
Transcriptome data were processed using Partek Flow (version 10.0) (https://www.partek.com/partek-flow/, accessed on 11 November 2022). A STAR aligner was used for alignment, and the final gene count data were expressed as counts per million reads (CPM) and were log2-transformed for the ANOVA using Partek Genomics Suite (version 7.0) (https://www.partek.com/partek-genomics-suite/, accessed on 22 April 2024). For statistical analyses, IBM SPSS Statistics version 29 was used. We also used Partek Genomics Suite for ANOVA and Gene set ANOVA as described in a previous paper [32]. In the GO enrichment analysis, we tested if the differentially expressed genes (as per the set criteria) fell into a Gene Ontology category more often than expected by chance. We used a chi-square test for comparison. The negative log of the p-value for this test was used as the enrichment score. In addition to GO enrichment analysis, we also examined the differential expression of "gene sets" using Gene Set Enrichment Analysis (GSEA) [55]. Given an a priori-defined set of genes "S" (sharing the same GO category or the KEGG pathway), the goal of GSEA was to determine whether the members of "S" were randomly distributed throughout the ranked list or primarily found at the top or bottom. For further statistical comparison of the magnitudes of the differential expression of the "Gene set" in the absence or presence of a factor (mutation), we used "Gene set ANOVA", which offers the introduction of interaction terms into the model. Gene set ANOVA is a mixed model ANOVA to test the expression of a set of genes (sharing the same category or functional group) instead of an individual gene in different groups. The analysis is performed at the gene level, but the result is expressed at the level of the Gene set category by averaging the member genes' results. The equation for the model is as follows: Y = µ + T + G + T×G + T×Mut + ε, where Y represents the expression status of a Gene set category, µ is the common effect or average expression of
the Gene set category, T is the tissue-to-tissue (tumor/normal) effect, G is the gene-to-gene effect, T×G is the differential pattern of gene expression in different tissue types, T×Mut is the interaction term, and ε represents the random error. Results The diagnoses of BCC (n = 26) and SCC (n = 6) were confirmed by skin biopsy. There was consensus between the two pathologists for all 32 cases. Somatic Mutation Considering the fact that even healthy-looking skin tissue is also exposed to sunlight and may develop UVR-induced somatic mutations, for each tissue sample, we compared the tissue DNA with the corresponding whole blood DNA (a proxy for germline) from the same patient for the detection of somatic mutations. In 32 tumor tissue samples, we found a total of 6829 somatic mutations (in 3385 unique genomic loci, see Figure 1A). In 16 healthy skin tissues, we found a total of 2530 somatic mutations in 1470 unique genomic loci (see Figure 1A). Some of the variant metrics are shown in Table 1. The median number of somatic mutations per BCC sample was 148; for SCC, it was 180.5 per sample; and for healthy skin tissue, it was 140 per sample (p = 0.73, Kruskal-Wallis test). Considering the target sequence region of 1.7 Mb, the calculated median tumor mutation burden (TMB) was 87 mutations/Mb for BCC tissue, 106 mutations/Mb for SCC tissue, and 82 mutations/Mb for healthy skin tissue (p = 0.58, Kruskal-Wallis test). When we compared the TMB in BCC, SCC, and healthy skin tissue by sex, the difference was not statistically significant, although the TMB appeared to be higher in females.
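The TMB figures above follow directly from dividing the median per-sample mutation count by the 1.7 Mb target region. A small check of that arithmetic, using the medians reported in the text:

```python
TARGET_MB = 1.7  # size of the sequenced target region in megabases (from the text)

def tmb_from_median(median_mutations: float) -> int:
    """Tumor mutation burden (mutations/Mb) from a per-sample median mutation count."""
    return round(median_mutations / TARGET_MB)

# Reported per-sample medians: BCC = 148, SCC = 180.5, healthy skin = 140
print(tmb_from_median(148), tmb_from_median(180.5), tmb_from_median(140))
# → 87 106 82, matching the mutations/Mb values reported in the text
```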
We generated a list of somatic mutations in NMSC cases (BCC and SCC) and a list of somatic mutations in healthy skin tissue from an independent set of participants. Then, by comparing the somatic mutations in tumor tissue and healthy skin tissue, we looked for (a) NMSC-associated somatic mutations, (b) somatic mutations potentially associated with NMSC, which are found in tumor tissue as well as in healthy skin tissue, and (c) somatic mutations present only in healthy skin tissue. The overlap of the total unique somatic mutation loci between tumor tissue and healthy skin tissue and the mutations stratified by type (SNV, Del, and INS) are shown in Figure 1A. Among the SNVs, irrespective of BCC, SCC, or healthy skin tissue, the most common types of substitution were C > T (median 15.7% of substitutions/sample, 95% CI 1.6-37.8%) and G > A (median 15.8% of substitutions/sample, 95% CI 0-42.7%), without statistical difference between the tissue types (Supplementary Figure S1). This high prevalence of C > T + G > A substitutions is consistent with the mutational signature for NMSC usually related to sunlight exposure. NMSC-Associated Somatic SNVs There were a total of 1611 somatic SNVs (representing 1440 unique SNV loci in 361 genes) detected only in NMSC samples (total n = 32, of which BCC = 26, SCC = 6) and not in healthy skin tissue (Figure 1B). All of these were in the gene coding regions; 321 were found in the ClinVar database, 628 were also found in TCGA skin cancer samples and reported in the COSMIC database, and 604 were "non-synonymous" SNVs. Among these 1611 NMSC-associated somatic SNVs, 1344 SNVs (covering 1222 loci in 344 genes) were found in BCC and the other 267 SNVs (covering 261 loci in 153 genes) were found in SCC. Some 43 unique loci were common to BCC and SCC but not found in normal skin tissue. The lists of the top 20 genes harboring these BCC-associated and SCC-associated somatic mutations are shown in Figures 2A and 2B, respectively.
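The three-way comparison in (a)-(c) above is a set partition over mutation loci. A minimal sketch with toy locus identifiers (the study's actual sets contained 3385 tumor loci and 1470 healthy-skin loci):

```python
# Hypothetical locus identifiers for illustration only.
tumor_loci   = {"chr9:98209594", "chr17:7577120", "chr6:152443550"}
healthy_loci = {"chr17:7577120", "chr1:115256529"}

nmsc_associated = tumor_loci - healthy_loci   # (a) found only in tumor tissue
shared          = tumor_loci & healthy_loci   # (b) found in both tissue types
healthy_only    = healthy_loci - tumor_loci   # (c) found only in healthy skin

print(sorted(nmsc_associated), sorted(shared), sorted(healthy_only))
```

Subtracting the healthy-skin set is what distinguishes this analysis from a conventional tumor-versus-germline comparison, since sun-exposed healthy skin already carries UVR-signature somatic mutations.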
Somatic Mutation SNVs Common in NMSC and Healthy Skin Tissue We detected 277 somatic SNVs (representing 139 unique SNV loci in 95 different genes) found in both NMSC and healthy skin (Figure 1B). A total of 96 of them were reported in ClinVar; 66 were also found in TCGA skin cancer samples and reported in the COSMIC database; and 34 were "non-synonymous" SNVs. The list of the top twenty genes harboring these somatic mutations potentially associated with NMSC is shown in Figure 3A. Somatic Mutation SNVs Detected Only in Healthy Skin Tissue We detected 426 somatic SNVs (representing 401 unique SNV loci in 192 different genes), which were found only in healthy skin and not in any NMSC tissue (see Figure 1B). A total of 87 of them were reported in ClinVar; 146 were also found in TCGA skin cancer samples and reported in the COSMIC database; and 83 were "non-synonymous" SNVs. The list of the top twenty genes harboring these healthy-skin-only somatic mutations is shown in Figure 3B. Association of Somatic Mutation and Differential Gene Expression in NMSC In the next step, we asked if the absence or presence of somatic mutation(s) in the tissue showed a difference in differential gene expression patterns in tumor tissue compared to healthy skin tissue. Considering the fact that non-synonymous SNVs (causing amino acid changes) may have functional effects, we restricted the analysis to NMSC-associated non-synonymous SNVs only. So, a tumor tissue sample was only considered a mutant for PTCH1 (for example) if that sample harbored at least one of the non-synonymous SNVs in PTCH1, but not if it harbored only some other SNVs in the PTCH1 gene.
Gene Level Analysis A comparison of gene expression data between BCC (n = 26) and healthy skin tissue (n = 16) showed that 118 genes were differentially expressed by at least a fold change (FC) of 3 at an FDR level of ≤0.05 (see Supplementary Table S3). Gene Ontology (GO) or enrichment analysis of this gene list is shown in Figure 4. The list was enriched in genes involved in "Basal cell carcinoma", "Hedgehog signaling pathway", and "pathways in cancer". It may be mentioned that GSEA analysis (see Supplementary Table S4) also confirmed the enrichment of these pathway genes. Next, in the ANOVA model(s), we entered an interaction term "tissue (0 = healthy, 1 = BCC) × non-synonymous mutation in PTCH1 (0 = no mutation, 1 = mutation)" to identify the genes that had a different magnitude of differential expression in the BCC tissue in the absence or presence of the mutation. The differential expression of these same 118 differentially expressed genes in the absence (n = 14) and the presence of the non-synonymous somatic mutation (n = 12) in the PTCH1 gene, compared to the same normal skin (n = 16), is presented in Table 2 along with the interaction p-values. In the combined analysis, the PTCH1 gene was overexpressed in BCC tissue by FC = 4 (95% CI 2.2-7.2) compared to healthy skin tissue (see Supplementary Table S3); but, in the absence of any non-synonymous somatic mutation in PTCH1 in BCC tissue (n = 14), the FC was 2.8 (95% CI 1.5-5.3), and in the presence of a non-synonymous somatic mutation in PTCH1 in the BCC tissue (n = 12), the FC was 6 (95% CI 3.1-11.8) (interaction p = 0.03, see Table 2). The results showed that the magnitude of differential expression for 40 out of these 118 genes was statistically different if the tumor had a non-synonymous mutation in PTCH1 (see the interaction p column in Table 2). In fact, the effect of this somatic mutation in PTCH1 was more pronounced for other genes.
Table 2. The top 118 differentially expressed genes in BCC tissue compared to healthy skin tissue by at least an FC of 3 at an FDR level of 0.05. The comparison of FCs (95% CI) in BCC tissue without a PTCH1 somatic mutation and BCC tissue with a PTCH1 mutation is shown. The genes are arranged in the same order as in Supplementary Table S3, where the genes are arranged in ascending order of their p-value for the combined analysis. The significant interactions are shown in red. Similarly, we asked whether the absence or presence of somatic mutations in NOTCH1 (Supplementary Table S5), SYNE1 (Supplementary Table S6), PKHD1 (Supplementary Table S7), and EP400 (Supplementary Table S8) in BCC tissue was associated with a difference in the magnitude of the differential expression of genes. The results show that the non-synonymous somatic mutations of each of these genes have a significant association with functional effects in terms of differential gene expression. In the next step, we wanted to see the effect at the gene pathway level. Pathway Level Analysis In this step, we examined if a set of genes (e.g., in a KEGG pathway) was differentially expressed in NMSC tissue compared to normal skin tissue and if the magnitude of differential expression in NMSC compared to normal was significantly different in the absence or presence of non-synonymous somatic mutations; first, we looked at the mutation of the PTCH1 gene. Table 3 shows the BCC-associated non-synonymous somatic mutations in the PTCH1 gene found only in tumor tissue but not in healthy skin tissue.
Association of the Somatic ns Mutation in the PTCH1 Gene and Dysregulated Pathways in BCC The detailed results from Gene set ANOVA for all the KEGG pathways are presented in Supplementary Table S9. Compared to healthy skin tissue, in the BCC samples without the PTCH1 non-synonymous somatic mutation (n = 14), the genes in the "Basal cell carcinoma pathway" were overexpressed by FC 1.62 (95% CI 1.34-1.96), whereas in BCC samples with the PTCH1 non-synonymous somatic mutation (n = 12), the same pathway genes were overexpressed by FC 3.95 (95% CI 3.24-4.82). This shows a significant association (interaction p = 2.48 × 10−17) between PTCH1 mutation status and the overexpression of the "basal cell carcinoma pathway". The other major pathways that are markedly overexpressed in the presence of the PTCH1 mutation include the "hedgehog signaling pathway" and the "TGF-beta signaling pathway" (see Figure 5). We also conducted the rank-based analysis, GSEA, for patients without the PTCH1 ns mutation (Supplementary Table S10) and for patients with the PTCH1 ns mutation (Supplementary Table S11). It was interesting to note that in the GSEA analysis, too, many of the pathways found in the above-mentioned Gene set ANOVA were seen to be more significantly enriched in the presence of the PTCH1 mutation. Association of the Somatic ns Mutation in the NOTCH1 Gene and Dysregulated Pathways in BCC In the same way, we looked at the NOTCH1 mutation status (nineteen BCC without and seven BCC with the NOTCH1 mutation) and compared it to the same healthy skin tissue (n = 16). The major pathways that were more markedly overexpressed in the presence of the NOTCH1 mutation include the "IL-17 signaling pathway", "peroxisome related genes", the "NF-kappa B signaling pathway", and the "TGF-beta signaling pathway" (see Figure 6).
Association of the Somatic ns Mutation in SYNE1 and PKHD1 Genes and Dysregulated Pathways in BCC For BCC, we also looked for associations with other frequently mutated genes, such as SYNE1 (see Figure 7) and PKHD1 (Supplementary Table S12). Association of the Somatic ns Mutation in the TP53 Gene and Dysregulated Pathways in SCC The detailed analysis for all the KEGG pathways is presented in Supplementary Table S13. Compared to healthy skin tissue, in the SCC samples without the TP53 ns somatic mutation (n = 3), the genes in the "p53 signaling pathway" were significantly overexpressed by FC 2.21 (95% CI 1.64-2.98), whereas in SCC samples with the TP53 ns somatic mutation (n = 3), the same pathway genes were somewhat under-expressed by FC −1.32 (95% CI −1.78 to 1.01, see Figure 8). This shows a significant association (interaction p = 5.53 × 10−8) between TP53 mutation status and the "p53 signaling pathway", where the TP53 mutation is associated with impaired tumor suppression activity. Gene-Environment Interaction: Interaction of the Somatic Mutation and Degree of As Exposure on Gene Expression Pathways All our participants were exposed to As, so to explore the effect of As, we compared a baseline UACR below 192 µg/g creatinine with one above 192 µg/g creatinine. Using the somatic mutation status (no vs. yes) and the baseline UACR level (≤192 µg/g creatinine or low vs.
>192 µg/g creatinine or high), we divided the BCC patients into four categories (see Table 4) and compared the expression of the different gene pathways of each group of tissues to the same 16 healthy skin tissues. The overall FCs (95% CI) of the pathways in each group are presented in Table 4. The results show that the presence of the PTCH1 somatic mutation increases the magnitude of the differential expression of genes in the "basal cell carcinoma pathway" and "hedgehog pathway" in both the low and high As exposure groups. They also show that high As exposure decreases the magnitude of the differential expression in both the absence and the presence of the PTCH1 somatic mutation. The differences in the magnitudes of differential expression were statistically significant, as indicated by the interaction p-values. In the same way, we also checked the interaction of the NOTCH1 somatic mutation and As exposure. Table 5 shows how the NOTCH1 somatic mutation and As exposure status influence immune response pathways like the "IL-17 signaling pathway", "Antigen processing and presentation", and the "p53 signaling pathway". These results show a similar trend: the somatic mutation increases the differential expression, and high As exposure decreases the magnitude of the differential expression of these pathways.
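The 2×2 grouping above combines a binary mutation flag with the 192 µg/g creatinine UACR threshold. A minimal sketch of that group assignment, with hypothetical patient records (the IDs, UACR values, and mutation flags below are illustrative only):

```python
# Toy patients; values are illustrative, not study data.
patients = [
    {"id": 1, "uacr": 120, "ptch1_ns_mut": True},
    {"id": 2, "uacr": 310, "ptch1_ns_mut": False},
    {"id": 3, "uacr": 88,  "ptch1_ns_mut": False},
    {"id": 4, "uacr": 250, "ptch1_ns_mut": True},
]

UACR_CUTOFF = 192  # ug/g creatinine, the baseline threshold used in the study

def group(p) -> str:
    """Assign a patient to one of the four exposure-by-mutation categories."""
    exposure = "high" if p["uacr"] > UACR_CUTOFF else "low"
    mutation = "mut" if p["ptch1_ns_mut"] else "wt"
    return f"{exposure}-As/{mutation}"

labels = {p["id"]: group(p) for p in patients}
print(labels)  # {1: 'low-As/mut', 2: 'high-As/wt', 3: 'low-As/wt', 4: 'high-As/mut'}
```

Each resulting group's expression profile is then contrasted against the same 16 healthy skin tissues, which is what lets the mutation effect and the exposure effect be read off separately.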
Discussion

While UVR exposure and skin sensitivity are known risk factors for NMSC, especially among Caucasians, As exposure through contaminated drinking water may be a major risk factor in other populations. To our knowledge, our current study presents the most comprehensive molecular profiling (more than 400 cancer-related genes) for NMSC in a non-Caucasian population exposed to As. We are unaware of any previous study on NMSC that has considered the fact that apparently healthy, non-lesional human skin exposed to sunlight actually harbors somatic mutations. Our study addressed this fact and identified NMSC-associated somatic mutations that are not found in healthy skin tissue. We acknowledge the weakness of the small sample size and the fact that it would have been ideal if we could have sequenced normal-tumor pairs for all the patients.

A variant seen in a given tissue that is not seen in germline DNA (blood may be used as a proxy) is considered a "somatic mutation". Because of exposure to UV rays from sunlight, even healthy skin tissue may show a multitude of such somatic mutations resembling a UVR signature. Therefore, unlike many somatic mutations seen in other internal organ cancers, the detection of a somatic mutation in NMSC tissue does not necessarily mean that the detected mutation is a "cancer-associated somatic mutation". Our study confirms this fact, and by excluding those somatic mutations in healthy skin, we could identify NMSC-associated somatic mutations. Looking at the top 20 genes showing somatic mutations in our study in an As-exposed population from Southeast Asia, we could see that many of the genes are also mutated in NMSC patients from Caucasian populations worldwide. We did not have patients who were not exposed to As and cannot comment on the cause of mutation or NMSC pathogenesis. Unfortunately, we also did not have any tissue left for measuring As content in the tumor tissue, which could have shed some light on whether As exposure was associated
with NMSC pathogenesis. Unlike some other cancers, like colorectal or thyroid cancer, where a single-point mutation (like KRAS rs#112445441 or BRAF V600E) is found in a large proportion of samples, in NMSC there is no single mutation that is seen in a large number of samples. Rather, sequencing of large genomic regions is needed to detect somatic mutations in a given gene (e.g., PTCH1 or NOTCH1) because the mutations are at different locations in different samples. But it is interesting to note that, regardless of the difference in position and amino acid change (e.g., c.3583 A>T causing a Thr1129Ser change in one sample and c.1313C>A causing a Pro504Gln change in another sample), when the samples were grouped together based on BCC-associated PTCH1-negative or -positive non-synonymous somatic mutations, we see a marked difference in the differential expression of many relevant gene pathways. This allows us to utilize these genomic markers for the individualization of targeted therapy if and when it is needed. For example, hedgehog signaling pathway inhibitor small molecules (vismodegib, sonidegib) may be most effective in BCC patients with PTCH1 mutations who are not exposed to high As, whereas the same therapy may show the least or no response in BCC patients without PTCH1 somatic mutations who are exposed to high As (see Table 4 and Figure 8). Currently, both vismodegib and sonidegib are only approved for metastatic or locally advanced BCC [56,57]. But PTCH1 mutation status may be used for selecting patients for individualized targeted therapy. In the same line, our data suggest a molecular basis for the potential use of IL-17 inhibitors in BCC patients with low As exposure and NOTCH1 somatic mutations (see Table 5). Transcriptomic data were not strongly suggestive of great potential for immune checkpoint inhibitors in these BCC patients; however, they suggested a lower chance of platinum drug resistance in BCC patients with high UACR compared to high platinum drug resistance
potential in patients with lower UACR [32].

In a study utilizing ultra-deep sequencing of 74 cancer genes across 234 biopsies of sun-exposed eyelid epidermis from four individuals, Martincorena et al. looked for somatic mutations in normal skin [58]. The burden of somatic mutations averaged two to six mutations per megabase per cell, similar to that seen in many cancers, and exhibited characteristic signatures of UVR exposure. There was a predominance of C>T mutations and high rates of CC>TT dinucleotide substitutions. NOTCH1 was the most frequently mutated gene, and 20% of normal skin cells carried a driver mutation in NOTCH1 [58]. In SCC of the skin and other organs, both copies of NOTCH1 are frequently inactivated, typically through point mutation combined with copy number alteration. Other frequently mutated genes include RBM10, FGFR3, CDKN2A, and NOTCH2 [58].

We found few studies where investigators used fresh-frozen BCC tissue to look at somatic mutations. In one study, fresh-frozen BCC tumor tissues were obtained from 191 patients, and corresponding normal-appearing skin was available from 115 patients [59]. PCR and Sanger sequencing were performed, and they detected 137 PTCH1 mutations in 105 tumors with some loss of heterozygosity. For TP53, 31% of BCC carried mutations, mostly of the missense type. TERT and DPH3 promoter mutations were present in 113 and 73 cases, respectively. Gene expression analysis found statistically significantly higher TERT mRNA levels in BCC tumors with TERT promoter mutations compared to the tumors without mutations (p < 0.001) [59].
In another study, whole genome exome sequencing was performed on a total of 27 pairs of tissue (tumor and normal adjacent healthy skin) [60]. They identified 84,571 cancer sample-specific somatic mutations, of which 42,380 (50.1%) were located in protein-coding regions, and the remaining 42,191 (49.9%) were located in non-coding regions. They showed the relation between the different pathways and mutations, like hedgehog pathways (PTCH1, GLI2, SMO), MYCN regulation genes (MYCN, MTOR, DYRK3, AMBRA1), filaggrin genes (FLG, FLG2), and NOTCH genes (NOTCH1, NOTCH2, NOTCH3). They also detected mutations in the non-coding regions of BAD, DHODH, SPHK2, CHCHD2 (also known as MNRR1), and RPS27. Promoter mutations of TERT and DPH3 were also detected. Mutations were also found in TP53, PTPRD, LATS1, and ARID1A [60]. They found mutations in TNFAIP2, which encodes a multifunctional protein playing a role in angiogenesis, inflammation, cell migration and invasion, cytoskeleton remodeling, and cell membrane protrusion formation. In the coding region, they also detected somatic mutations in EZH2 and KNSTRN [60]. Whole-exome sequencing of secondary tumors arising from nevus sebaceous revealed additional genomic alterations in addition to RAS mutations [61].
The relation of PTCH1 mutations and mRNA expression was studied in twenty cases of nevoid basal cell carcinoma (Gorlin) syndrome, an autosomal dominant disorder, using cancer tissue, surrounding healthy tissue, blood DNA, and skin tissue from four healthy people. They detected twelve genomic and five somatic mutations of the PTCH1 gene. Quantitative PCR was used to determine the mRNA expression levels of the PTCH1, SMO, GLI3, and CCND1 genes in relation to the PTCH1 mutation. The mRNA expression was highest in BCC tissue, followed by surrounding healthy tissue and the skin tissue of healthy people [63]. They also showed the effect of PTCH1 mutations on gene expression. In surrounding healthy tissue with PTCH1 mutations, the mRNA expression was lower for the PTCH1 and GLI3 genes. On the other hand, they found higher SMO and CCND1 mRNA expression in the same group. In BCC tumors with both germline and somatic mutations of PTCH1, expression levels of PTCH1, SMO, and GLI3 were higher compared to those with germline mutations only, but CCND1 levels were lower in that group [63].
The list of somatic mutations detected in BCC and SCC depends on the number of target genes sequenced in a particular study, the use of whole exome sequencing or whole genome sequencing, and the strictness of the criteria for the detection of a somatic mutation. However, some of the most frequently mutated genes are common among the published studies. In that respect, our current study confirms the findings of many of the past studies and also detects some new mutations. We report the NMSC-associated somatic mutations after excluding the somatic mutations seen in healthy skin tissue, and this study was performed in a Bangladeshi population exposed to As. We acknowledge the fact that we did not perform ultra-deep sequencing to capture rare variants, so we might have missed very rare variants (below 2% frequency). On the other hand, the somatic mutations detected in our study had reasonably high frequency, giving us confidence that the reported mutations are real. Importantly, the associations of the non-synonymous mutations within the frequently mutated genes with the differential expression of genes and major gene pathways further underscore the importance of the findings.
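The exclusion step used in this study (a tumor-detected variant counts as NMSC-associated only if it is absent from the healthy-skin samples) is, at its core, a set difference over variant loci. A minimal sketch with hypothetical loci, not study data:

```python
# Hypothetical variant loci as (gene, coding position, alt allele) tuples.
tumour_loci = {
    ("PTCH1", 3583, "T"),    # seen only in tumour tissue
    ("NOTCH1", 1313, "A"),   # also present in healthy skin (UVR background)
    ("TP53", 743, "A"),      # seen only in tumour tissue
}
healthy_loci = {
    ("NOTCH1", 1313, "A"),
    ("FGFR3", 746, "T"),
}

# NMSC-associated mutations = tumour mutations not seen in healthy skin.
nmsc_associated = tumour_loci - healthy_loci
```

In the actual study this comparison was carried out across all healthy-skin samples and all variant types (SNVs, insertions, deletions), as summarized in the Venn diagrams of Figure 1.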
Surgical excision of the NMSC is the first line of management for most cases. However, for some recurrent or locally advanced or metastatic cases, targeted therapy may be considered. Keeping that in mind, we analyzed the molecular genomic data in a manner that helps understand the pathogenesis and the utilization of the mutation data for the potential selection of patients for some targeted therapies in the future. We acknowledge that the small sample size did not allow us to test such associations for low-frequency mutations and gene pathways. In the future, we plan to carry out a larger study utilizing the already available biological samples and the clinical follow-up data from the parent BEST study.

interaction with SYNE1; Supplementary Table S7: Gene interaction with PKHD1; Supplementary Table S8: Gene interaction with EP400; Supplementary Table S9

Figure 1. Venn diagram showing the overlap of unique somatic mutation loci among NMSC tissue (in pink) and normal skin tissue (in light green). All types of mutations are shown in the upper left (A), SNVs are shown in the upper right (B), deletions (DELs) are shown in the lower left (C), and insertions (INSs) are shown in the lower right panel (D).

Figure 2. The top 20 genes that had BCC-associated somatic mutations (shown on the left panel (A)) and SCC-associated somatic mutations (shown on the right panel (B)). The x-axis shows the percentage of samples harboring NMSC-associated somatic mutations in a given gene. The y-axis shows the gene name.

Figure 3. The top 20 genes that had somatic mutations in NMSC and healthy skin (shown on the left panel (A)) and the top 20 genes that had somatic mutations only in healthy skin tissue (shown on the right panel (B)). The x-axis shows the percentage of samples harboring somatic mutations in a given gene. The y-axis shows the gene name.

Figure 4.
GO enrichment analysis of the top 118 differentially expressed genes (FC ≥ 3 and FDR ≤ 0.05) in BCC tissue compared to healthy skin tissue. The x-axis represents the enrichment score and the y-axis is the group of genes.

Figure 5. Differential expression of gene pathways in BCC (in blue) compared to healthy skin tissue (in red) by PTCH1 mutational status. BCC tissues with no non-synonymous somatic mutations in PTCH1 are shown on the left panel and those with mutations are shown on the right panel. Genes are arranged on the x-axis by expression level, and the log2-transformed gene count per million (CPM) is shown on the y-axis. Gene symbols for all the genes could not be shown on the x-axis.

Figure 6. Differential expression of gene pathways in BCC (in blue) compared to healthy skin tissue (in red) by NOTCH1 mutational status. BCC tissues with no non-synonymous somatic mutations in NOTCH1 are shown on the left panel and those with mutations are shown on the right panel. Genes are arranged on the x-axis by expression level, and the log2-transformed gene count per million (CPM) is shown on the y-axis. Gene symbols for all the genes could not be shown on the x-axis.

Figure 7. Differential expression of gene pathways in BCC (in blue) compared to healthy skin tissue (in red) by SYNE1 mutational status. BCC tissues with no non-synonymous somatic mutations in SYNE1 are shown on the left panel and those with mutations are shown on the right panel. Genes are arranged on the x-axis by expression level, and the log2-transformed gene count per million (CPM) is shown on the y-axis. Gene symbols for all the genes could not be shown on the x-axis.

Figure 8.
Differential expression of gene pathways in SCC (in red) compared to healthy skin tissue (in blue) by TP53 mutational status. SCC tissues with no non-synonymous somatic mutations in TP53 are shown on the left panel and those with mutations are shown on the right panel. Genes are arranged on the x-axis by expression level, and the log2-transformed gene count per million (CPM) is shown on the y-axis. Gene symbols for all the genes could not be shown on the x-axis.

Figure 9 shows the gene expression profiles of the genes in the hedgehog pathway and how their differential expressions are affected by PTCH1 mutation status and the As-exposure level.

Figure 9. Differential gene expression of hedgehog signaling pathway genes in BCC tissue (in blue) compared to healthy skin tissue (in red). BCC tissues with no somatic mutations in PTCH1 and low As exposure are shown on the left upper plot (A). BCC tissues with somatic mutations in PTCH1 and low As exposure are shown on the right upper plot (B). BCC tissues with no somatic mutations in PTCH1 and high As exposure are shown on the left lower plot (C). BCC tissues with somatic mutations in PTCH1 and high As exposure are shown on the right lower plot (D). Genes are arranged on the x-axis by expression level, and the log2-transformed gene count per million (CPM) is shown on the y-axis. Gene symbols for all the genes could not be shown on the x-axis.

Table 1. Some of the variant metrics by somatic variant type and tissue type.

Table 3. BCC-associated non-synonymous somatic mutations in the PTCH1 gene. The coding region changes and the amino acid changes are reported in multiple sources. An "*" in the amino acid change column indicates a translation termination codon.

Table 4.
Effect of the PTCH1 somatic mutation and As exposure on the differential expression of gene pathways in BCC. Results from the Gene set ANOVA analysis showing the FC (95% CI) of different pathways in BCC samples compared to healthy skin tissue. Patients were divided by PTCH1 somatic mutation status (no vs. yes) and level of As exposure, baseline UACR (low: ≤192 µg/g creatinine vs. high: >192 µg/g creatinine).

Table 5. Effect of NOTCH1 somatic mutations and As exposure on the differential expression of gene pathways in BCC. Results from the Gene set ANOVA analysis showing the FC (95% CI) of different pathways in BCC samples compared to healthy skin tissue. Patients were divided by NOTCH1 somatic mutation status (no vs. yes) and level of As exposure, baseline UACR (low: ≤192 µg/g creatinine vs. high: >192 µg/g creatinine).
2024-06-21T15:06:05.708Z
2024-06-01T00:00:00.000
{ "year": 2024, "sha1": "488536d208b83f11d266f78c83e654078c07f15f", "oa_license": "CCBY", "oa_url": "https://www.mdpi.com/2073-4409/13/12/1056/pdf?version=1718719950", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "5e89f9d4b00239e515f989f19fa21324ad496bd0", "s2fieldsofstudy": [ "Environmental Science", "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
214809443
pes2o/s2orc
v3-fos-license
Mechanisms of adhesive small bowel obstruction and outcome of surgery; a population-based study

Background This study aims to describe the mechanisms of adhesive small bowel obstruction (SBO) and its morbidity, mortality and recurrence after surgery for SBO in a defined population.

Method Retrospective study of 402 patients (240 women, median age 70 years, range 18–97) who underwent surgery for SBO in the Uppsala and Gävleborg regions in 2007–2012. Patients were followed to the last note in medical records or death.

Result The cause of obstruction was a fibrous band in 56% and diffuse adhesions in 44%. Early overall postoperative morbidity was 48% and 10% required a re-operation. Complications, intensive care and early mortality (n = 21, 5.2%) were related to age (p < 0.05) and American Society of Anesthesiologists' class (p < 0.01). At a median follow-up of 66 months (0–122), 72 patients (18%) had been re-admitted because of SBO; 26 of them underwent a re-operation. Previous laparotomies (p = 0.013), diffuse adhesions (p = 0.050), and difficult surgery (bowel injury, operation time and bleeding, p = 0.034–0.003) were related to recurrent SBO. The cohort spent 6735 days in hospital due to SBO; 772 of these days were due to recurrent SBO. In all, 61% of the cohort was alive at last follow-up. Late mortality was related to malignancies, cardiovascular disease, and other chronic diseases.

Conclusions About half of patients with SBO are elderly with co-morbidities which predispose to postoperative complications and mortality. Diffuse adhesions, which make surgery difficult, were common and related to future SBO. Overall, nearly one-fifth of patients needed re-admission for recurrent SBO. Continued research for preventing SBO is desirable.

Trial registration The study was registered at ClinicalTrials.gov (NCT03534596, retrospectively registered, 2018-05-24).

Background Almost all patients will develop intra-abdominal adhesions after abdominal surgery [1][2][3][4].
The most common consequences are: more complex subsequent surgery, abdominal pain, small bowel obstruction (SBO), and infertility. A 35% readmission rate over 10 years after abdominal surgery has been reported to be directly or possibly related to adhesions [1]. The risk of developing SBO that requires surgery varies from 1% after appendectomy [5,6] to more than 10% after colectomy [3,7]. About 20% of those who do develop SBO do so within the first postoperative year [1,2,7]. Thereafter, there is a steady increase in prevalence for up to at least 10 years after the initial operation [8]. There are conflicting reports on the optimal timing of surgery for small bowel obstruction. Most studies advocate early surgery in line with the Bologna guidelines for SBO [9,10], to minimise morbidity and mortality, although some take a more conservative approach [11]. There is a limited number of reports on direct outcome measures after SBO surgery. The studies that do exist involve relatively few patients [12] or focus on older material [13,14]. Thus, studies analysing the clinical course after surgery for SBO in larger cohorts in more recent time periods are needed. The aim of this study was to describe the mechanisms of adhesive SBO, as well as its morbidity, mortality, and recurrence after surgery for SBO in a defined population. Methods This study included adult patients operated on for adhesive SBO in the Uppsala and Gävleborg regions. There were 341,977 inhabitants in Uppsala and 276,637 in Gävleborg, which together make up 6.5% of the Swedish population. Emergency surgery was performed in three hospitals in the regions (Gävle County Hospital, Hudiksvall Hospital and Uppsala University Hospital). The time period (1 Jan 2007 to 31 Dec 2011) was selected to get a cohort of sufficient size and a follow-up of at least 5 years. 
A search for adult patients (≥18 years) possibly operated on for SBO was performed using operation and diagnostic codes (R10, K56, JAH, JAP, JFB, JFK, and JFL) in the computer-based medical record systems. Patients with specific causes for obstruction, other than adhesions, were excluded, with the remainder making up the study group (Fig. 1). Medical records were then analysed based on a protocol. Co-morbidities, type of procedure, and number of previous operations were noted. The obstructive mechanisms, as described in the operation charts, were noted and classified as fibrous bands or diffuse adhesions. Where there was a description of a combination of bands and diffuse adhesions, a closer analysis of the operation chart was performed to determine whether bands or diffuse adhesions were the main cause of obstruction. The following short-term complications (≤30 days) were analysed: anastomotic leak (diagnosis with contrast enema, CT or re-operation), abscess (diagnosis with CT or ultrasound), wound infection, wound dehiscence, cardiovascular, pulmonary, and urinary tract infection (positive culture). Cardiovascular complications included myocardial infarction and arrhythmias, while pulmonary complications included aspiration and pneumonia. ICU stays, re-operations and mortality were also noted. Complications within 30 days were classified according to the Clavien-Dindo classification, grades 2-5 [15]. Analysis of long-term complications (>30 days) was focused on hospitalisation and surgery for recurrent SBO (SBO defined as presence of at least two of the following symptoms: loss of stool or flatus, nausea/vomiting, abdominal distension, and radiology supporting SBO). Incisional hernia was noted if mentioned in the charts or in the radiological report. Data from medical records were collected until 2017. Follow-up time was calculated from the SBO operation to the last noted contact in records or death.
The study was approved by the local ethical committee at Uppsala University (Dnr 2015/196) and registered at ClinicalTrials.gov (NCT03534596, 2018-05-24).

Surgery The three hospitals had similar clinical routines, but there was no written protocol for management of SBO. Patients without signs of strangulation were initially treated with a nasogastric tube, intravenous fluids and analgesics. Patients with signs of strangulation underwent urgent surgery; the others were subjected to radiology. The surgeon managing the patient made the decision to operate and determined the timing of the operation, based on his or her judgement. Laparoscopic surgery for SBO was not in practice during the study period, so all patients had open surgery through midline incisions. Data were collected on type of procedure (division of band, adhesiolysis, bowel resection, by-pass, or stoma), anastomoses, bowel injuries, bleeding, and operating time.

Statistical analysis Demographic and clinical data were analysed using χ2 tests for categorical variables and t-tests for continuous variables, or Mann-Whitney U-tests in the case of non-normal distributions. Fisher's exact test was used instead of the χ2 test when low expected counts were observed. Numbers are given as means and ranges unless otherwise stated. Data were analysed using the statistical package SPSS® version 25 for Windows® (SPSS, Chicago, IL, USA). A two-sided P value ≤0.05 was considered statistically significant. No power calculation was made.

Patients The search of medical records resulted in 3326 patients possibly operated on for SBO. After exclusions, a study group of 402 patients was identified (Fig. 1). In all, 240 (60%) were women and 162 (40%) were men. The median age was 70 years (range 18-97 years). More than half of the patients had some co-morbidity (Table 1), with the most common being cardiovascular disease (hypertension and other arterial disease, cardiac disease, cerebro-vascular disease).
The preoperative American Society of Anesthesiologists' (ASA) classification was 3 or 4 for 48% (Table 1). Thirty-seven patients (9%) had a previous malignant abdominal disease, with 23 of them having undergone radiation therapy to the pelvis as part of cancer treatment (rectal, prostate, or gynaecological cancer). Abdominoperineal resection had been performed in seven, all of whom had undergone radiation therapy. There were no signs of recurrent cancer as a cause of recurrent SBO. Fifty patients (12%) had not undergone any abdominal surgery before the index operation, whereas 189 (47%) had one previous abdominal operation (Table 2). Nineteen of those with one previous abdominal operation had had it done laparoscopically; one (5%) of them later developed SBO and was admitted for 2 days of conservative management. Previous surgery for SBO had been performed in 30 patients (7.5%). There were no differences in demographic data or previous surgery between the two regions.

Surgery The mean annual number of SBO operations was 116 during the study period; adhesive SBO was the dominating cause, leading to 80 operations per year. Adhesive SBO surgery was performed on 13 per 100,000 inhabitants and year, and a total of 5963 days were recorded for hospital stays related to index surgery for SBO. Almost all patients (391/402, 97%) underwent a CT scan and most also had a contrast follow-through. All but one of the patients had some diagnostic radiology before surgery. Nearly all patients (n = 384, 96%) underwent surgery after being admitted to the emergency department, but 18 (5%) patients underwent surgery in the postoperative phase after other abdominal surgery. Overall, patients were operated on at a mean of 3.4 days (0-86) after hospitalisation or SBO diagnosis. A total of 78 (19%) patients underwent an operation on the day of admission and another 120 (30%) were operated on within 2 days of hospitalisation.
Delaying surgery more than 2 days after admission was related to more re-operations (15% vs. 8%, p = 0.046) compared to those operated on in the first 2 days. Similarly, surgery later than 4 days after admission was related to more re-operations (17% vs. 9%, p = 0.024) and more wound dehiscence (9% vs. 4%, p = 0.037). Beyond that, the timing of surgery was not related to mortality, short- or long-term complications, or bowel resection frequency. Nearly one-third (n = 122) had a period of conservative treatment longer than 3 days. Bowel injury was common, see Table 3. In all, 139 patients (35%) had bowel resections (137 small bowel resections and 5 ileocecal resections, some had both). The resection length varied from 2 to 300 cm. Anastomotic leak was noted in 4 of the 139 (2.9%) patients with an anastomosis. In another eight patients, a by-pass was performed, most often for dense diffuse adhesions or due to difficult anatomy.

Adhesive mechanisms The most common obstructing mechanism was a fibrous band, which was found in 226 (56%) of the patients. This was also the dominating cause of SBO in patients without a history of laparotomy (44/50, 88%). Patients with diffuse adhesions had more co-morbidity, more previous abdominal surgery, and a longer period of conservative treatment before being operated on, compared with patients with band adhesions (Table 4). A longer operation time, more bleeding and more bowel injuries were observed in the diffuse adhesion group (Table 4). The postoperative stay and total length of stay were significantly longer in the diffuse adhesion group. Anastomotic leaks and unspecified use of antibiotics were more frequent in this group, and more of these patients died during follow-up (Table 4). Furthermore, diffuse adhesions were related to more admissions for SBO and longer hospital stay during follow-up, but not to surgery for SBO. Fifty-one (13%) patients needed intensive care and they had an average stay of 6 (1-40) days.
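The timing comparisons above (e.g., re-operations in 15% vs. 8% of patients) are proportions compared with the χ2 or Fisher's exact tests named in the methods. The sketch below uses hypothetical counts chosen only to mimic those percentages; the paper reports percentages and p-values, not the underlying table.

```python
from scipy.stats import chi2_contingency, fisher_exact

# Hypothetical 2x2 table: rows = surgery >2 days after admission vs. within
# 2 days; columns = re-operation yes / no. Counts mimic 15% vs. 8%.
table = [[30, 170],
         [16, 184]]

# chi2_contingency applies the Yates continuity correction for 2x2 tables.
chi2, p_chi2, dof, expected = chi2_contingency(table)

# Fisher's exact test, used when expected cell counts are low.
odds_ratio, p_fisher = fisher_exact(table)
```

With these invented counts both tests land near the p ≈ 0.046 reported in the study, but the exact agreement depends on the true group sizes, which are not given.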
Late complications (>30 days) Median length of follow-up was 66 months (0-122). Total overall mortality was 39% (n = 158), with 21 patients dying in the 30-day postoperative period and 137 patients dying beyond 30 days (Fig. 2). The mean age of these patients at index surgery was 78 years and 83 of them (52%) were above 80 years. The majority (79%) were preoperatively classified as ASA 3-4. The most common causes of death were cardiovascular diseases, malignancies, and respiratory diseases. Median age at death was 83 (range 30-98) years. For three patients, the cause of death was clearly related to SBO: two died after surgery for recurrent SBO and one died during hospital stay and conservative treatment for SBO.

Discussion A striking finding in this study was the strong negative impact of diffuse adhesions, causing an increased risk of complications, more complex surgery, a prolonged hospital stay, more future SBO episodes, and a shorter life span. In this study, diffuse adhesions were the cause of obstruction in 44% of patients [16]. On average, patients with diffuse adhesions had undergone more previous laparotomies and also had a higher ASA classification. Anticipated difficult surgery and/or co-morbidity with risk of complications or death are possible reasons that these patients had a longer period of conservative treatment. In addition, there may have been other mechanisms of selection which cannot be assessed in a retrospective study. There were more women undergoing surgery for SBO, most likely reflecting previous gynaecological surgery. Gender did not affect other data or outcome. Demographic data also showed that many patients were old with co-morbidities; 25% of patients were older than 80 years and nearly all of them had co-morbidities resulting in ASA class 3-4. Surgery and postoperative care for this group is a challenge when trying to avoid complications and mortality. Definitions of complications vary, and are even missing in some studies, making direct comparisons difficult.
Both complications and mortality are related to age, which must be taken into account when comparing different studies. The current complication rate of 48% seems reasonable in view of the age distribution and co-morbidity. Thirty-day mortality was 5.2% (all being above 70 years), which is comparable to rates in other studies [13]. In this study, we found no increased mortality when bowel injury was present. Long-term mortality was unexpectedly high, but analysis of causes of death showed that all but three died from diseases not related to SBO (cardiovascular, respiratory, malignant disease). Age at death did not differ from that of the general population in Sweden. There was a small inflow (n = 10) of patients from other regions during the study period. About one-third of these were specific referrals due to complicated co-morbidity (transplant and vascular). The outflow of patients could not be assessed, but empirical knowledge indicates that there is no systematic referral to other regions. Emergency surgery during holidays or travel would account for patients being operated on elsewhere. We think this number is small, meaning that the data can be regarded as population-based. Thirteen SBO operations per 100,000 inhabitants and year were performed. This was a somewhat higher frequency than previously reported by Kossi et al. and Tingstedt et al. [17,18], but, on the other hand, less than half of what Ray et al. and Ellis et al. [1,19] reported. This discrepancy may be due to differences in study design and local traditions or selection mechanisms. Balancing between conservative treatment and surgery is difficult in patients without peritonitis or strangulation. An elderly patient with co-morbidity and not requiring emergency surgery will probably start with a conservative regimen to avoid the surgical trauma. In our study, about 20% of patients needed same-day surgery, which is about equal to the rate reported by Bauer et al. [20].
Overall mean time to surgery was 3.4 days (including outliers). A possible risk of prolonged conservative treatment is that increasingly extensive resections become necessary. Surgery more than 4 days after admission was related to more re-operations and wound dehiscence, but not to resections, mortality, or other complications. Since most SBO episodes resolve with conservative treatment [21], the balance between conservative treatment and surgery, and the timing of surgery, is best assessed in a prospective manner. The current recommendation of early surgery seems reasonable. Recurrent SBO was 7% at 1 year and 18% overall during follow-up; a third of these patients required surgery. About half of the recurrences appeared within the first 2 years, but there was a cumulative increase during follow-up. In this cohort, the number of patients at risk of recurrent SBO was reduced by the high overall mortality. Patients younger than 70 years, with lower mortality, also had a higher incidence of recurrent SBO (22.4%). Therefore, prevention of adhesions at initial surgery and at secondary SBO surgery is desirable. Minimally invasive surgery has been associated with less formation of adhesions, and recent reviews found a reduction of SBO events and surgery for SBO after laparoscopic colorectal surgery compared with after open surgery [22,23]. The increasing reports of laparoscopic SBO surgery are promising [24]; however, such surgery seems to be most suitable for adhesive bands. There are several adhesion prevention substances on the market, some of which are in the form of a sheet or film, creating a localised protection area. To address the prevention of adhesions, the use of fluids in the abdominal cavity might be more logical. One such fluid, icodextrin, shows a potential ability to reduce adhesions [25,26] and has been reported as safe to use in colorectal surgery [27].
Conclusion
In this population-based study of adhesive SBO, the incidence of surgery was 13 per 100,000 inhabitants per year. About half of the patients with SBO are elderly with co-morbidity, which predisposes them to postoperative complications and mortality. The mechanism of obstruction was a fibrous band in 56% and diffuse adhesions in 44%. Diffuse adhesions were related to difficult surgery and future SBO. Overall, nearly one-fifth of patients needed re-admission for recurrent SBO. Continued research on preventing SBO is desirable.
Stochastic amplitude-modulated stretching of rabbit flexor digitorum profundus tendons reduces stiffness compared to cyclic loading but does not affect tenocyte metabolism

Background It has been demonstrated that frequency modulation of loading influences cellular response and metabolism in 3D tissues such as cartilage, bone and intervertebral disc. However, cells in linear tissues such as tendons or ligaments might be more sensitive to changes in strain amplitude than to frequency. Here, we hypothesized that tenocytes in situ are mechano-responsive to random amplitude modulation of strain. Methods We compared stochastic amplitude-modulated versus sinusoidal cyclic stretching. Rabbit tendons were kept in tissue-culture medium for twelve days and were loaded for 1 h/day on six of the twelve culture days. The tendons were randomly subjected to one of three different loading regimes: i) stochastic (2–7% random strain amplitudes), ii) cyclic_RMS (2–4.42% strain) and iii) cyclic_high (2–7% strain), all at 1 Hz and for 3,600 cycles, plus one unloaded control. Results At the end of the culture period, the stiffness of the "stochastic" group was significantly lower than that of the cyclic_RMS and cyclic_high groups (both p < 0.0001). Gene expression of eleven anabolic, catabolic and inflammatory genes revealed no significant differences between the loading groups. Conclusions We conclude that, despite an equivalent metabolic response, stochastically stretched tendons most likely suffer from increased mechanical microdamage relative to cyclically loaded ones, which is relevant for tendon regeneration therapies in clinical practice.

Background
Tendinopathy is the term used to describe the pathological conditions resulting from tendon overuse [1,2]. The morbidity of tendon injuries, especially in sports and in manual occupations, is relatively high in our society [3,4]. 
Chronic tendon injuries are often associated with forceful or repetitive loading, which leads to the accumulation of micro-tears [2,5]. The relationship between repetitive mechanical loading and tenocyte metabolism has previously been investigated in several in vitro studies examining the influence of frequency, amplitude and time on the biochemical and biological response [6]. Recently, the biomechanical response of tenocytes was modeled under a variety of physiologically relevant frequency-modulated loading regimes [7][8][9]. Several studies demonstrate the regulation of MMPs through mechanical loading [6,10,11]. Thus, the mechano-biological response of linearly-oriented, viscoelastic tissues loaded with frequency modulation has been relatively well studied. However, from a patient's perspective, stochastic loading may be a much more relevant scenario, since it mimics the random, physiological motions experienced in daily activities. Previously applied loading regimes found in the literature are based on regular cyclic loading applied at different frequencies with different magnitudes [6,[10][11][12][13]. Smooth and regular amplitudes do not reflect the situation in vivo. This has been demonstrated by in vivo gait analysis in the rabbit, a common model selected for tendon studies, which revealed that the frequency in "relaxed" hopping is approximately 1 Hz but variable [14]. Another study used the rabbit flexor digitorum profundus model for flexor tendon tissue engineering, where the authors found that bioreactor cyclic strain increases construct strength [15]. Thus, this rabbit tendon has been successfully evaluated as a model system for the study of tendon mechano-biology multiple times in the literature [5,14]. 
The aim of this study was to compare the cellular, mechanical and viscoelastic responses of tendons subjected to either a stochastic cyclic stretching or a sinusoidal cyclic stretching regime, under controlled in vitro conditions (see Figure 1). We hypothesize that a stochastic loading regime, applied to freshly isolated rabbit flexor digitorum profundus tendon, will invoke a different biochemical and biomechanical response than a symmetric, sinusoidal loading regime with an equivalent root mean square (RMS) amplitude. Furthermore, we hypothesize that a loading regime with a higher, potentially non-physiological RMS amplitude would then shift the balance to a catabolic response of the tenocytes.

Tendon source and tissue harvest
Two hind paws of eight six-month-old female rabbits (Oryctolagus cuniculus) were obtained from a local butcher within 24 h post mortem. First, the hair of the hind paws was shaved, and then the skin was aseptically cut and removed. After a general surface disinfection step with 1% betadine B solution (Mundipharma, Basel, Switzerland), the flexor digitorum profundus tendons (6 tendons per animal) were aseptically isolated by dissecting the muscles and immediately placed in high-glucose Dulbecco's Modified Eagle Medium (DMEM, Gibco, Invitrogen, Basel, Switzerland) with 10% penicillin/streptomycin (1 mg/mL, Sigma) for 30 min at 37°C. Then, the specimens were washed with phosphate buffered saline (PBS) and randomly assigned to the three specified loading regimes and an unloaded control group, which was maintained in static culture conditions. The tendons were then cultured in high-glucose DMEM containing 5 μg/mL amphotericin B (Sigma) and 100 μg/mL penicillin/streptomycin containing 10% Fetal Calf Serum (FCS) at 37°C, 5% CO2 and 100% humidity. Media changes were performed every two days.

Tendon stretching protocols
According to Wang et al. [4] and Wren et al. 
[16] and some initial pilot tensile testing, the minimal and maximal strain values defining a physiological range were set to 2 and 7%, respectively, to remain within the linear region of the load-displacement curve. Three test groups were defined, according to the loading regime applied to stretch the tendons: "stochastic", "cyclic_RMS" and "cyclic_high" (Figure 1). The stochastic regime ("stochastic") comprised 3,600 random stretch amplitudes between 2–7% strain. For the second group ("cyclic_RMS"), a regular sinusoidal loading regime was defined, whereby the RMS amplitude of stretching was matched to that of the stochastic loading regime. The root mean square (RMS) was calculated over the n strain samples ε_i of the waveform as RMS = sqrt((1/n) Σ ε_i²). This resulted in a loading regime comprising 3,600 stretching cycles between 2–4.42% strain. The third group ("cyclic_high") provided a comparison to a loading regime comprising sinusoidal stretching between the same maximal strain peaks of 2–7% strain included in the stochastic loading regime. Loading was applied to each tendon specimen according to the schedule in Figure 2. Loading was performed on an MTS Bionix 858 (MTS Systems, Eden Prairie, Minnesota, USA). Figure 3 shows the device mounted on the testing machine. Initial grip-to-grip length was standardized to 20 mm. The frequency was kept constant at 1 Hz for all loading regimes. When the tendons were not dynamically loaded, they were constantly held at a pre-strain of ~1%. The tendons were stored at 37°C under standard conditions (see above).

Figure 1: The three different amplitude-modulated sinusoidal loading waves applied in the experiment. A: low cyclic regime (cyclic_RMS) with the same RMS value as the stochastic loading pattern B (equal RMS values = red lines). C: cyclic loading between 2–7% strain (cyclic_high), with a higher RMS than A and B. All regimes were run for 1 h at f = 1 Hz. 
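The RMS matching between the stochastic and cyclic_RMS regimes can be illustrated numerically. The sketch below assumes a raised-cosine cycle shape and uniformly distributed random peaks, neither of which is specified in the text, so the matched peak it finds need not reproduce the reported 2–4.42% upper limit exactly:

```python
import numpy as np

rng = np.random.default_rng(42)
t = np.linspace(0.0, 1.0, 200, endpoint=False)   # one 1 Hz cycle, 200 samples

def cycle(peak, base=2.0):
    """One strain cycle (in %) from `base` up to `peak` and back
    (an assumed raised-cosine shape; the paper does not spell it out)."""
    return base + (peak - base) * 0.5 * (1.0 - np.cos(2.0 * np.pi * t))

# stochastic regime: 3,600 cycles with random peaks drawn from 2-7% strain
peaks = rng.uniform(2.0, 7.0, 3600)
stochastic = np.concatenate([cycle(p) for p in peaks])
rms_stochastic = np.sqrt(np.mean(stochastic**2))

def rms_of_fixed_cycle(peak):
    return np.sqrt(np.mean(cycle(peak)**2))

# bisect for the fixed sinusoid peak whose RMS matches the stochastic signal
lo, hi = 2.0, 7.0
for _ in range(60):
    mid = 0.5 * (lo + hi)
    if rms_of_fixed_cycle(mid) < rms_stochastic:
        lo = mid
    else:
        hi = mid
matched_peak = 0.5 * (lo + hi)   # upper strain limit of the RMS-matched regime
```

The bisection relies on the cycle RMS growing monotonically with the peak strain; by construction, the cyclic_high regime (fixed 2–7% cycles) ends up with a higher RMS than both matched regimes.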
The mechanical loading was performed at room temperature; however, since the loading was applied for only 1 h, the cooling is similar to a media change event. A pre-load of 2.5 N was applied to define a consistent zero-strain point. The recorded output parameters were time, displacement and the force response of the specimen under strain control. The data were analyzed using a custom analysis script in Matlab (MathWorks Inc., MA, USA). For each of the 3,600 loading cycles, the stiffness was calculated by a linear regression of the linear portion of the loading curve. To exclude background noise from the load cell, the data were filtered and cycles with forces < 3 N were omitted. Tendons that ruptured during the experiments were re-clamped (thus, shortened) and the same loading protocol was applied.

Figure 2: The experimental design of the strain-controlled loading. Upon dissection, the tendons are fixed into the loading device and allowed to equilibrate in the high-glucose DMEM cell culture medium for three days. Then, the specimens are loaded for one hour each day for two days, followed by a resting day and another two days of testing. After a two-day rest, another two days of testing are performed (a total of 6 days of loading, i.e. 6 hours) before the tendons are harvested and prepared for analysis. Red circle = timepoints for media changes.

Biochemical assays
A predefined mid-section of the tendon was used for biochemical analysis. Half of the tissue was used to assess gene expression and the other half served for the measurement of cell viability, matrix production and the DNA/GAG assay. A day 0 control was taken after the unloaded equilibration phase and processed similarly.

RNA Extraction and Real-Time RT-PCR
The tendons were snap frozen in liquid nitrogen and pulverized with a mortar and pestle. 
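The per-cycle stiffness computation described above (linear fit over the loading ramp, dropping low-force points) can be sketched as follows; this is an illustrative reimplementation with a synthetic cycle, not the authors' Matlab script:

```python
import numpy as np

def cycle_stiffness(displacement_mm, force_N, min_force=3.0):
    """Stiffness (N/mm) of one loading cycle: linear fit over the loading
    ramp only, discarding low-force points dominated by load-cell noise
    (a cycle left with too few points would be omitted, returning None)."""
    loading = np.gradient(displacement_mm) > 0       # loading half of the cycle
    keep = loading & (force_N >= min_force)
    if keep.sum() < 2:
        return None                                   # cycle omitted
    slope, _ = np.polyfit(displacement_mm[keep], force_N[keep], 1)
    return slope

# synthetic check: a perfectly linear-elastic specimen of 12 N/mm,
# loaded and unloaded over a 1 mm ramp
d = np.concatenate([np.linspace(0, 1, 100), np.linspace(1, 0, 100)])
f = 12.0 * d
k = cycle_stiffness(d, f)   # recovers 12 N/mm for this synthetic cycle
```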
The minced tissue of the specimens was either placed in 1 mL TRI reagent (Molecular Research Center, Cincinnati, USA) and stored at −80°C prior to further RNA isolation, or was processed immediately for DNA and GAG quantification, respectively. RNA isolation was then performed as a combined TRI phase separation and silicon-column purification [17], using the total mammalian RNA extraction kit RTN70 (Sigma, Buchs, Switzerland). The total RNA was treated with DNaseI (DNase I amplification grade, Sigma) before the cDNA was synthesized (iScript cDNA synthesis Kit, BioRad, Basel, Switzerland). Relative gene expression was determined using the primers listed in Table 1. Along with five anabolic genes, four catabolic and two inflammatory genes were also investigated, and the threshold cycle (Ct) values were recorded. The primers and cycling protocols have been reported previously by our group [18]. The Ct values were interpreted according to the 2^(−ΔΔCt) method [19].

DNA and GAG Quantification
The tendon samples were digested in 1 mL proteinase K solution for 16 h at 56°C and 300 rpm to assess both the DNA and the GAG content of the tendon. For DNA analysis, samples were stained with Hoechst dye and the fluorescent emission was measured at 457 nm with an excitation wavelength of 368 nm (Tecan Reader Infinite 200; Tecan, Männedorf, Switzerland). To measure the GAG content, the 1,9-dimethylmethylene blue (DMMB) assay, adjusted for low pH, was performed as described in Enobakhare [20] and Farndale [21], and absorbance was read at 600 nm (SpectraMax 190, Molecular Devices, Sunnyvale, California, United States; distributed by Bucher Inc., Switzerland). Since the DNA content per cell is constant (~7 pg), this parameter was used to normalize both matrix production and cell activity. 
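The 2^(−ΔΔCt) interpretation of the Ct values [19] reduces to a few lines. In the sketch below the Ct numbers are hypothetical, and the choice of reference gene is not specified in this excerpt:

```python
def relative_expression(ct_gene, ct_ref, ct_gene_ctrl, ct_ref_ctrl):
    """Fold change by the 2^(-ddCt) method: normalize the gene of interest
    to a reference gene within each sample, then normalize the loaded
    sample to the unloaded control."""
    d_ct_sample = ct_gene - ct_ref            # dCt of the loaded sample
    d_ct_control = ct_gene_ctrl - ct_ref_ctrl # dCt of the unloaded control
    dd_ct = d_ct_sample - d_ct_control
    return 2.0 ** (-dd_ct)

# one Ct cycle less than the control (at equal reference Ct) means the
# transcript needed one fewer doubling, i.e. a 2-fold up-regulation
fold = relative_expression(24.0, 18.0, 25.0, 18.0)   # -> 2.0
```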
Alamar Blue© cell activity test
To assess tenocyte viability after the 12-day tissue culture period, an Alamar Blue© test (Invitrogen) was performed, where the tissue was allowed to react for 2 h at 37°C and the absorbance at 570 nm was measured using an absorbance reader (Tecan).

Statistical analyses
Stiffness was analyzed using two-way ANOVA, with culture time and loading regime as the two independent factors. The gene expression, GAG/DNA and Alamar Blue data were analyzed with the non-parametric Kruskal-Wallis test (GraphPad Prism v. 6.0a; GraphPad Software, San Diego, California, USA; www.graphpad.com). A p-value < 0.05 was considered significant.

Results
Mean stiffness differed significantly between groups (Figure 4) and depended on both factors, loading and culture time (two-way ANOVA; loading explained 15.08% of the variance, P < 0.0001, and time explained 9.08% of the total variance, P = 0.0116, with rabbits as a random factor). Using multiple pairwise comparison testing, there were significant differences between time points in the cyclic_RMS group (day 1 vs. days 3, 4 and 5; day 2 vs. days 3, 4 and 5); no such differences were found in the other two groups. Generally, proliferation and cell activity, i.e. DNA content and the Alamar Blue assay, both confirmed that the tenocytes were metabolically active and alive. The "cyclic_high" group showed a slight decrease in DNA content, whereas the tendons in the other groups showed similar cell activity, but no significant difference could be found. The tendons showed a high variance, not only between specimens from different animals but also between tendons from the same rabbit. Matrix production, expressed as the glycosaminoglycan (GAG)/DNA content ratio, was not significantly different between the groups after culture (Figure 5A). 
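The analyses themselves were run in GraphPad Prism, but the Kruskal-Wallis H statistic used for the GAG/DNA and Alamar Blue data is simple to state. A minimal numpy sketch with toy numbers (not study data):

```python
import numpy as np

def kruskal_wallis_H(*groups):
    """Kruskal-Wallis H statistic (with tie correction) for k independent
    groups; under the null hypothesis, H is approximately chi-squared
    distributed with k-1 degrees of freedom."""
    data = np.concatenate([np.asarray(g, dtype=float) for g in groups])
    n = data.size
    order = np.argsort(data, kind="mergesort")
    ranks = np.empty(n)
    ranks[order] = np.arange(1, n + 1)
    for v in np.unique(data):             # average ranks over tied values
        tied = data == v
        ranks[tied] = ranks[tied].mean()
    h, start = 0.0, 0
    for g in groups:                      # sum of (rank-sum)^2 / group size
        r = ranks[start:start + len(g)]
        h += r.sum() ** 2 / len(g)
        start += len(g)
    h = 12.0 / (n * (n + 1)) * h - 3.0 * (n + 1)
    _, counts = np.unique(data, return_counts=True)
    tie_corr = 1.0 - (counts**3 - counts).sum() / float(n**3 - n)
    return h / tie_corr

# fully separated toy groups give the textbook value H = 7.2
H = kruskal_wallis_H([1, 2, 3], [4, 5, 6], [7, 8, 9])
```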
Cell activity (Figure 5B), as measured by the Alamar Blue assay, was not significantly different between the groups. Relative gene expression of major catabolic and anabolic genes, relative to unloaded controls on the same culture day, revealed that "stochastic" loading tended to up-regulate metalloproteinases (i.e. MMP1, MMP3 and ADAMTS-4; Figure 6) but also pro-inflammatory genes, such as TNF-α and IL-1β, compared to "cyclic_RMS" and "cyclic_high" loading. Collagen type 1 remained unchanged and collagen type 2 was not detectable (Figure 5). ACAN (aggrecan) was down-regulated in all groups. A parallel increase in the expression of MMP inhibitors such as TIMP-3 in the "stochastic" group weakened the up-regulation of MMP1, MMP3 and MMP13.

Mechanical properties of randomly amplitude-modulated tendons
The primary goal of this study was to test the importance of amplitude modulation for the mechanical stimulation of linearly-oriented tissues. We found significant differences in tensile stiffness between the stochastically loaded and the cyclically, sinusoidally loaded tendons (with equivalent RMS amplitude) in the first two days of loading (Figure 4). Generally, the stiffness of tendons in the stochastically stretched group was reduced compared to the cyclically loaded tendons (Figure 4). We cannot explain this difference in stiffness strictly by biological changes, such as cell viability or activity of tenocytes (Figures 5 and 6), since we did not see any significant changes in cell viability, activity or matrix production. Furthermore, it is unlikely that metabolic changes would immediately result in observable matrix degradation. Thus, the differences are probably purely mechanical, caused by microfracture of collagen fibers. This should be investigated in future experiments using histological analysis at the μm scale or by scanning electron microscopy (SEM). 
Biological response of tenocytes
Relative to the day 0 control, all three groups of tenocytes responded with a minor down-regulation of ACAN and collagen type 1 (Figure 6). However, tenocytes under the stochastic loading regime tended to down-regulate ACAN, collagen type I, ADAMTS4 and MMP13 relative to the cyclic_RMS and cyclic_high groups. An increase in collagen I with cyclic loading was also found by Wang et al. [4], and Parkinson et al. [22] observed that there is a net proteoglycan content increase in injured tendons, due to an altered metabolism rather than to changes in gene expression levels. However, there was no difference between the loading groups and the unloaded control in the present experiments, which is also true for the up-regulation of other genes. It should be mentioned that the measured gene expression is possibly a mixture of tenocytes and progenitor cells, due to the relatively young age of the rabbits. Culture time was certainly a limitation of the study; it is possible that changes to the extracellular matrix (ECM) cannot be seen with a culture period of only twelve days. On the one hand, any changes in gene expression should still be detectable, since RNA changes can be found within hours of mechanical loading [23]. The timing of the culture start (here allowing an equilibration period of 3 days) will most likely have a detrimental influence on the mRNA transcript level, not so for collagen type 1, but definitely for MMP3 and MMP13; these transcript levels have been shown to increase over time in an explant model of rat tail tendon fascicles [24]. With respect to tissue homeostasis, we did not find any significant differences among the three loading regimes. On the other hand, it may also be that the sampling window for gene expression was delayed and thus no changes in RNA could be detected after the stimuli. However, it has been reported that changes in mRNA persist after a 24 h incubation time [23,25]. 
The time point of harvest after the loading regime still seems to be critical; significant changes have been found when tissue is analyzed after 1 h or longer time periods. An up-regulation of the pro-inflammatory genes that could lead to apoptosis was not evident at the RNA level in our study. Further investigation by histology or scanning electron microscopy would allow definitive conclusions on the microstructure of the tendons. During the first 1-2 days of mechanical loading, 8 out of 40 tendons experienced partial ruptures and had to be re-clamped (3 in the stochastic, 2 in the cyclic_RMS and 3 in the cyclic_high group). Thus, the re-clamping of the tendons might have had an influence on the stiffness results, but may also indicate the accumulation of micro-damage. Improved clamping techniques may allow a more unbiased comparison of the clamped versus unclamped regions [26]. The cell density of the tendon is relatively low compared to other musculoskeletal tissues. This also includes the vascular cells and synovial cells of the tendon sheath that encloses each tendon [3]. The tissue is sparsely vascularized and the main constituent is collagen type 1 [27]. Collagen is the main component of most organic matrices like bones, ligaments, tendons and the intervertebral disc [16]. A remarkable 60-85% of a tendon's dry weight is assigned to type I collagen. A small, mechanically important portion (2%) is elastin, and 4-5% are different proteins. The extracellular substance is dominated by proteoglycans (PGs) and, in combination with water, they are thought to have a spacing and lubricating role for the tendon [27][28][29]. The mechano-biological response might be masked by the generally very rich culture media, which have an abundance of growth factors, a high glucose content and vitamins. Results from matrix production at the protein level should also be reflected by the gene expression data. 
For all 11 genes studied, there were no statistically or biologically significant changes amongst the loading groups. These results are consistent with studies in the human Achilles tendon, where no changes in the expression of genes for the major collagens and proteoglycans could be found [22]. The same study also did not see any change for ADAMTS-4, MMP3, MMP13 and TIMP3, with the exception of an up-regulation of TIMP1. The authors hypothesize that matrix turnover is biased towards degeneration rather than matrix generation. However, another limitation of this study is that we did not look at tenocyte-specific transcription factors such as scleraxis, which have been shown to respond to mechanical stimulation, especially with increased cyclic compression [30][31][32], nor did we look at tenomodulin and tenascin-C [33], two marker genes which are important for maintaining the tenocyte phenotype [34,35]. For MMP1 and MMP3, it was found that cyclic mechanical loading inhibits their expression [6,36]. It is generally accepted that training promotes both synthesis and degeneration, and that the process is highly dynamic [4]. It is important to state that conclusions on protein expression drawn from RNA expression levels alone are limited. Translation efficiency, post-translational modification and activation, protein turnover rates and inhibitory proteins may all have a large influence on how much protein is actually synthesized. MMPs could be present in the tissue as pro-MMPs, and thus in an inactive form, or they might be bound to TIMPs. An up-regulation of an MMP therefore does not necessarily mean matrix degeneration [2]. Due to these potential effects, it would be crucial to also include quantification at the protein level to support real-time PCR data if longer loading/culture times are chosen in further experiments. 
Conclusions
Stochastic modulation of amplitude in strain-controlled stretching of tendons resulted in reduced tendon stiffness compared to sinusoidal cyclic loading regimes with equivalent RMS amplitude, or sinusoidal cyclic loading between the same peak strain magnitudes. The change in stiffness was not associated with changes in cell activity, cell density (DNA) or GAG content.
On Extended Electroweak Symmetries

We discuss extensions of the Standard Model through extending the electroweak gauge symmetry. An extended electroweak symmetry requires a list of extra fermionic and scalar states. The former is necessary to maintain cancellation of gauge anomalies, and is largely fixed by the symmetry embedding itself. The latter is usually considered quite arbitrary, so long as a vacuum structure admitting the symmetry breaking is allowed. Anomaly cancellation may be used to link the three families of quarks and leptons together, giving a perspective on flavor physics. It has been illustrated recently that this kind of model may also incorporate the so-called little Higgs mechanism. This more or less fixes the scalar sector and takes care of the hierarchy problem, making models of extended electroweak symmetries quite appealing candidates for TeV-scale effective field theories.

Introduction
This talk is my contribution to the event celebrating the 60th birthday of Paul Frampton. The subject here is extending electroweak symmetries, in particular, as an approach to particle physics beyond the Standard Model (SM). The focus is on my own works on the subject, which began during the time I was a student studying under Paul's supervision. I am getting back to the topic lately, with some studies more related to some of Paul's own works but in the name of little Higgs. The SM is a model of interactions dictated by an SU(3)_C × SU(2)_L × U(1)_Y gauge symmetry, with an anomaly-free chiral fermion spectrum and a Higgs multiplet responsible for the spontaneous breaking of the electroweak (EW) symmetry SU(2)_L × U(1)_Y. Extending the EW gauge symmetry extends the SM while adding new fermions and scalars. This is to be contrasted with other approaches such as grand unification and/or supersymmetry. Compared with the latter approaches, extending the EW symmetry may look less popular or not so well motivated. 
Grand unification aims at providing a unified picture of all the otherwise separate parts of the gauge symmetry and their independent couplings, though only at a scale of about 10^16 GeV. All that can be achieved, in the case of SU(5), without the need for extra fermionic states. Supersymmetry uses the beautiful boson-fermion symmetry to tackle the hierarchy problem, essentially extending the chiral nature of the fermions to fix the problem for the scalar sector. Putting the two together provides a theoretical structure that promises to "explain" more or less all of particle physics. However, the large extrapolation over the many orders of magnitude of the particle physics desert may certainly be taken with suspicion. Moreover, these approaches do not provide any new insight into the difficult problem of the origin of the flavor structure. Why there are three families of SM fermions is still a fundamental problem that we have no credible approach to handle. On the contrary, extending the EW symmetry may provide some new perspectives on the flavor problem. It can even provide an alternative solution to the hierarchy problem, in the name of the so-called little Higgs mechanism [1].

Looking at the Fermionic Spectra
The spectrum of SM fermions in one family is like perfection, essentially dictated by gauge anomaly cancellation conditions. To illustrate the point of view, we recall our earlier argument [2]. Assuming that there exists a minimal multiplet carrying nontrivial quantum numbers of each of the component gauge groups, one can obtain the one-family SM spectrum as the unique solution by asking for the minimal consistent set of chiral states. Consistency here refers to the perfect cancellation of nonvanishing contributions to the various gauge anomalies from individual fermionic states. A vectorlike set (or pair) is trivial but not interesting. Only chiral states are protected from heavy gauge-invariant masses and relevant to physics at the relatively low energy scale. 
The above suggested derivation of the one-family SM spectrum goes as follows. We essentially start with a quark doublet, with arbitrary hypercharge normalization. The two SU(3)_C triplets require two antitriplets to cancel the anomaly. Insisting on a chiral spectrum means taking two quark singlets here, with hypercharges still to be specified. Now, SU(2)_L is real, but has a global anomaly. Cancellation requires an even number of doublets, so at least one more beyond the three colored components in the quark doublet. There are still four anomaly cancellation conditions to take care of: the [SU(3)_C]^2 U(1)_Y, [SU(2)_L]^2 U(1)_Y, [grav]^2 U(1)_Y and [U(1)_Y]^3 anomalies. We are, however, left with three relative hypercharges to fit the four equations, actually without a possible solution. A rescue comes from simply adding a U(1)_Y-charged singlet. But the four-equations-for-four-unknowns counting is misleading. The [U(1)_Y]^3 anomaly cancellation equation is cubic in all the charges, with no rational solution guaranteed. The SM solution may actually be considered a beautiful surprise. Moreover, the perspective may be the best we have on understanding why there is what there is. We would also like to take the opportunity here to briefly sketch the next step taken in Ref. [2], to further illustrate our perspective. The results there may also be considered a worthy comparison with our little Higgs motivated flavor/family spectrum presented below, from the point of view of the origin of the three families. The major goal of Ref. [2] is to use a similar structure with an extended symmetry to obtain the three families. For example, one can start with an SU(4) × SU(3) × SU(2) × U(1) gauge symmetry and try to obtain the minimal chiral spectrum containing a (4, 3, 2) multiplet, the simplest one with nontrivial quantum numbers under all component groups. Having a consistent solution is not enough though. 
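The four one-family conditions can be checked explicitly. A small sketch using the standard SM hypercharge assignments (all fields written as left-handed states, in the Q = T_3 + Y convention) confirms that every sum vanishes:

```python
from fractions import Fraction as F

# Standard-Model hypercharges for one family, all as left-handed states
Y_Q, Y_u, Y_d, Y_L, Y_e = F(1, 6), F(-2, 3), F(1, 3), F(-1, 2), F(1)

# (number of components, hypercharge); 6 = 3 colors x 2 isospin for Q
states = [(6, Y_Q), (3, Y_u), (3, Y_d), (2, Y_L), (1, Y_e)]

# [SU(3)_C]^2 U(1)_Y: hypercharge summed over colored states,
# weighted by their number of SU(2) components (2 for the doublet Q)
su3 = 2 * Y_Q + Y_u + Y_d
# [SU(2)_L]^2 U(1)_Y: hypercharge summed over doublets (x 3 colors for Q)
su2 = 3 * Y_Q + Y_L
# [grav]^2 U(1)_Y and [U(1)_Y]^3: sums over all chiral components
grav = sum(n * y for n, y in states)
cubic = sum(n * y**3 for n, y in states)

print(su3, su2, grav, cubic)   # 0 0 0 0
```

Exact rational arithmetic (`fractions.Fraction`) is used so the cubic condition cancels identically rather than to floating-point precision.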
In order for the spectrum to be of interest, we ask the spectrum to yield the chiral spectrum of three SM families plus a set of vectorlike states under a feasible spontaneous symmetry breaking scenario, i.e. when the gauge symmetry is broken to that of the SM. Ref. [2] had only partial success. A consistent group-theoretical SM embedding could be obtained, but only with a slight addition to the minimal chiral spectrum obtained from anomaly cancellation considerations alone. We give an example in Table I. Next, we recall the fermionic spectrum of a simple model of extended EW symmetry, the 331 model from Paul himself [3]. The model has the EW symmetry extended to an SU(3)_L × U(1)_X. To have a consistent spectrum of chiral fermions, one may first look into how the SM doublets are to be embedded into multiplets of SU(3)_L. It is interesting to note here that a naive family-universal embedding would not work: the SU(3)_L anomaly would not cancel. Instead, the model has the (t, b) doublet embedded into a 3* (antitriplet) while the quark doublets of the first two families go into 3's, with all leptonic doublets embedded into 3*'s. The fact that the number of colors equals the number of families makes the anomaly cancellation possible. All extra quark states here are exotic, with charges 5/3 and −4/3. There are no extra leptonic states though. The 331 model spectrum is given in Table II.

Extended EW Symmetries of SU(N)_L × U(1)_X
Looking at the model spectrum of Table II, one may wonder if the construction is in any sense unique, and if similar anomaly-free spectra exist for a different extended EW symmetry. We looked into the question recently and have the general solution. It turns out quite simple and straightforward. For an extended EW symmetry of SU(N)_L × U(1)_X, the SM doublets may be embedded into N's or N*'s. Embedding one quark doublet into an N and the two others into N*'s, while putting all lepton doublets into N's, does give a prescription with canceled SU(N) anomaly. 
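The triplet/antitriplet bookkeeping behind the 331 cancellation can be made explicit with a short count following the assignment just described:

```python
# SU(3)_L anomaly counting in the 331 model: each 3 contributes +1,
# each 3* contributes -1; cancellation needs the total to vanish.
N_COLORS, N_FAMILIES = 3, 3

# quark doublets: two families in 3's (x3 colors), the (t,b) family in a 3*
quark_triplets = 2 * N_COLORS        # +6
quark_antitriplets = 1 * N_COLORS    # -3
# all three leptonic doublets sit in 3*'s (no color factor)
lepton_antitriplets = N_FAMILIES     # -3

total = quark_triplets - quark_antitriplets - lepton_antitriplets
print(total)   # 0
```

The count cancels precisely because the quark color factor matches the number of families; with a family-universal embedding, the quark contribution could not be balanced by the leptons.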
The somewhat surprising part is that no matter how one chooses to embed U(1)_Y into SU(N)_L × U(1)_X, simply completing the list of chiral states with appropriate SU(N) singlets to ensure vectorlike matching at the QCD and QED level does yield a completely anomaly-free spectrum, essentially unique for the particular symmetry embedding. The number of possible consistent model spectra of this type is then equivalent to the number of admissible symmetry embeddings. The latter can conveniently be parametrized by the choice of electric charges for the extra N − 2 quark states sharing the N multiplet with the (t, b) doublet [4]. We have no room in this write-up to elaborate on the details though.

Little Higgs and Extended Electroweak Symmetries
The little Higgs mechanism [1] has been proposed as a new solution to the hierarchy problem. More precisely, it alleviates the quadratically divergent quantum corrections to the SM Higgs states and admits a natural little hierarchy between the EW scale and a higher scale of so-called UV completion at around 10 TeV, above which further structure would be hidden. The idea is then a rather humble bottom-up approach; but experimental hints at the existence of such a little hierarchy have been discussed [5]. What is relevant for our present discussion is that a little Higgs model necessarily has an extended gauge symmetry, EW or beyond, and extra fermion(s). The latter includes a heavy top T quark. Simple little Higgs models based on an extended EW symmetry have been introduced by Kaplan and Schmaltz [6], though the authors failed to properly address the structure of the fermionic sector. The gauge symmetries considered are SU(3)_L × U(1)_X and SU(4)_L × U(1)_X. We discuss the completion of this kind of model with consistent, anomaly-free fermionic spectra, and the resulting implications for the flavor structure of the models, in Refs. [4,7,8]. 
Naively, so long as one picks a model spectrum with an extra T quark in the N multiplet containing (t, b) (here N = 3 or N = 4, for example), one potentially has an extended EW little Higgs model. The T quark may be used to cancel the quadratically divergent contributions (at 1-loop level only) to the SM Higgs mass from the t quark, while the extra EW gauge bosons do the same for their SM counterparts. The scalar/Higgs sector has to be explicitly constructed though, to have the SM Higgs doublet embedded as (pseudo-)Nambu-Goldstone states of some global symmetry. It is an [SU(3)]² / [SU(2)]² symmetry for the SU(3)_L × U(1)_X case, for instance. The Higgs sector symmetry is explicitly violated beyond the sector, in the gauge and Yukawa couplings of the Higgs multiplets. Such a scheme can easily be achieved with pair(s) of Higgs multiplets having the right quantum numbers to couple to the (T, t, b, ...) multiplet and a right-handed T singlet. However, there is a source of further complication, related to the construction of a proper Higgs quartic coupling term 6. We admit that, in general, the latter issue still has to be studied more carefully. We do have a definitely complete and consistent model though. It is given by the fermion spectrum of Table III, with the Higgs sector as given in Ref. 6. Below, we will focus on the fermionic sector and the flavor physics structure.

With a specific choice of the extended EW symmetry, a little Higgs model can be built only with the inclusion of the T quark state. For the N = 3 case, that fixes the hypercharge embedding and hence, from our anomaly cancellation study, the unique fermionic spectrum. The spectrum can be read off from Table III, with only one set of the duplicated T, D, S, and three N states. Note that the X-charges will have to change accordingly. For the N = 4 case, one may consider variations of the model spectrum, essentially by choosing a different set of states beyond that of the N = 3 content.
In particular, a spectrum with a full set of duplicated, heavy SM fermions looks very interesting 4. However, the scalar/Higgs sector then has to be explicitly constructed. Following exactly the construction of Ref. 6, one may be restricted to the spectrum of Table III, with trivial generalization to N > 4 spectra extending on the content. At the moment, one sees no motivation to go for N > 4.

Some Implications for Flavor Physics

Unlike in generic models of extended EW symmetries, we do not have much freedom in picking a set of scalar multiplets with VEVs according to what mass-generating Yukawa couplings we may want to include. However, a careful check of the Higgs multiplets shows that phenomenologically acceptable mass terms for the fermions, SM ones or heavy quarks, can be obtained for the explicit models discussed above. Here, we use the SU(3)_L × U(1)_X case for the demonstration, in favor of simpler notation and expressions. As touched on above, the little Higgs mechanism is to be implemented with two scalar multiplets having the right quantum numbers to couple to the chiral parts of the T quark. They are denoted by Φ_1 and Φ_2 in the Yukawa part of the Lagrangian, which is constructed simply by tracing the quantum numbers and admitting all terms compatible with the gauge symmetries. In the Lagrangian, Q and Q′_j denote (contrary to the notation in Table III) the color-triplet and antitriplet quark multiplets. Note that we have to include dimension-five terms here. Recall that the little Higgs model actually has a high-energy cut-off of only around the 10 TeV scale. The next step is to use the nonlinear sigma model expansion of the scalar multiplets in terms of the pseudo-Nambu-Goldstone states, which include the SM Higgs doublet h 6,7.
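Schematically, that expansion takes the familiar little Higgs form. The sketch below is a simplified single-multiplet illustration (suppressing the second triplet, the η singlet, and mixing angles, which the full treatment of Refs. 6, 7 retains):

```latex
\Phi \;=\; \exp\!\left[\frac{i}{f}
  \begin{pmatrix} 0_{2\times 2} & h \\ h^\dagger & 0 \end{pmatrix}\right]
  \begin{pmatrix} 0 \\ 0 \\ f \end{pmatrix}
\;=\;
  \begin{pmatrix}
    i\,h\,\dfrac{\sin\!\left(|h|/f\right)}{|h|/f} \\[6pt]
    f\cos\!\left(|h|/f\right)
  \end{pmatrix}
\;\approx\;
  \begin{pmatrix} i\,h \\[2pt] f - \dfrac{h^\dagger h}{2f} \end{pmatrix},
\qquad |h| \equiv \sqrt{h^\dagger h}\,.
```

The SM Higgs doublet h thus appears as the Nambu-Goldstone excitation along the broken SU(3)/SU(2) directions, and substituting the expansion into the Yukawa terms generates the heavy masses and mixings discussed next.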
Expanding, we recover the following: all the heavy quark states, T and D_j (or D and S), get Dirac masses at the scale f of the VEVs of Φ_1 and Φ_2, and standard Yukawa couplings for the SM quarks and the Higgs doublet are all available. However, the expansion also indicates that one has to expect mass mixings among the heavy and SM quark states. The nature of the extra heavy quarks and their mass mixings with their SM counterparts dictates stringent constraints on the related couplings, and interesting flavor physics.

Conclusions

The bottom line here is that a sensible discussion of the flavor physics of a little Higgs model is not possible before the full fermion spectrum is spelt out. The latter is constrained by gauge anomaly cancellation. We exhibit at least one complete model here, whose detailed flavor physics still has to be studied. For this kind of model, the fermionic part has a family non-universal flavor structure just like that of the 331 model, linking the three SM families into one fully connected set. Gauge anomaly cancellation should play a major role in constructing the fermionic completion of any little Higgs model. This is, unfortunately, an issue that has been largely overlooked in the literature.

In summary, we see that studies of extended EW symmetries have arrived at the point of furnishing all-round models of beyond-SM physics addressing more or less all the concerns of particle physics, including the hierarchy problem. Such a model then has almost no arbitrary parts to be chosen at the model-builder's discretion. It has generic appeal, but is also very humble: liable to various stringent precision EW and flavor physics constraints, and begging UV completion about an order of magnitude in energy scale above that of the electroweak theory. Building models of this kind, studying their phenomenology in detail, and checking the predictions experimentally should be a worthy endeavor.
An Adversarially-Learned Turing Test for Dialog Generation Models

The design of better automated dialogue evaluation metrics offers the potential to accelerate evaluation research on conversational AI. However, existing trainable dialogue evaluation models are generally restricted to classifiers trained in a purely supervised manner, which are exposed to a significant risk of adversarial attacks (e.g., a nonsensical response that enjoys a high classification score). To alleviate this risk, we propose an adversarial training approach to learn a robust model, ATT (Adversarial Turing Test), that discriminates machine-generated responses from human-written replies. In contrast to previous perturbation-based methods, our discriminator is trained by iteratively generating unrestricted and diverse adversarial examples using reinforcement learning. The key benefit of this unrestricted adversarial training approach is that it allows the discriminator to improve its robustness through an iterative attack-defense game. Our discriminator shows high accuracy against strong attackers including DialoGPT and GPT-3.

Introduction

The Turing Test (Turing, 1950) was proposed to assess whether a machine can think. A machine and a human player communicate with a human judge and try to convince the judge that they are the human. This test provides an evaluation framework: a machine is intelligent, to a certain extent, if it passes the Turing Test. To allow fast and less expensive evaluations, the human judge is often replaced by an automated human-vs-machine classifier (Von Ahn et al., 2003; Baird et al., 2003; Rui and Liu, 2004; Lowe et al., 2017a). An automated Turing test is straightforward for constrained scenarios with an unambiguous correct answer, such as text classification. In contrast, for open-domain conversation, an infinite number of plausible responses exist for the same context, and they may differ from each other substantially.
In this case, the existing automated evaluation methods (Papineni et al., 2002; Zhang et al., 2019a; Sellam et al., 2020), which measure hypothesis quality by its similarity with reference answers, become sub-optimal, because it is difficult to find a set of diverse reference answers covering such one-to-many possibilities in dialogue. Furthermore, it is impossible to use these reference-based metrics in scenarios where a reference is not available, e.g., online chatbots. These challenges motivate an alternative approach: trainable reference-free metrics (Albrecht and Hwa, 2007; Guan and Huang, 2020; Gao et al., 2020). Previous works generally frame the task as a supervised learning (SL) problem, training a classifier to distinguish human and machine outputs, or a regression model to fit human ratings. However, trainable metrics have the potential problem of being gamed via adversarial attacks (Albrecht and Hwa, 2007; Sai et al., 2019; Gao et al., 2019).

To learn a more robust evaluation metric, we propose to train a model to discriminate machine outputs from human outputs via iterative adversarial training, instead of training the evaluation model on a fixed dataset. In contrast to previous perturbation-robust methods that only modify characters or words (Ebrahimi et al., 2017; Li et al., 2018), we generate "unrestricted" adversarial examples by fine-tuning a dialogue response generator to maximize the current discriminator score via reinforcement learning. This is followed by training the discriminator to account for these additional adversarially generated examples. The above two steps are repeated in an iterative attack-defense game until the generator can no longer decrease the discriminator accuracy below a threshold. To further improve the robustness of the discriminator, we reduce the chance that the discriminator will be fooled by unseen patterns by increasing the diversity of the adversarial examples in several novel ways.
Firstly, we explicitly encourage the adversarial dialogue responses to be context-sensitive by including rewards from a pre-trained context-response matching model. Secondly, we decode with different decoding settings when generating adversarial examples. Our discriminator showed high accuracy on several strong attackers including DialoGPT (Zhang et al., 2019b) and GPT-3 (Brown et al., 2020).

Method

We define the problem as learning a discriminator to distinguish machine-generated and human-written responses for an open-domain dialogue context. Similar to training a generative adversarial network (GAN) (Goodfellow et al., 2014a), our ATT (Adversarial Turing Test) method involves two sets of competing components: discriminators D that defend, and generators G that attack.

Discriminator

A supervised learning (SL) approach is employed to train the discriminator. Following Gao et al. (2020), the loss is defined to increase the probability of picking the human-written response y_H when mixed with machine-generated hypotheses y_M for the same context x:

L_D = −log [ exp h(x, y_H, θ_D) / ( exp h(x, y_H, θ_D) + Σ_{y_M} exp h(x, y_M, θ_D) ) ]

where h(x, y, θ_D) is the scalar output from the discriminator, which is parameterized by θ_D. At inference time, we compute the score s_D(y|x) = σ(h(x, y, θ_D)). The discriminator is implemented by adding a linear layer to GPT-2 transformers (Radford et al., 2019), following Gao et al. (2020).

Generator

We generate adversarial examples via a generator G trained with reinforcement learning (RL) using policy gradient (Williams, 1992). For each context x, the generator generates n hypotheses {y_i}. The reward R(y_i) is defined with a baseline b(x), which is used to reduce the variance of the gradients:

R(y_i) = s_D(y_i|x) − b(x)

Applying policy gradient (Williams, 1992), we minimize the following loss:

L_G = −Σ_i R(y_i) log P(y_i|x, θ_G)

where P(y|x, θ_G) is the probability of generating y given x from the generator parameterized by θ_G. The generator is implemented using a GPT-2 architecture (Radford et al., 2019), following Zhang et al. (2019b).
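A minimal numerical sketch of the reward-with-baseline computation follows. The choice of b(x) as the mean score over the n hypotheses is an assumption here (one standard variance-reduction baseline), and the scores and log-probabilities are hypothetical toy values:

```python
import numpy as np

# Sketch of the REINFORCE-style objective used to attack the discriminator:
# rewards are discriminator scores minus a baseline. The mean baseline is an
# assumption; the paper only states that b(x) reduces gradient variance.

def policy_gradient_loss(scores, log_probs):
    """scores: discriminator scores s_D(y_i|x); log_probs: log P(y_i|x, theta_G)."""
    scores = np.asarray(scores, dtype=float)
    log_probs = np.asarray(log_probs, dtype=float)
    baseline = scores.mean()               # b(x): variance-reduction baseline
    rewards = scores - baseline            # R(y_i) = s_D(y_i|x) - b(x)
    return -(rewards * log_probs).mean()   # minimizing raises P of high-reward y

scores = [0.9, 0.2, 0.4]        # hypothetical discriminator scores
log_probs = [-1.0, -2.0, -1.5]  # hypothetical generator log-probabilities
loss = policy_gradient_loss(scores, log_probs)
```

Minimizing this loss pushes probability mass toward hypotheses the current discriminator scores above the baseline, i.e., toward more effective attacks.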
An iterative attack-defense game

We first pre-train the components individually and then jointly train them in an iterative attack-defense game, as illustrated in Figure 1. HvM is initialized using θ_D, the weights of a human-vs-machine classifier from Gao et al. (2020) trained in an SL manner to classify whether a response is human-written or DialoGPT-generated. Each turn of the game starts with an attack phase that trains G with RL to attack D. The training is stopped when the validation accuracy of D drops below a threshold c_low, or the number of training steps exceeds N_G, whichever comes first. The turn then switches to a defense phase: D is trained via SL using samples generated from G. We stop training the discriminator when the validation accuracy is higher than a threshold c_hi, or the number of training steps exceeds N_D. We repeat this process until the validation accuracy of D in the last m turns always stays higher than c_hi.

Note that the above procedure resembles a GAN (Goodfellow et al., 2014a). However, we focus on improving the robustness of the classifier, while GANs focus on generating realistic examples. From a modeling perspective, the key differences from a conventional GAN are:

Generator re-initialization. We reset G to θ_G^(0) at the beginning of each attack phase, instead of continuing to learn from the last turn. This is because, at the early stage of the game, G often learns to generate adversarial examples that are not fluent or grammatically correct (one example at Turn-5 is shown in Table 1). They can successfully fool D, as initially D has not seen such adversarial examples. However, such an attacker G, which typically learns disfluent adversarial examples in earlier iterations, is difficult to fine-tune towards well-formed adversarial examples.
To allow the late-stage attackers to generate fluent adversarial examples, we reset the G parameters to those of the pre-trained generator before each attack phase, obtaining multiple attackers as shown in Figure 1. We empirically find that this strategy gives better performance than initializing each attacker with the previous one, as in a GAN.

Adversarial example ensemble. We train D on samples generated by G from all previous turns, instead of only the last turn. As shown in Figure 1, at Turn-2, D is trained on samples from the pre-trained G, A^(0), samples of Turn-1, A^(1), and of Turn-2, A^(2). At each turn, we stop training D when the validation accuracy on all these datasets A^(t) is higher than the threshold c_hi. This enables the defender to capture all observed attacks rather than only focusing on the adversarial examples generated by the last attacker, as in a GAN, thus yielding a more robust defense against various attacks.

Diversifying the adversarial examples

We find that mode collapse happens when only a single human-vs-machine discriminator HvM is used: the generated adversarial examples tend to be insensitive to the context, indicating that the generator finds a universal adversarial attacking pattern that can successfully attack HvM for most contexts. However, a corpus of similar adversarial examples makes adversarial training inefficient and the discriminator less robust. Therefore, we encourage the content of the adversarial examples to be diverse by integrating HvM with a pre-trained human-vs-random classifier (Gao et al., 2020), HvR. HvR is trained to predict whether a response is randomly retrieved or is the ground truth. The final discriminator score s_D(y|x) is the geometric mean 2 of the outputs of HvM and HvR:

s_D(y|x) = [ HvM(y|x) · HvR(y|x) ]^(1/2)

For the generator, we increase the diversity of adversarial examples by randomly changing the hyper-parameters of the generator decoding process.
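The score ensembling just described, together with a temperature-scaled sampler of the kind used to vary decoding, can be sketched as follows (the HvM/HvR scores here are toy values assumed to lie in [0, 1], and the stubbed logits are hypothetical):

```python
import math
import random

# Sketch: (1) combining human-vs-machine (HvM) and human-vs-random (HvR)
# scores by geometric mean, and (2) temperature-scaled token sampling used
# to diversify adversarial decoding. All inputs are toy values.

def ensemble_score(hvm, hvr):
    """s_D(y|x) as the geometric mean of the two classifier outputs."""
    return math.sqrt(hvm * hvr)

def sample_with_temperature(logits, temperatures=(0.3, 1, 10, 100), rng=None):
    """Pick a temperature uniformly at random, then sample a token index."""
    rng = rng or random.Random(0)
    t = rng.choice(temperatures)
    exps = [math.exp(l / t) for l in logits]
    z = sum(exps)
    probs = [e / z for e in exps]
    r, acc = rng.random(), 0.0
    for i, p in enumerate(probs):
        acc += p
        if r <= acc:
            return i
    return len(probs) - 1

# A context-insensitive response may score high on HvM but low on HvR,
# dragging the combined score down:
s = ensemble_score(hvm=0.9, hvr=0.1)  # approx 0.3
```

The geometric mean penalizes responses that fool only one of the two classifiers, which is exactly what breaks the universal-attack mode collapse described above.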
The decoding temperature T is uniformly sampled from a range of levels (0.3, 1, 10, 100) to control the token generation probability distribution.

Data

As we focus on open-domain dialogue, we use Reddit data obtained from a third-party Reddit dump, 3 following Gao et al. (2020).

Baselines

We compare ATT with the following models:
• SL by Gao et al. (2020) is trained via SL on human-vs-DialoGPT data.
• GAN is similar to ATT, but it does not apply the generator re-initialization or adversarial example ensemble strategies of Section 2.3.
• ND (Non-Diverse ATT) is a variant of ATT without the diversifying objective of Section 2.4.

Results

As shown in Figure 2, the accuracy of D for the ATT method gradually increases as the game continues, and finally remains above 0.75 after about 30 turns of the game. This indicates that our convergence criterion is met. The convergence of D's accuracy is accompanied by an improvement in G's generation quality. As shown by the examples of Table 1, a response that is not grammatically correct in the early stage (e.g., Turn-5) can successfully fool D, but G tends to generate more human-like responses at later stages (e.g., Turn-40). This is desirable, as an ideal discriminator should only fail when generated responses are sufficiently similar to human-written replies. For GAN, the discriminator accuracy has a wide variance band and always performs poorly for some attackers (accuracy < 0.5 in Figure 2): although its discriminator successfully defends against its latest generator, it forgets the patterns learned in previous turns. For ND, the adversarial examples tend to be similar across different contexts, which makes learning inefficient. Therefore, its discriminator can be easily attacked by a new generator, keeping its accuracy lower than 0.50, as shown in Figure 2.
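The convergence behavior just described (accuracy dropping during each attack phase and recovering during defense until it stays above threshold for several consecutive turns) can be mimicked by a toy simulation of the game loop. All training dynamics below are illustrative stand-ins, not the actual RL/SL updates; only the control flow mirrors the roles of c_low, c_hi, N_G, N_D, and m in the text:

```python
# Toy sketch of the iterative attack-defense game. Real training of G (RL)
# and D (SL) is replaced by simple stand-in accuracy dynamics.

def play_game(c_low=0.55, c_hi=0.75, N_G=50, N_D=50, m=3, max_turns=100):
    acc = 0.9           # validation accuracy of D
    attack_power = 1.0  # stand-in: later attackers gain less ground
    history = []
    for turn in range(max_turns):
        # Attack phase: G (re-initialized) trains until D drops below c_low
        # or N_G steps pass, whichever comes first.
        for _ in range(N_G):
            acc -= 0.01 * attack_power
            if acc < c_low:
                break
        # Defense phase: D trains on all adversarial datasets so far.
        for _ in range(N_D):
            acc += 0.02
            if acc > c_hi:
                break
        history.append(acc)
        attack_power *= 0.9
        # Converged once D stayed above c_hi for the last m turns.
        if len(history) >= m and all(a > c_hi for a in history[-m:]):
            return turn + 1, acc
    return max_turns, acc

turns, final_acc = play_game()
```

Under these stand-in dynamics the loop terminates with the discriminator accuracy held above c_hi, the same qualitative picture as Figure 2.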
Besides DialoGPT and its adversarially-trained version, Adversarial G, we consider the following external attackers to test the robustness of the learned discriminator. See Table 1 for examples.
• GPT-3 (Brown et al., 2020): We called its online API to obtain dialogue responses, in both greedy and sampling decoding settings.
• Parrot: This system samples a turn from the context as the response, which is highly relevant to the context (high HvR score) but is generally not a good response. We consider this system to test whether D relies solely on relevancy.

As illustrated in Table 2, SL shows poor performance on many unseen datasets. GAN and ND perform slightly better on GPT-3 but not well on other attacks. In contrast, ATT shows a significant increase in accuracy even on unseen GPT-3 datasets.

Related Work

Dialogue evaluation and ranking. Open-domain dialogue systems are often evaluated using the similarity between hypotheses and references, e.g., BLEU (Papineni et al., 2002). Lowe et al. (2017b) trained an evaluation model with context, reference, and hypothesis as inputs. Complementary to this, corpus-level metrics for diversity (Li et al., 2016a) and other aspects have been proposed. When a reference is not available, dialogue ranking models (Zhou et al., 2018; Gao et al., 2020) can be applied instead.

Reinforcement Learning. RL has been used to guide dialogue generators using rewards, which can be hand-designed (Li et al., 2016b), obtained from a pre-trained classifier (Shin et al., 2019), or extracted from user responses (Jaques et al., 2020).

Conclusions

In this work we propose to learn a robust human-vs-machine discriminator through an iterative attack-defense game. Diversified and unrestricted adversarial examples are automatically generated and used to fine-tune the discriminator. This significantly increases robustness, in terms of classification accuracy on unseen attacks.
Ethical Considerations

We cautiously advise users of our system to be careful about potential bias in the dataset used to train our model. The raw data is publicly available, but the texts written by humans have varying levels of quality, and the dataset may contain offensive and/or toxic language. A proper definition of human-written text quality is beyond the scope of this work, as we focus on learning a human-vs-machine discriminator in this short paper.

We used one P100 GPU for training. The training time for each method is approximately 48 hours. The generator and discriminator have about 700M parameters. The code and our models will be open-sourced together with the details of the training hyperparameters.
Tourism as a catalyst for regional development: Uzbekistan's experience and economic prospects

The transformative potential of tourism as an economic driver has been the subject of extensive academic research. In the context of transitional economies, this relationship assumes a more nuanced role, with tourism acting as both a catalyst for and a product of regional development. Among transitional economies, Uzbekistan presents a unique case study, characterized by its burgeoning tourism industry and robust policy frameworks aimed at regional development. This paper analyzes the multifaceted role of tourism in catalyzing regional development in Uzbekistan. Employing a mixed-methods approach, the study triangulates findings from econometric modeling, surveys, and qualitative interviews. The purpose of our interdisciplinary research is to evaluate the direct and indirect impacts of tourism on regional economic parameters, including but not limited to GDP growth, the employment rate, and infrastructure development. Our econometric model analyzes time-series data from 2000 to 2021 and employs several control variables to isolate the economic impact attributable to tourism. This quantitative inquiry is further enriched by qualitative data sourced through semi-structured interviews with stakeholders in the tourism industry, policymakers, and local communities. Our findings indicate that tourism in Uzbekistan acts as a catalytic agent for regional development along multiple dimensions. Not only does it contribute directly to GDP growth and employment, but it also plays a significant role in cultural preservation, infrastructural improvements, and the enhancement of social capital. Furthermore, our analyses underscore the essential role of sustainable tourism policies in amplifying these positive impacts while mitigating potential negative externalities.
Therefore, the study's outcomes bear substantial implications for policymakers, suggesting that a nuanced approach to tourism management could serve as a key strategy in holistic regional development.

Introduction

Tourism, as a multifaceted economic activity, holds significant potential to influence various dimensions of regional development. Defined not merely as leisure but as the aggregate of services and amenities facilitating travel, tourism extends beyond its immediate economic implications, intersecting with social, cultural, and infrastructural sectors (Kondrateva, 2019). Over the past decades, the dynamics of tourism have undergone extensive transformation, especially in transitional economies that are increasingly becoming focal points for both academic research and policy formulation. Among these transitional economies, Uzbekistan is a unique case, characterized by its rich cultural heritage, diverse natural landscapes, and significant efforts in policy reforms aimed at economic diversification and regional development (Pomfret, 2019).

While the relationship between tourism and economic development is a well-trodden research avenue, less attention has been paid to the intricate links between tourism and broader regional development, particularly in transitional economies such as Uzbekistan (Atstaja, 2020). This includes not just GDP growth and employment but also factors like infrastructure development, social capital, and cultural preservation, which form the framework of sustainable regional development (Haseeb et al., 2019). Consequently, there is an imperative need to investigate how tourism acts as a catalyst for regional development, encompassing both its economic and non-economic implications.
In the transition from an agro-industrial economic system to an industrial or post-industrial system, the development of the service sector, especially tourism, and the assessment of its impact on economic growth are of strategic importance for Uzbekistan. Between 2017 and 2022, the share of tourism services in Uzbekistan's total export of services increased 1.8 times: in 2017 the figure was 22.1%, and by 2022 it had reached 40.7%. It can be seen that the development of tourism directly drives growth in export volumes (Abdurakhmanova, 2022). In 2023, Uzbekistan was among the top 20 countries with the fastest-growing tourism industries in the world, earning 1.72 billion dollars, and ranked fourth with about 6,700,000 tourist visits. Moreover, about 30% of the income earned in tourism flows to the population as wages, compared with no more than 10% in industry and other sectors. As a result, the development of tourism contributes to GDP growth by raising the purchasing power and consumption level of the population of Uzbekistan.

Despite substantial research on the economic effects of tourism globally, the extant literature is scant on the case of Uzbekistan and its regional specificities. Moreover, most studies focus predominantly on economic indicators while overlooking multi-dimensional impacts, including the social and cultural facets that are crucial for holistic regional development (Brown & Hall, 2018; Roberts & Simpson, 2019). This study aims to bridge this research gap by undertaking a comprehensive investigation into how tourism affects various aspects of regional development in Uzbekistan.

The primary objective of this study is to quantify and interpret the role of tourism as a catalyst for regional development in Uzbekistan. The specific research questions include:
1. How does tourism contribute to economic growth in different regions of Uzbekistan?
2. What is the impact of tourism on employment rates and income distribution?
3. How does tourism affect infrastructure development, including transportation, utilities, and public amenities?
4. What role does tourism play in social cohesion and cultural preservation?

The research adopts a mixed-methods approach, comprising both quantitative and qualitative analyses. Econometric modeling is employed to analyze statistical data, while qualitative insights are gained through semi-structured interviews with key stakeholders, such as policymakers, local business owners, and residents.

The findings of this study hold significant implications for policy formulation. By evaluating the economic and non-economic impacts of tourism, the research aims to offer a nuanced understanding that can guide policymakers in optimizing the benefits of tourism for regional development while minimizing potential negative externalities. Furthermore, this study contributes to the academic discourse by enriching the understanding of the multifaceted role that tourism plays in transitional economies, particularly in the context of Uzbekistan.

Materials and Methods

This study adopts an interdisciplinary mixed-methods approach to investigate the role of tourism as a catalyst for regional development in Uzbekistan. The mixed-methods framework allows for the triangulation of data, providing a multi-dimensional perspective that enriches the robustness and generalizability of the findings. Policy documents related to tourism and regional development were also analyzed.

Quantitative Data

For quantitative analyses, all available data for the period 2000-2021 were included to ensure comprehensive coverage. For qualitative data, a purposive sampling strategy was employed, targeting stakeholders who have significant influence or insight into the tourism sector in Uzbekistan.

Quantitative Analysis

1. Econometric Modeling: A multiple linear regression model was employed to evaluate the impact of tourism on regional economic indicators. The model controls for external factors such as global economic trends, local economic policies, and other sectoral contributions.
2. Statistical Software: Data analyses were performed using Stata 16.0, with the significance level set at p < 0.05.

Qualitative Analysis

1. Thematic Analysis: Transcripts from stakeholder interviews were subjected to thematic analysis using NVivo software to identify recurring themes and patterns.
2. Policy Analysis: A content analysis approach was used to evaluate policy documents, focusing on the articulation and implications of tourism-related policies for regional development.

To ensure the validity and reliability of the findings, several measures were implemented. For quantitative data, diagnostic tests for multicollinearity, heteroskedasticity, and endogeneity were conducted. For qualitative data, triangulation was employed to cross-verify findings across different data sources and analytical methods.

Brief Literature Review

The academic investigation into the role of tourism as a catalyst for regional development, particularly in the context of Uzbekistan, comprises a rich tapestry of multi-disciplinary perspectives (Parshukov, 2021). The emergent narrative coalesces around several pivotal domains, including but not limited to economic implications, socio-cultural dynamics, policy intervention strategies, and sustainability paradigms (Astanakulov, 2020).
Economic Implications: The Uzbekistan model is often cited as a paradigmatic example of how tourism can serve as a robust engine of economic development (Grubor et al., 2019). While the sector's contributions to the Gross Domestic Product (GDP) are the most prominently cited metrics (Fauzel, 2021), the economic implications extend into diverse domains such as job creation, foreign exchange accumulation, and infrastructural development (Pomfret, 2019). The spillover effects include the bolstering of ancillary sectors such as transportation, hospitality, and retail, among others. Nonetheless, the quantification of these contributions often poses methodological challenges due to the multi-faceted nature of economic interactions and the prevalence of confounding variables (Wautelet, 2018).

Socio-cultural Dynamics: Tourism in Uzbekistan also serves as a vehicle for the dissemination and preservation of the country's rich cultural and historical heritage (Calero, 2020). However, there exists a paradox wherein the very act of commodifying culture for tourist consumption raises ethical and sustainability questions. Questions are raised about the potential dilution of cultural integrity and the homogenization of unique regional attributes in favor of a more globally palatable, but less authentic, experience (Štreimikienė et al., 2016).

Policy Intervention Strategies: The role of policy cannot be understated in shaping the trajectory of tourism-led regional development in Uzbekistan. Strategic public-private partnerships, fiscal incentives, and regulatory frameworks are critical in facilitating a tourism ecosystem that is both lucrative and sustainable (Karimova, 2023). Policy interventions also extend to international diplomacy and geopolitical considerations, given the transboundary nature of tourism.
Sustainability Paradigms: Given the increased emphasis on sustainable development, particularly in the wake of global climate change, the discourse around tourism in Uzbekistan has evolved to integrate ecological considerations. Sustainable tourism seeks to reconcile the often-conflicting objectives of economic benefit and environmental preservation (Astanakulov, 2022). The introduction of eco-tourism initiatives and conservation practices serves as a testament to this evolving paradigm (Haseeb et al., 2019).

The discourse surrounding tourism as a catalyst for regional development in Uzbekistan is thus characterized by its complexity and multi-disciplinarity. While economic metrics remain crucial, a nuanced understanding necessitates a holistic approach that integrates socio-cultural, policy-oriented, and sustainability considerations.

Results

The results section provides a comprehensive analysis of the multi-dimensional impact of tourism on regional development in Uzbekistan. Utilizing a mixed-methods approach, we examined an array of economic indicators, from GDP growth to employment rates, along with qualitative insights from key stakeholders. Our findings shed light on the transformative potential of tourism in driving multiple facets of regional development. Tourism as a catalyst for regional development in Uzbekistan is systematized and visualized in Figure 1.

Economic Growth

Our econometric model indicates that a 1% increase in tourist arrivals results in a 0.15% increase in regional GDP (p < 0.05). Furthermore, the tourism sector's contribution to GDP shows a consistent upward trend, increasing from 2.2% in 2005 to 5.8% in 2021 (Table 1).

Employment

Tourism has also emerged as a significant employment generator. The sector accounted for approximately 6.5% of total employment in 2021, up from 4.8% in 2005 (Table 2).
Key stakeholders, including policymakers and business owners, emphasized the pivotal role of tourism in infrastructure development. They noted improvements in transportation, utilities, and public amenities, particularly in tourist-heavy regions like Samarkand and Bukhara.

Community leaders highlighted tourism's role in cultural preservation and social cohesion. Increased tourist footfall has led to enhanced investment in cultural heritage sites, thereby fostering a sense of communal identity and pride.

The general form of the multiple linear regression model used to evaluate the impact of tourism on regional GDP is expressed as follows:

GDP_t = β_0 + β_1 × Tourist Arrivals_t + β_2 × Control Variable_1t + β_3 × Control Variable_2t + ε_t, (1)

where: GDP_t represents the Gross Domestic Product at time t; Tourist Arrivals_t is the number of tourist arrivals at time t; Control Variable_1t and Control Variable_2t are control variables included to account for external factors affecting GDP; ε_t is the error term.

Additionally, to ascertain the effect of tourism on employment, a similar regression model was applied:

Employment_t = γ_0 + γ_1 × Tourist Arrivals_t + γ_2 × Control Variable_1t + γ_3 × Control Variable_2t + μ_t, (2)

where: Employment_t signifies the total employment in tourism at time t; μ_t represents the error term.

A series of diagnostic tests were conducted to address potential multicollinearity, endogeneity, and heteroskedasticity issues in the model. Variables found to be highly collinear were either removed or consolidated through factor analysis. For endogeneity, two-stage least squares regression was employed, and for heteroskedasticity, robust standard errors were computed. The results from these econometric models indicate a strong positive correlation between tourism and key economic indicators like GDP and employment. Specifically, a one percent increase in tourist arrivals was found to be associated with a 0.15 percent increase in the regional GDP and a 0.10 percent increase in total employment within the sector. The p-values for these coefficients were below 0.05, thereby suggesting statistical significance.
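The GDP specification described above (a linear model of GDP on tourist arrivals plus control variables, estimated by least squares) can be sketched numerically. The snippet below is a minimal illustration on synthetic data, not the study's actual Uzbek series: the regressor distributions and the "true" coefficients are made up, with the arrivals coefficient set to 0.15 only to echo the estimate reported in the text.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200

# Hypothetical regressors (synthetic, illustrative units only).
tourist_arrivals = rng.normal(100.0, 10.0, n)
control_1 = rng.normal(50.0, 5.0, n)
control_2 = rng.normal(20.0, 2.0, n)

# Assumed "true" coefficients for the sketch: intercept, arrivals, controls.
beta_true = np.array([10.0, 0.15, 0.5, -0.3])

# Design matrix with an intercept column, and GDP with an error term (epsilon_t).
X = np.column_stack([np.ones(n), tourist_arrivals, control_1, control_2])
gdp = X @ beta_true + rng.normal(0.0, 0.1, n)

# Ordinary least squares: beta_hat minimizes ||gdp - X @ beta||^2.
beta_hat, *_ = np.linalg.lstsq(X, gdp, rcond=None)
```

With real data one would additionally compute standard errors and p-values (e.g. with a dedicated econometrics package) and run the diagnostic checks the text describes; the sketch only shows the point estimation step.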
The analysis corroborates the idea that tourism acts as a significant lever for regional economic development in Uzbekistan. It also supports the qualitative findings which suggest improvements in infrastructure and social cohesion. These results contribute to a growing body of evidence that supports tourism-centered economic policies for sustainable development.

To comprehensively capture the multi-dimensional impact of tourism on regional development, a specialized econometric model tailored for the context of Uzbekistan could be formulated as the integrated impact model given in equation (3). This econometric model can serve as an encompassing framework for assessing the impact of tourism on regional development in Uzbekistan, while accounting for a variety of macroeconomic and sociocultural variables.

Let us consider a diagram representing the calculated Integrated Impact of Tourism on Regional Development in Uzbekistan for the years from 2005 to 2022. The results presented in Figure 2 show actual data on Regional Development in Uzbekistan (2005-2022), based on data of Worldbank (2022).

The effects of tourism in Uzbekistan are multi-dimensional, impacting various sectors directly and indirectly. For example, a growth in tourism from 250,000 tourists in 2005 to an estimated 490,000 in 2022 correspondingly impacts the hospitality sector, leading to an increase in hotels from approximately 500 in 2005 to over 1,200 by 2022. Tourism has catalyzed infrastructure development. To illustrate, consider that while the infrastructure index in 2005 was at a relatively low 0.4, it had increased to a more robust 0.64 by 2022. This metric captures improvements in transport systems, road networks, and essential services like water and electricity supply. Such an increase in the infrastructure index typically correlates with an improved tourist experience, higher tourist numbers, and therefore, increased regional GDP.
Tourism's capacity to generate employment is especially significant in the regions focused on tourism. Employment in the tourism sector has increased from an estimated 50,000 in 2005 to 110,000 in 2022. As a rule of thumb, an increase in tourism directly correlates with increased opportunities for employment, both in terms of the service industry and associated sectors like handicrafts, cultural shows, etc.

Tourism also has the unique potential to incentivize the preservation of local culture and heritage. The cultural preservation index has seen a gradual increase from 0.3 in 2005 to 0.42 in 2022. This has led to the restoration of various cultural sites, increased cultural events, and better maintenance of heritage locations, enriching the overall socio-cultural fabric of the region. One of the often-overlooked aspects of tourism-driven development is the improvement in social cohesion. Our hypothetical model quantifies this as "Social Cohesion", which improved from 0.5 in 2005 to 0.62 in 2022. This increase signifies that local communities are better integrated into the tourism economy, leading to a more equitable distribution of the economic benefits of tourism.

Combining these aspects (tourist arrivals, regional GDP, employment in tourism, infrastructure development, cultural preservation, and social cohesion), the integrated impact score can be calculated. This score captures the aggregate influence of tourism on regional development. By our calculations, the score has shown a steady increase from 3,100 in 2005 to 7,336 in 2022. While the figures in this narrative are hypothetical, they illustrate the potential multi-faceted impact of tourism on the regional development of Uzbekistan. Through the combination of various indicators into an integrated impact score, policymakers and stakeholders can better understand the ripple effects of tourism on different aspects of society and the economy, thereby making more informed decisions.
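The composite score described above can be sketched as a weighted sum of the indicators. Both the weights and the indicator values below are illustrative assumptions (the text itself labels its figures hypothetical), not the paper's actual calibration; the point is only the mechanics of aggregating heterogeneous indicators into one index.

```python
# Hypothetical weights: counts are scaled down, [0, 1] indices scaled up,
# so each component contributes on a comparable order of magnitude.
WEIGHTS = {
    "tourist_arrivals": 0.004,
    "employment": 0.01,
    "infrastructure": 1000.0,
    "cultural_preservation": 1000.0,
    "social_cohesion": 1000.0,
}

def integrated_impact(indicators: dict) -> float:
    """Aggregate indicator values into a single composite impact score."""
    return sum(WEIGHTS[key] * indicators[key] for key in WEIGHTS)

# Indicator values echoing the narrative above (illustrative only).
year_2005 = {"tourist_arrivals": 250_000, "employment": 50_000,
             "infrastructure": 0.40, "cultural_preservation": 0.30,
             "social_cohesion": 0.50}
year_2022 = {"tourist_arrivals": 490_000, "employment": 110_000,
             "infrastructure": 0.64, "cultural_preservation": 0.42,
             "social_cohesion": 0.62}
```

With these assumed weights the score rises between the two years, mirroring the qualitative trend in the text; a real exercise would estimate the weights (the θ coefficients of the integrated model) from data rather than fix them by hand.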
It is vital to mention that while the modeled outcomes posit a generally favorable influence of tourism on regional development, this should not obscure potential negative externalities, such as environmental degradation or cultural commodification, which also warrant rigorous academic scrutiny.

The interpretation of time-series data for multiple variables associated with the tourism industry in Uzbekistan requires a comprehensive approach, encompassing both quantitative and qualitative methodologies. As the first step in this analytical journey, empirical data should be collected and categorized for the period between 2005 and 2022. This set of data will form the basis for a range of inferential statistics, including, but not limited to, correlation matrices and graphical diagrams (Table 3).

Upon collecting this data, the next logical step involves performing various statistical analyses, like Pearson's correlation coefficient calculations, to evaluate the relationships between these variables. Graphical representations such as scatter plots or line graphs could also enhance our understanding of the temporal dynamics at play. The intention is to discern any latent patterns or trends that might inform future projections and policy decisions. In the discussion section, we turn our attention to the synthesized interpretation of the empirical findings, guided by the original research objectives concerning the tourism industry's impact on regional development in Uzbekistan. The period under review, 2005-2022, has been transformative for the country's economy, including the tourism sector.
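The Pearson correlation step mentioned above can be sketched directly. The two short series below are hypothetical yearly values (not the paper's data); the function itself is the standard sample correlation coefficient.

```python
import numpy as np

def pearson_r(x, y):
    """Pearson's correlation coefficient between two equal-length series."""
    x = np.asarray(x, dtype=float)
    y = np.asarray(y, dtype=float)
    xc, yc = x - x.mean(), y - y.mean()
    return float((xc @ yc) / np.sqrt((xc @ xc) * (yc @ yc)))

# Hypothetical 2005-2009 excerpt, illustrative numbers only:
arrivals = [250, 270, 300, 320, 350]    # thousands of tourists
gdp_share = [2.2, 2.4, 2.7, 2.9, 3.2]   # tourism share of GDP, %
r = pearson_r(arrivals, gdp_share)
```

For a full correlation matrix over many indicators one would stack the series as rows and call `np.corrcoef`, which computes the same statistic pairwise.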
Figure 3 delineates a sequence of interactions among key stakeholders in the tourism sector of Uzbekistan, with a focus on economic indicators:

• Visa Policies and Approval: The Government of Uzbekistan initiates the sequence by disseminating visa policies to potential tourists. The tourists, in turn, submit visa applications, which upon scrutiny, are either approved or rejected by the government. This phase can be mathematically modeled as a function f(v), where v represents the visa policies, and the output represents the rate of visa approval.
• Booking Services: Tourists engage with tourism businesses to book various services. The businesses confirm these bookings. This interaction can be described by the function g(b), where b is the booking request and the output is the confirmation status.
• Currency Exchange: Tourists exchange their currency at the Central Bank. The bank provides the local currency in return. This can be modeled as h(c), where c is the foreign currency and the output is the local currency.
• Availing Services: Tourists avail the booked services from the businesses. This interaction can be quantified by i(s), where s is the service availed and the output is the quality of service.
• Tax Payment and Receipt: Businesses pay taxes to the government, which issues tax receipts. This can be modeled as j(t), where t is the tax amount and the output is the tax receipt.
• Reporting and Data Compilation: Businesses report their revenue to the Statistical Office, which compiles this data. This can be described by k(r), where r is the reported revenue and the output is the compiled data.
• Economic Indicators: The government requests economic indicators from the Statistical Office, which provides these indicators. This can be modeled as l(e), where e is the request for economic indicators and the output is the provided indicators.

Each of these functions and their outputs contribute to the overall economic indicators of the tourism sector in Uzbekistan, which can be collectively represented as a composite function F(v, b, c, s, t, r, e).

The analysis unveiled complex relationships between the variables. First and foremost, the role of tourism in gross domestic product (GDP) seems to be consistent with the theory of endogenous growth, wherein sectors with substantial forward and backward linkages contribute significantly to economic development (Sokhanvar, 2019). However, it is noteworthy that the data demonstrated nonlinear characteristics. For instance, the increase in GDP does not translate to a proportional increase in employment rates within the tourism sector, thus necessitating the exploration of potential bottlenecks or labor market rigidities. Moreover, we observed that the Infrastructure Index has a significant correlation with tourist arrivals. This affirms the theoretical underpinnings posited by the gravity model of trade, which can be adapted to tourism (World Bank, 2019). The better the infrastructure, the more likely the region is to attract tourists. It could be postulated that investments in infrastructure have a multiplier effect on the tourism sector, creating a virtuous cycle of regional development.
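The stage functions f(v) through l(e) and their aggregation into a composite F can be sketched as a small pipeline. The function bodies below are placeholder assumptions (approval rates, an exchange rate, etc. are invented for illustration); only the structure — independent stage functions combined into one composite indicator — follows the text.

```python
# Placeholder stage functions mirroring f, g, h, k from the text;
# all numeric values are illustrative assumptions, not real data.

def f_visa(policies):
    """Visa policies -> hypothetical approval rate."""
    return 0.9 if policies == "e-visa" else 0.5

def g_booking(request):
    """Booking request -> confirmation status."""
    return {"request": request, "confirmed": True}

def h_exchange(foreign_amount, rate=12_000):
    """Foreign currency -> local currency at an assumed rate."""
    return foreign_amount * rate

def k_report(revenue_records):
    """Reported revenues -> compiled total."""
    return sum(revenue_records)

def F(policies, request, foreign_amount, revenues):
    """Composite of the stage outputs, analogous to F(v, b, c, s, t, r, e)."""
    return {
        "approval_rate": f_visa(policies),
        "booking": g_booking(request),
        "local_currency": h_exchange(foreign_amount),
        "compiled_revenue": k_report(revenues),
    }
```

The remaining stages (i, j, l) would slot in the same way; the design point is that each stakeholder interaction is an independent mapping whose outputs are only combined at the level of the composite indicator.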
By contrast, the Cultural Index, while positively correlated with tourism, did not show as robust a relationship as the Infrastructure Index (World Bank, 2022). This could be an indicator of the utilitarian nature of tourism in Uzbekistan, where the infrastructure, comprising the availability of hotels, transportation, and other amenities, takes precedence over cultural attractions in driving tourism flows. This could signify a market inefficiency, where the cultural assets of the region are underutilized, thereby leaving potential economic rents unexploited.

The Hotel Occupancy Rate also demands attention. The variable is relatively inelastic with respect to tourist arrivals, hovering around a certain percentage regardless of the number of tourists. This scenario likely points to either an overcapacity in hotel accommodations or a preference among tourists for alternative lodging options, such as Airbnb. Either case posits strategic implications for stakeholders in the tourism industry.

Figure 4 shows the interactions taking place between the hotel industry in Uzbekistan and the overarching tourism policy, with a focus on economic indicators such as occupancy rates:

• Policy Formulation and Implementation: The Government of Uzbekistan is responsible for formulating tourism policies, denoted by f(p), where p represents the policy parameters. These policies are then implemented by the hotel industry.
• Service Offering and Booking: Hotels offer services to tourists, modeled by g(s), where s is the service offering. Tourists make bookings, and hotels confirm these, represented by h(b), where b is the booking request.
• Check-in and Room Allocation: Tourists check into hotels, and rooms are allocated, denoted by i(c), where c is the check-in request and the output is the room allocation.
• Check-out and Bill Payment: Upon completion of their stay, tourists check out and make bill payments, modeled by j(o), where o is the check-out request and the output is the bill payment.

Economically, these findings could be the foundation for deriving policy implications. For instance, a focus on infrastructure development could be more economically beneficial in the short term. However, for sustainable growth, a balanced investment in both infrastructure and cultural heritage preservation is imperative. One pivotal point that emerges is the concept of "tourism capital", an amalgam of natural, cultural, and human resources that serve as the substrate upon which the tourism industry thrives. Accumulating tourism capital entails substantial fixed and sunk costs but promises increasing returns to scale over the long term, subject to efficient management and resource allocation (Novikova, 2020).

Moreover, the question of the sector's sustainability looms large. Sustainability in this context is three-fold: economic, environmental, and socio-cultural. Economic sustainability involves ensuring that the tourism industry remains a viable contributor to GDP without displacing or cannibalizing other sectors. Environmental sustainability pertains to the responsible utilization of natural resources, which are often the primary draw for tourists but are also finite and subject to degradation (Karimova, 2023). Socio-cultural sustainability encompasses the impacts of tourism on the local populace, considering factors like social fabric, cultural preservation, and community engagement (Novikova, 2020).
Another layer of complexity arises when considering the role of government in tourism development. Public policies and regulations significantly influence the industry's trajectory, and therefore its role in regional development. For example, the ease of visa regulations can play a crucial role in attracting international tourists. Investment in public infrastructure can have spillover effects on the private investment landscape. The government can also serve as a coordinator, mitigating the common economic dilemma of coordination failure in investments that require network complementarities (Karimova, 2023). It is also crucial to discuss the notion of "economic resilience" in the context of tourism. The COVID-19 pandemic has elucidated the vulnerability of over-reliance on a single sector. Diversification, therefore, stands out as a strategy for building economic resilience, where the tourism industry complements rather than overshadows other sectors like manufacturing, agriculture, or technology-based services. Global factors also cast their shadow on this discussion. In a world increasingly defined by its interconnectedness, external economic conditions significantly influence the domestic tourism industry. Fluctuations in exchange rates, international trade policies, and even geopolitical stability can act as external shocks to the system, influencing both the supply and demand dynamics within the tourism sector.
Conclusions

In conclusion, this comprehensive study has endeavored to shed light on the intricate relationship between tourism and regional development, with a specific focus on the case of Uzbekistan. Utilizing a robust methodological framework that combines econometric analysis, content analysis, and extensive data sets, we have illuminated several critical facets of this relationship. Our findings not only affirm the positive impact of tourism on regional development but also highlight the sector's complexities, including its socio-economic and environmental dimensions.

We established that tourism in Uzbekistan serves as a potent catalyst for regional development, as evidenced by its significant contribution to GDP, employment generation, and infrastructural development. The sector's role in revitalizing rural economies, augmenting local commerce, and catalyzing ancillary industries, such as food and beverage, transportation, and handicrafts, is particularly noteworthy. Our study also underscores the essential role of policy interventions, infrastructure, and public-private partnerships in fostering a sustainable tourism ecosystem. Regulatory frameworks, investment incentives, and strategic planning have emerged as crucial elements that can either facilitate or impede the sector's growth and its consequent impact on regional development.
The econometric model constructed for this study, encapsulating variables like tourist inflow, expenditure, foreign exchange earnings, and capital investments, further substantiates these findings. Our empirical analysis, supported by intricate tables and diagrams, reveals significant correlations and causal relationships between the metrics evaluated. However, we also sounded a cautionary note on the vulnerabilities of relying too heavily on tourism for economic growth. The need for a diversified economic portfolio to buffer against global market volatilities and unprecedented challenges like the COVID-19 pandemic was emphasized. Similarly, the study advocated for a balanced approach to tourism development that harmonizes economic objectives with environmental sustainability and socio-cultural integrity.

• Occupancy Reporting and Data Compilation: Hotels report their occupancy rates to the Statistical Office, represented by k(r), where r is the reported occupancy rate. This data is compiled and sent to the government.
• Economic Indicators: The government requests economic indicators related to the hotel industry from the Statistical Office, denoted by l(e), where e is the request and the output is the provided indicators. The composite function representing the entire sequence can be expressed as F(p, s, b, c, o, r, e), encapsulating the multifaceted interactions and their impact on economic indicators in the hotel industry within the context of Uzbekistan's tourism policy.

Figure 1: Tourism as a catalyst for regional development in Uzbekistan. Source: Authors' own research.
Figure 2: Calculation of Integrated Impact of Tourism on Regional Development in Uzbekistan (2005-2022). Source: Own research based on data of Worldbank, 2022.
Figure 3: Tourism economic indicators dependencies. Source: Authors' own research.
Figure 4: Tourists' dependencies in accommodation reservation system in Uzbekistan. Source: Authors' own research.
1. Economic Indicators: Time-series data from 2005 to 2021 on Gross Domestic Product (GDP), employment rates, and other key economic indicators were sourced from Uzbekistan's State Committee on Statistics.
2. Tourism Statistics: Data on tourist arrivals, expenditure, and sectoral contribution to GDP were obtained from the World Travel & Tourism Council (WTTC) and the Uzbek Ministry of Tourism and Sports.

Table 2: Employment Generation by Tourism in Uzbekistan (2005-2022)

Integrated Impact_t = θ_0 + θ_1 × Tourist Arrivals_t + θ_2 × Regional GDP_t + θ_3 × Employment In Tourism_t + θ_4 × Infrastructure Index_t + θ_5 × Cultural Preservation_t + θ_6 × Social Cohesion_t + θ_7 × Control Variable_1t + θ_8 × Control Variable_2t + ω_t, (3)

where: Integrated Impact_t is a composite index measuring the overall regional development at time t, incorporating factors like economic growth, employment, infrastructure development, and social indicators; Tourist Arrivals_t denotes the number of tourist arrivals at time t; Regional GDP_t is the Gross Domestic Product of the tourism-focused regions at time t; Employment In Tourism_t signifies the total employment generated by the tourism sector at time t; Infrastructure Index_t is a composite index measuring the quality of infrastructure; Cultural Preservation_t and Social Cohesion_t are indices that quantify the impact on cultural heritage and social fabric; Control Variable_1t and Control Variable_2t are included to account for external factors affecting regional development, like global economic conditions or political stability; ω_t is the error term capturing the unobserved factors affecting Integrated Impact_t.

Table 3: Tourism economic indicators
Lexical Parametrization and Early Subjects in L1 Italian

Italian is a pro-drop language, since it allows subject drop and subject inversion. The pro-drop parameter is fixed early on (Orfitelli, 2008), but both grammatical and informational factors might regulate the distribution of overt clausal subjects. On the grammatical side, the verb class influences the distribution of overt subjects: overt subjects in Italian are more likely to be found with unaccusative verbs (Lorusso, Caprin & Guasti 2005). On the informational side, 1st and 2nd person pronouns are more likely to be dropped than 3rd person NPs because the latter are more informative (Serratrice, 2005): 1st and 2nd persons can be recovered from the discourse, while 3rd persons are totally event anchored and have to be identified referentially within the linguistic stimuli. The parametric differences encoded in the lexical items, for example the unaccusative vs unergative distinction for verbs or the person 'informative' morphology, influence subject drop in Italian: we propose a Lexical Parametrization account (Manzini & Wexler 1987) for subject drop in Italian, since the characteristics of lexical items influence the likelihood of appearance of an overt syntactic structure. Furthermore, the distribution of indefinite postverbal subjects found just with unaccusatives in early stages of the acquisition of Italian (Lorusso, 2014) confirms that the lexical characteristics of both the subject NPs and the verbs (in a subset relation with other NPs and verbs respectively) determine the parametric overt variation across the different stages of the acquisition of Italian, as Lexical Parametrization predicts.

Introduction

In this paper we will show that the distribution of overt subjects in Italian is linked to the morpho-syntactic features of the lexical elements found in each sentence. Italian is a pro-drop language which parametrically allows subject drop.
Overt subjects in Italian are more likely to be found with unaccusative verbs (Lorusso, Caprin & Guasti 2005) in postverbal position and with 3rd person indefinite subjects (Lorusso 2014). This pattern of distribution of overt subjects seems to be generated by the parametric variation across the lexical items that are inserted in the morpho-syntactic derivation (Chomsky 2001, Borer 1984). The Lexical Parametrization Hypothesis seems to be at work in the acquisition of Italian, since the parametric variation between lexical items influences the data of the early spontaneous productions. In this respect we propose a corpus analysis of the spontaneous speech of four children and their parents and caregivers. We will show that both adults and children use overt subjects depending on the morpho-syntactic features of the lexical items involved in the sentences. Although the pro-drop parameter is set early on, different lexical and morpho-syntactic features influence the distribution of overt subjects. Indefiniteness has a central role among the different lexical parameters that interact in determining the pattern of distribution of overt subjects. The definiteness of the subject DPs represents a subset condition for the postverbal subject with unaccusatives, especially in child grammar.

In section 2 we will present the general data about subject drop in spontaneous speech in Italian. Since the Italian verbal agreement paradigm expresses the ϕ-features necessary for local recovery of the content of dropped subjects, subject drop is acquired early on by children (Hyams, 1986, Bloom, 1991, Valian, 1991). Nevertheless, dropped subjects are not found at the same rate in all sentences. There are pragmatic reasons, such as the informativeness and the recoverability of the subject DPs, that influence the pattern of omission in spontaneous speech (Serratrice, 2005, Serratrice & Sorace, 2003).
However, the pragmatic principles at work in the information structure operate within the boundaries imposed by grammar (Serratrice & Sorace, 2003). In section 3 we will show that the pattern of distribution of overt subjects depends on the lexical-syntactic class of the verbs they are found with. The loci of generation of the subjects within the VP shells (external/internal argument) influence the likelihood that a subject DP is overt. In section 4 we will consider how the syntax of pre- and postverbal overt positions of the subjects influences the pattern of distribution of overt elements. We will show that the person (1st and 2nd person vs. 3rd person) and the definiteness of the subject DPs play a central role in the appearance of overt postverbal subjects. This will lead us to propose, in section 5, that a subset condition is at work with indefinite subjects, especially in the earliest stages of the acquisition of Italian. Section 6 is devoted to some conclusive remarks: the Lexical Parametrization Hypothesis is part of the internal structure of the grammar and represents a powerful cognitive mechanism in the acquisition of language.

International Journal of Linguistics, ISSN 1948-5425, 2017 (www.macrothink.org/ijl)

Pro Drop Parameter

Italian is a null subject language. The central idea is that languages allow pro drop to the extent that their verbal agreement paradigm expresses the ϕ-features necessary for local recovery of the content of dropped arguments (see Taraldsen, 1978, Rizzi, 1986). Italian allows null subjects due to the rich verbal morphology that permits their identification through the overt features of person and number. Children from the very early stage correctly fix the pro-drop parameter (Lorusso, Caprin & Guasti, 2005, Serratrice, 2005, Hyams, 2007, Orfitelli, 2008). Early null subjects in Italian have been a matter of investigation especially in a comparative perspective with English.
It is well known (Hyams, 1986, Bloom 1990, Valian 1991, Rizzi, 1993/1994) that young children learning English may omit referential subjects, albeit English is a non-pro-drop language. Valian (1991), for instance, compared the percentage of early null subjects in English with Italian productions. She found that while in English early null subjects are 30%, in Italian they are 70%. The difference in ratio between the two languages was taken by Valian as proof of the fact that the two types of null subjects were linked to different phenomena. Different studies have focused on the distribution of null subjects in the spontaneous speech of Italian learners (Lorusso, Caprin & Guasti, 2005, Serratrice 2005). Children from the very early stage correctly fix the pro-drop parameter. In Table 1 we report the data from Lorusso (2014) on the longitudinal corpus of spontaneous productions of four Italian children aged between 18 and 36 months (Calambrone corpus, Cipriani et al., 1989: Diana, Martina, Raffaello, Rosa; CHILDES database, MacWhinney & Snow, 1985): the production of null subjects is similar between adults and children (as also in Lorusso 2014, Serratrice 2005).

Besides the general data in Table 1, the distribution of overt/null subjects in Italian has often been claimed to be determined by pragmatics. Serratrice (2005) found that children, after the MLUW stage of 2.0, use null and overt subjects in a pragmatically appropriate way: she catalogued subjects on the basis of their informativeness. The subjects that are the most informative are realized overtly and, conversely, those that are the least informative are null. She investigated three parameters of informativeness: 1) the informativeness of the person morphology: 3rd person subjects are more likely to be realized overtly than first or second ones; 2) the activation state of referents; 3) disambiguation of the referent.
From the point of view of the acquisition of grammar, data like the ones in Table 1 confirm that children use the null pro element early on, since the rich Italian verbal morphology permits its identification through the overt features of person and number. In other words, the Extended Projection Principle (EPP, Chomsky, 1981) is satisfied from the very first stage of the acquisition of Italian by the presence of the null pro element. The discussion about the existence of pro has been a central topic in recent years (Barbosa, 1995, Nicolis, 2005, Holmberg, 2005, among others), especially within the minimalist framework of Chomsky (1995). Ruling out the presence of pro is beyond the scope of the present work, but for our purposes the inflection of the finite verb plays a role both in identifying the phi-features of the referential subjects and in satisfying the EPP in a language like Italian.

In the terms of Manzini & Savoia (2007), the EPP property corresponds to a D(efiniteness) closure requirement: the subject DP or the finite verb morphology has the denotational content D(efiniteness). If we use the D(efiniteness) feature we can define the pro drop parameter in terms of how different languages realize this feature (Manzini & Savoia, 2007). The D position of the sentential I domain can be lexicalized by a specialized head (such as subject clitics in northern Italian dialects), by a full noun phrase (English), or by either a specialized head or a full phrase (French). By contrast, in a language like Italian the D position of the sentential I domain is not lexicalized, while the D argument is lexicalized only at the morphological level by the inflection of the finite verb. In terms of the parametric condition on the lexicalization of the D properties, Manzini & Savoia (2007) propose a schematization like in (1).
The divide between (a) and (b) in (1) corresponds to the classical divide between null subject languages and non-null subject ones.

(1) Lexicalization of the D properties of the sentential I domain:
a. i. by clitic (e.g. northern Italian dialects)
   ii. by clitic or noun phrase (e.g. Ladin dialects, French)
   iii. by noun phrase (e.g. English)
b. no lexicalization (e.g. Italian)

In this respect the pro drop parameter can be restated in terms of Lexical Parametrization: the parameter is set depending on how the D features are lexicalized. So, Italian children seem to acquire early on how the D features are given. Nevertheless, the distribution of overt subjects in Italian is not homogeneous across syntactic frames (Serratrice 2005, Lorusso, 2007, 2014): other lexical and morpho-syntactic features, which are in a Subset relation to the general pro drop (D) parameter, influence the distribution of overt subjects. The verb classes, the scope-discourse semantics implied by the pre- or postverbal position of the overt subjects, the person morphology and the (in)definiteness of the subject DPs are the lexical(-syntactic) features that we will consider in the next sections. We will start by showing in the next section that verb classes imply different uses of overt subjects in both adults' and children's spontaneous speech.

Null Subjects and Verb Classes

The general data about overt and null subjects in spontaneous speech show that children fix the pro drop parameter early on: that is, they omit subjects at the same rate as adults. However, the distribution of overt subjects is not uniform across all the sentences of the spontaneous speech. The first 'subset' that we analyze is the verbal class. We differentiate verb classes by the projection of an external argument in the vP. Unaccusatives do not project external arguments (2), while unergatives (3) and transitives (4) do project an external argument in spec vP.
(2) Unaccusatives

External arguments are not true arguments (Pylkkänen 2002, Kratzer 1996). In other words, Pylkkänen and Kratzer argue that the external argument is not introduced by the verb, but by a separate predicate, which Kratzer calls 'Voice'. Voice is a functional head denoting a thematic relation that holds between the external argument and the event described by the verb; it combines with the VP by a rule called Event Identification. Event Identification allows one to add various conditions to the event that the verb describes; Voice, for example, adds the condition that the event has an agent (or an experiencer, or whatever one considers a possible thematic role for external arguments). Verbs are supposed to be parameterized in the lexicon as to whether they project an external argument or not. Children (and adults) show a systematic behavior depending on whether the subject is an external argument or an internal argument. In Table 2 we report the data of Lorusso (2014) about the distribution of overt subjects across verb classes in children and adults.

Table 2. General data about the distribution of overt subjects across verb classes in children's and adults' productions (absolute numbers and percentages) (Lorusso 2014)

The general results in Table 2 show a tendency in both adults and children to produce fewer overt subjects with transitives and unergatives than with unaccusatives. Children significantly (p < 0.05) produce more overt subjects with unaccusatives than with other verb classes (χ² = 36.21, df = 2, p = 0.00001). Each verb is stored in the lexicon with the information on whether it projects an external argument or not. The lexical information about verb class influences the syntactic configuration of the VP shells and has an effect on the pattern of distribution of overt subjects for both children and adults. Children seem to be sensitive to the lexical parameterization of verbs.
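Chi-square figures like the one just reported (χ² = 36.21 with df = 2) can be recomputed mechanically once the raw contingency counts are available. A minimal sketch in pure Python; the counts below are purely illustrative and are not the actual figures from Lorusso (2014):

```python
# Pearson chi-square test of independence for a contingency table:
# rows = {overt subject, null subject},
# columns = {transitive, unergative, unaccusative}.
# NOTE: the counts below are illustrative only, not Lorusso's (2014) data.

def chi_square(table):
    """Return (chi2, df) for a contingency table given as a list of rows."""
    row_totals = [sum(row) for row in table]
    col_totals = [sum(col) for col in zip(*table)]
    grand = sum(row_totals)
    chi2 = 0.0
    for i, row in enumerate(table):
        for j, observed in enumerate(row):
            expected = row_totals[i] * col_totals[j] / grand
            chi2 += (observed - expected) ** 2 / expected
    df = (len(table) - 1) * (len(table[0]) - 1)
    return chi2, df

# Illustrative counts in which overt subjects are over-represented
# with unaccusatives, mirroring the tendency reported in the text.
table = [
    [40, 45, 120],    # overt subjects
    [260, 255, 180],  # null subjects
]
chi2, df = chi_square(table)
print(df)  # 2, the same degrees of freedom as the test reported above
# The critical value for df = 2 at p = 0.05 is 5.99; here chi2 far exceeds it.
```

The same routine, fed with the 2-by-2 counts of preverbal vs. postverbal subjects, would yield df = 1, matching the subject-position test reported below.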
But why should the verb class influence the pattern of distribution of overt subjects? Our hypothesis is that the lexical parametrization of verbs has an effect on the syntactic derivation and interacts on the one side with the position of overt subjects and on the other with the morpho-syntactic features of the overt subject DPs. In order to confirm this general hypothesis, we will check the position (preverbal or postverbal) of overt subjects in the spontaneous speech, since each lexical-syntactic verb class involves a different syntactic derivation for pre- or postverbal overt subjects.

Subject Position

Following the formulation of the pro-drop parameter of Rizzi (1982, 1986), a pro-drop language like Italian also allows: 1) the possibility of free inversion of the subject, and 2) the possibility of extracting a subject across a that-type complementizer. For the purpose of the present analysis we will focus mainly on the data about free inversion in Italian. As far as the relation between the null subject parameter and free inversion is concerned, different authors (Gilligan 1987; Holmberg 2005; Newmeyer 2005; Nicolis 2005; Manzini & Savoia 2007; D'Alessandro 2014, among others) have shown that the null subject parameter and the free inversion of overt subjects are independent, or at least stand in a subset relation (Manzini & Savoia 1997). Children have already acquired that Italian is a null subject language (Table 1), since the D features are lexicalized at the morphological level by the inflection of the finite verb. Internal arguments, such as the subjects of unaccusatives, are more likely to be produced overtly (Table 2). But are they produced in a preverbal or postverbal position? On the one side, the postverbal subject may be read (Cinque 1993; Zubizarreta 1998) in the scope of the Nuclear Stress Rule and Focus: in a language like Italian any inverted D element closes off the focus domain.
On the other side, the lexicalization of the preverbal subject, which in Italian, by the hypothesis of Manzini & Savoia (2007) in (1), does not satisfy a syntactic requirement on the D position of the inflectional domain, corresponds to its interpretation as a topic. So while the postverbal subject receives a focused reading, the preverbal subject is included within the topic material of the sentence. When children use preverbal and postverbal subjects, they are lexicalizing the scope discourse semantic properties of topicalization and focalization respectively. Lorusso (2014) checked whether children acquire free inversion early on and whether it is linked to the verb classes and their VP shells. In Table 3 we report the overall data (Lorusso 2014) about the percentage of preverbal and postverbal subjects across verb classes. We can see that the general tendency is to produce preverbal subjects (SV) with unergatives and transitives and postverbal subjects (VS) with unaccusatives.

Table 3. General data about the distribution of postverbal and preverbal subjects across verb classes in both Italian children's and adults' spontaneous production (Lorusso 2014)

The general data are quite clear: all children and adults show a pattern of preferential SV order with unergatives and VS order with unaccusatives. Furthermore, the percentages are very similar: both children and adults use preverbal subjects in around 70% of cases when the subject is projected in the external argument position, and postverbal subjects in 65% of cases when it is projected in a direct object position. This distribution is statistically significant for children at p < 0.05 (χ² = 41.80, df = 1, p = 0.00001) and for adults (χ² = 15.95, df = 1, p = 0.00001). Preverbal topicalized overt subjects are found with all verb classes.
Unaccusatives are also produced with preverbal subjects, albeit fewer, showing that the Unique Checking Constraint (UCC) of Wexler (1999) does not apply: children are able to move the internal subject DP outside the vP domain. (For Borer & Wexler (1987) and more recently Wexler (1999) and Hirsch & Wexler (2007), children's problems with passive or raising predicates are due to a deficit in the creation of an A chain or, in more minimalist terms, children may interpret vP as a phase, so that at spell out they are not able to raise subject DPs for passives and unaccusatives. For a discussion of the problems with the A chain deficit hypothesis and the UCC with unaccusatives, see Becker (2014) and Lorusso (2014).) Postverbal focused overt subjects, once more, are found with all verb classes, but the higher number with unaccusatives suggests that these postverbal subjects may be left in situ. Following the original analysis of Belletti (1988), the position of licensing of the object (an AgrOP position) is available. The case assigned in this position is not a proper nominative but, in terms of Belletti (1988), a partitive: the verb selects an indefinite meaning for the argument in internal argument position. In more recent analyses (Belletti 1988, 2004; Bianchi & Belletti 2014) a functional projection FP in the VP periphery is assumed, independent of the I layer. This functional projection FP is a probe for the object: F agrees (as a probe) in gender and number with the internal object and is then probed by the number agreement of the finite verb in I. Due to the characteristics of the agree mechanism of this postverbal position, nominative case is not assigned, since the VP barrier blocks it. The features assigned by the FP in the VP periphery assign only an indefinite reading (6), since these postverbal subjects represent a property of the event denoted by the unaccusative verb and not a mere participant.
(6) Suddenly is entered a man / *the man / *every man from the window (Bianchi & Belletti, 2014)

Manzini & Savoia (2007, 2011) analogously propose that postverbal subjects may undergo some (in)definiteness restrictions and that, depending on the definiteness split, different patterns of agreement arise (as in Sardinian, Savoia 2005, 2007); we will come back to their analysis in section 4. In order to understand the subset relation instantiated by the different lexical parameters that have a role in the distribution of overt subjects, we now introduce a concept of informativeness that is encoded in the subject DPs and interacts with the discourse semantic interface of focus and topic: that is, person morphology. Person morphology has a preferential pattern of distribution depending on the position of the overt subjects and consequently, as seen above, on the verb class. The informative status of 1st and 2nd person vs. 3rd person interacts with the lexical parametrization of verb classes.

Person Morphology and Overt Subjects

Different authors have shown that person marking across languages follows morpho-syntactic patterns linked to the referential status of the person (Benveniste 1966; Harley & Ritter 2002; Bobaljik 2008; Manzini & Savoia 2005, 2011; Legendre 2010, among others). In this respect, it is worth remarking that languages are sensitive to the person split between 1st and 2nd singular person and 3rd person. According to Manzini & Savoia (2005, 2010, 2011), the person split, in its various manifestations, depends on the fact that the speaker and the hearer (1st and 2nd persons) are anchored directly in the universe of discourse, independently of their role within the event; on the other hand, non-participants in the discourse (3rd persons) depend directly for their characterization on the position assigned to them within the structure of the event.
So 1st and 2nd persons are discourse anchored variables: they are easily recoverable from the universe of discourse. 3rd persons are event anchored variables: they are event participants but not participants in the discourse, so they are mainly recoverable from the linguistic sentence context. In the distribution of overt subjects in Italian we therefore expect 1st and 2nd person subjects to be omitted more than 3rd person subjects, since discourse anchored participants are more easily recoverable from the discourse than 3rd person subjects. Serratrice (2005) (like Allen 2000 and Serratrice & Sorace 2003, among others) defines 1st and 2nd person overt subjects as uninformative, since they can be recovered from the discourse. 3rd person subjects are defined as informative, since there is no discourse cue to identify them. She finds very clear results: after the MLUW stage of 2.0, 3rd person (informative) overt subjects were produced twice as often as 1st or 2nd person (uninformative) subjects. We checked the same corpus and analyzed the spontaneous speech of the parents and caregivers (Calambrone corpus, Cipriani et al. 1989; CHILDES database, MacWhinney & Snow 1985). In the chart in Figure 1 we summarize the results about the production of overt subjects depending on person in the adults' spontaneous speech. Informative 3rd person subjects are produced overtly in 33% of the sentences, while uninformative ones (1st and 2nd person) are produced overtly in only 17% of the sentences.

Figure 1. General data about the distribution of overt informative (3rd person) and uninformative (1st and 2nd person) subjects in the spontaneous production of Italian adults

Then we checked whether there was any difference in the person of the overt subjects depending on the verb class. Children seem to use more informative subjects with unaccusatives.
In Figure 2 we report the data about the distribution of the person of the overt subjects across the verb classes.

Figure 2. General data about the distribution of overt informative (3rd person) and uninformative (1st and 2nd person) subjects across verb classes in the spontaneous production of Italian children and adults

While adults use more informative subjects with both unergatives and unaccusatives, children show a strong preference for 3rd person informative subjects just in the case of internal arguments. The verb class seems to influence the co-occurrence with 3rd person overt subjects. The lexical parametrization of verbs (whether they project an external or an internal argument) seems to influence the general distribution of overt/null subjects (as in Table 2) in both children and adults. However, children, not adults, use more 3rd person overt subjects just with the internal arguments of unaccusatives. But why should the argument structure of unaccusatives favor the appearance of more informative overt subjects? The answer is linked to the preferred postverbal position found for overt subjects with unaccusatives (see Table 3). We have been arguing that postverbal subjects are focalized and represent new information in a scope discourse semantic perspective. We checked the person morphology of the postverbal subjects in the spontaneous speech and we found, in fact, that both children and adults use almost exclusively 3rd person for postverbal subjects with unaccusatives, but not with other verb classes. In Figure 3 we report the data about the distribution of 3rd person postverbal subjects across verb classes in the spontaneous speech of adults and children.

Figure 3.
General data about the distribution of overt informative (3rd person) postverbal subjects across verb classes in the spontaneous production of Italian children and adults

So the preferential position of overt subjects found with the different verb classes shows that the scope discourse semantics overlaps with the aktionsart of verbs. External (agentive) arguments are more likely to be old information: they are expressed preverbally or omitted, and they are more likely to be recovered from the context. For the very same reason, in children's speech informative and uninformative persons are found at the same rate for the subjects of unergatives and transitives (Figure 2). Internal arguments are more likely to be expressed overtly and in postverbal position: they are part of the eventive structure of the verb and are strictly linked to the linguistic context; they cannot be inferred from the discourse. Both children and adults, in fact, use mainly 3rd person DPs for postverbal subjects with unaccusatives (Figure 3). Nevertheless, the 3rd person postverbal subjects are not linked only to the scope discourse semantics: other grammatical features seem to be involved. While 1st and 2nd persons are mainly definite DPs, 3rd person DPs can be indefinite. The definiteness split can have a role in explaining the pattern of early overt subjects in Italian children's spontaneous speech (which is different from adults' productions): unaccusatives are found with more 3rd person subjects than other verb classes. The next section is devoted to some data and considerations on the indefinite postverbal subjects in children's speech and to the parametric variation implied by the definiteness split, which is in a subset relation to the pro drop parameter.

(In)definiteness of the postverbal subjects

The preferential use of 3rd person overt subjects with unaccusatives is linked to the argument structure of unaccusatives.
The internal argument is part of the event expressed by the predicate: it measures out the event and determines an eventive closure (Ritter & Rosen 1998; Mateu 2002). In other words, the theme or patient arguments are 'stuck' in the eventive relation predicated by the verbal head. Postverbal subjects with unaccusatives are a crucial element in the configuration of the unaccusative verb class: their (in)definiteness plays a central role in the definition of the eventive structure. Chomsky (1995), discussing the expletive construction in a non pro drop language like English, points out that a definite associate is connected to a different interpretation than an indefinite one. Thus an indefinite associate gives rise to the typical existential reading in (7a), while a definite associate gives rise to the list interpretation, as in (7b). Furthermore, in English the expletive constructions are restricted mainly to unaccusatives.

(7) a. There is somebody outside
    b. There is John for a start

As far as Italian is concerned, postverbal indefinite subjects are possible with all verb classes, but just with unaccusatives may they represent a closure of the event denoted by the predicate. Lorusso (2014) found that children in the corpus of spontaneous speech of the earliest stage of acquisition of Italian (18-36 months) use indefinite postverbal subjects just with unaccusatives. They never use a postverbal indefinite DP with unergatives and transitives, as in Table 4, while adults do use indefinite postverbal subjects (in a few cases) also with other verb classes. Similar results are found also in a sentence repetition task (Vernice & Guasti, 2014) with older children (4;2 to 5;11 years of age): when children were presented with an unaccusative verb and an indefinite subject, they showed a preference for repeating it in a VS order. The same pattern was not found with definite subjects or with other verb classes.
The learning component of the Subset Principle, "which orders parameter values according to the subset relations of the languages that the values generate" (Manzini & Wexler, 1987, p. 414), states that children must pick the smaller subset of the language. Italian infants assume that the verb inflection introduces the D argument and satisfies the EPP principle. Then, with postverbal subjects of unaccusatives, they pick the smaller subset of the language: "...the variable introduced by the verb inflection is existentially closed [...] the identification of the variable by the argument in focus requires the argument itself to be compatible with existential quantification. An indefinite noun is straightforwardly predicted to satisfy this requirement, as it is itself in the scope of existential closure" (Manzini & Savoia 2007: 75). So children set the agree mechanism with postverbal indefinite subjects just for unaccusatives, which project internal arguments. Recall that, following also Belletti (2004) and Bianchi and Belletti (2014), these postverbal subjects represent a property of the event denoted by the unaccusative verb and not only a participant. The subset principle at work is that indefinites are allowed in postverbal position just when they denote a property of the event or are under the scope of the existential closure represented by the D properties of the verbal morphology (Manzini & Savoia 2007): that is, when they are internal arguments of the verb and they represent a predication relation rather than a chain identification relation. This kind of data follows from a real parametric option found across Romance languages. There is, in fact, a parametric variation involving null subject languages: the presence or absence of agreement of I with postverbal subjects depending on the (in)definiteness of the postverbal DP.
Manzini & Savoia (2007, 2011) report data coming from many dialects which display (some degree of) interaction between the agreement pattern and the (in)definiteness of the postverbal subject. In (8) we report on the dialect of Monreale, where definite postverbal plural subjects agree with I (8a) while indefinite postverbal subjects do not (8b). Auxiliary selection may also vary depending on the predicative relation instantiated by the indefinite subject: in the Sardinian variety of Orroli the agreeing postverbal definite subject is introduced by the be auxiliary (9a), while the non-agreeing postverbal indefinite subject is introduced by the have auxiliary (9b). Children have set the pro-drop parameter: the D properties (1) of the sentential I domain have no lexicalization in Italian other than the inflectional morphology of the verbs. Then, the argument structure of verbs influences the distribution of overt subjects: the predicative relation between unaccusative verbs and their internal argument is the only syntactic environment where children allow overt indefinite postverbal subjects, since the event expressed by the unaccusative requires an existential quantification. The argument structure of unaccusatives and the indefiniteness of DPs define a restrictive subset for the distribution of overt postverbal subjects in child Italian.

Conclusion

In this paper we accounted for the distribution of early and adult null subjects following the Lexical Parametrization Hypothesis, which states that "values of a parameter are associated not with a particular grammar but with particular lexical items" (Manzini & Wexler, 1987: 424).
From the data we have reported in the present work, we found that children set the pro-drop parameter early on, formulated as in (1) and repeated here in (10), in the sense that they do not assign the D properties of the sentential I domain to any lexical item other than the inflectional morphology of the verb itself.

(10) Lexicalization of the D properties of the sentential I domain:
a. i. by clitic (e.g. northern Italian dialects)
   ii. by clitic or noun phrase (e.g. Ladin dialects, French)
   iii. by noun phrase (e.g. English)
b. no lexicalization (e.g. Italian)

Although children acquire the pro-drop parameter early on, this does not mean that the distribution of overt subject DPs is random. The lexical parameters associated with different lexical items intervene in the creation of subset conditions which allow us to account for the distribution of overt subjects in Italian. We have collected old and new data to account for the distribution of overt subjects as a reflex of different lexical parameters that interact. The first lexical parameter at work is linked to the verb classes. When the verb projects an external argument, the omission of the subject DP is favored in children's data; conversely, when the verb projects an internal argument, subject DPs are more likely to be produced overtly (Table 2). The preverbal and postverbal position of overt subjects seems to be inherently linked also to their loci of generation within the VP shells: overt external arguments are found preferentially in an SV order, while overt internal arguments are found preferentially in a VS order in the spontaneous speech of both Italian children and adults. This pattern matches the scope discourse semantic interface requirements: preverbal subjects are topic-like information while postverbal subjects are focus-like information.
Agentive subjects found with unergatives and transitives are more likely to be omitted and recovered from the discourse than theme and patient subjects found with unaccusatives, which measure out the event and are recoverable only from the linguistic context. The data about the informativeness of the person of the subject DPs (Serratrice 2005) also confirm that the theta roles assigned to the subject by each verb influence the pattern of omission. While external arguments in children's spontaneous speech are found with both uninformative (1st and 2nd singular) and informative (3rd singular) persons, internal subjects of unaccusatives are preferentially 3rd person DPs, which are event related and not recoverable from the discourse. So informative persons are found with DPs that are focus-like: postverbal subjects with unaccusatives, in fact, are mainly 3rd person DPs (around 90%). The last lexical parameter is linked to the definiteness of the DP. Children produce indefinite postverbal subjects just with unaccusatives. In child Italian, indefinites are allowed only when they are in the scope of the D properties and are part of the eventive structure of the verb, that is, when they are not derived through event identification (Pylkkänen 2002, Kratzer 1996) as the external argument is: they measure out (Ritter and Rosen, 1998) the event denoted by the verb and they allow a mechanism of agreement which does not involve the replication of D properties on the indefinite DP; languages may vary in the agreement mechanism with postverbal indefinite DPs. Lexical parametrization seems to be a predictive and powerful mechanism to account for the acquisition of a language for two main reasons. First, because the parameters seem to be associated not with a particular grammar but with particular lexical items.
In this respect, for the distribution of overt subjects different parameters are set on lexical items: the D properties on the verb morphology allow the omission of subjects, while the verb classes (for the projection of the arguments) and the definiteness of the DP influence the pattern of distribution of the overt subjects depending on their informativeness, as it results from their morpho-syntactic properties. Last but not least, the parameters associated with each lexical item define certain syntactic domains in a given language where lexical items are allowed (or not). They allow us to identify a subset of the sentences of the language in which a given lexical item is allowed or banned. In this respect, the interaction between verb classes and (in)definiteness determines a subset within the Italian sentences. Children select the value of a parameter that generates the smallest language compatible with the data (as per the Subset Principle), so that indefinite postverbal subjects are found only with unaccusatives.
The Economic, Social, and Environmental Impacts of Generalization of Solar Water Heaters

Received: 11 October 2021; Accepted: 16 November 2021; Published: 21 December 2021. DOI: 10.32996/jefas.2021.3.2.22

This paper aims to quantify the three main aspects of sustainable development: the economic, social, and environmental impacts of the generalization of solar water heaters (SWH) in Marrakech. In order to conduct this impact assessment study, we used both quantitative and qualitative analysis. The impact assessment was carried out at three different levels: households, tourism, and private and public institutions. The generalization of SWH at the scale of the city of Marrakech would, in this perspective, be the source of a profit that is neglected today. The resulting impact, both economic and social, would be great. It also benefits the natural and sanitary environment. The direct financial impact of the generalization of SWH at the city level is around $15 million. The generalization of solar water heaters in Morocco would reduce the national energy bill by 1.3%.

Introduction

In 2015, 175 Parties (174 states and the European Union) signed the agreement to decrease greenhouse gas emissions, especially in industrial countries. It is important to mention that each country involved in the negotiations on climate change must present so-called "Nationally Determined Contributions" (NDCs): a communication on the situation of the impact of climate change. In many of these countries, solid fuels are still the dominant primary energy source; they are relatively carbon-intensive and produce local air pollution and smog. For instance, even though the sun shines all day, from sunrise to sunset, for about 10 hours in Morocco, the use of renewable energy is still low. Faced with this situation, the question of the absence of renewable energy (RE) inevitably arises, especially for a country importing almost all of its energy.
The major objective of the Paris agreement is to limit global warming to 2°C. A downward trend in the current level of greenhouse gas (GHG) emissions is the mandatory changeover. To do so, the substitution of fossil fuels with clean energies is considered the appropriate solution. To translate this solution into operational action, the UNFCCC has implemented the Kyoto Protocol. Through the CDM (Clean Development Mechanism), the latter encourages the financing of economic and social development projects with a clean energy component. Needless to say, in this context SWHs could be among the avenues to explore. Indeed, they provide hot water for domestic or industrial use without consuming fossil fuels. Their generalization at the scale of the city of Marrakech could, in this perspective, be the source of a profit that is neglected today.

Literature Review

Various studies have been conducted in several regions to assess the economic, social, and environmental impacts of renewable energy. Theoretically speaking, RE has a good reputation, and various countries widely promote it compared to conventional sources of energy. Internationally, the market for SWH has expanded significantly during the last decade (Govinda et al., 2011). It is important to stress that the cost of solar energy is higher than the cost of fossil fuels, but it is less costly than electricity (Saxena et al., 2011). The solar energy cost is calculated using the following formula:

Countries like the sub-Saharan countries, Morocco, Algeria, and India have great potential for sunshine and the good solar insolation that assists solar energy investments. However, when we look at their trade balance, we find a shocking over-dependence on fossil fuels. For instance, the energy dependency rate went from 98% in 2008 to around 93%, which is still far from the world average (leseco, 2019). Numerous developing countries are looking at renewable energy as an efficient investment.
Investment in solar energy is encouraged, as its merits include a pollution-free environment, a free and renewable energy source, high reliability, and low maintenance cost (Okoro et al., 2004). Morocco will produce 1.2 GW (1.7 million m²) from solar water heating by 2020 (source: IEA). The renewable power targets for the share of electricity generation in Morocco are 52% by 2030 and 100% by 2050.

Methodology

Succinctly, the methodology of the surplus evaluation approach implemented was structured around the following five stages:
1. Identification of hot water consumer sectors for domestic or industrial purposes;
2. Identification of the public and private partners (PPP) involved in the research study;
3. Collection and study of available documentation;
4. Setting of evaluation assumptions;
5. Finalization of these hypotheses by field surveys.

To evaluate the economic impact of the generalization of SWH in Marrakech, we focused our analysis on three main actors: households, tourism, and private and public institutions. The diagram below highlights the genesis structure of the economic surplus that is currently lost due to the lack of generalization of SWH. Five levels stand out: State, Environment, Households, Health, and Employment (especially young promoters).

State

If SWHs are generalized, a gain on the fossil energy imports no longer needed is certain (Milton et al., 2005). Reinvested for economic and social development purposes, its impact on GDP (an indicator of the country's wealth) would certainly be positive.

Environment

The generalization of SWH reduces the emission of GHG and consequently attenuates the harmful effects of climate change (Greening Benjamin, 2014). Once certified as CERs, the emission reduction units will generate income. Once reinvested, the economic multipliers would only be more efficient. The effect on GDP is certainly beneficial.

Households

All households receiving an SWH will see their monthly energy bills decrease by at least a quarter (Kakaza, M.
et al., 2015). Significant positive effects will result:
- Stimulation of demand following the improvement of income.
- Security and simplicity of SWH installation.
- Indirect effect: increased efficiency of the economic multiplier, whose impact is the improvement of GDP.

Health

The generalization of SWH avoids burning fossil fuels (coal, oil, gas, and firewood) to produce energy in other forms. The immediate result is the reduction of atmospheric pollution, whose impact emerges in the following effects.

Direct effects:
- Reduction of respiratory diseases, sensitivity, and others.
- Reduction of expenses related to these diseases.
- Elimination of fatal accidents due mainly to gas water heaters, of which whole families have been the victims.

Indirect effects:
- Gains in favour of public spending stimulate the economic multiplier, whose impact is positive on GDP.

Youth Employment

The population of young graduates in Morocco, both those with basic higher education and those with professional training, has taken on significant dimensions today and will do so even more shortly. Unemployment at their level has also reached alarming proportions (statista, 2020). Usually coming from low-income families and the middle class, they will be hit harder by poverty if solutions are not implemented to insert them into working life. Since it is a neuralgic and strategic population, particular attention should be given to it, especially since it has received university or professional training ranging from 2 to 6 years after the baccalaureate. It has great assets for self-insertion into the world of employment. In this context, the generalization of SWH is an opportunity not to be missed: the technique of SWH is simple, and the technological component it requires is rudimentary. Consisting mainly of the cutting and welding of sheet metal, it is within the reach of our young graduates, especially those with professional training.
Generalized SWH may generate the following effects:
Direct effects: creation of small and medium industrial enterprises; creation of small and medium logistics companies covering marketing, transport, packaging, installation of SWH, and after-sales service; self-integration of young graduates.
Indirect effects: the economic multiplier will be triggered through stimulated demand, taxation, etc., whose impact on GDP is obvious.
Results and Discussion
The following results take as a basis the diagram presented above, materializing the tree of direct and indirect effects that could be created following the generalization of SWH, on the one hand, and the assumptions retained after being verified by field surveys, on the other hand. Thus, in the situation where the existing housing potential of Marrakech is entirely equipped with SWH, the economic surplus which would be gained is in the following terms:
a. Impact on the State
Total import gain in $: 2 509 046
Where solar water heaters are generalized, there is no question that they will decrease fossil energy imports. The impact on GDP, a wealth indicator for the country, would certainly be improved. Morocco will gain about 2.5 million $ if SWH is generalized just in Marrakech city.
b. Impact on Environment
Gain at the level of villas: 56 700
Gain at the level of other types of housing: 189 230
Gain at the level of tourism: 20 984
Gain at the level of administrations: 1 064
Total gain in $: 267 978
The widespread use of SWH lowers GHG emissions and thus alleviates the harmful effects of climate change.
c. Impact on households
Gain in income at the level of households in "villa zones":
Total number of households: 14 000
Total consumption of households in "villa areas" (kWh): 151 200 000
Number of households concerned (50%): 7 000
Total consumption of the households concerned (kWh): 75 600 000
Percentage of gain following the installation of an SWH: 25%
Unit price of a kWh (DH): 1
Direct gain on revenue in $: 1 890 000
Income gain at the household level, "other types of housing":
Total number of households: 219 020
Total household consumption excluding villas (kWh): 315 388 800
Number of households concerned: 175 216
Percentage of gain following the installation of an SWH: 25%
Energy gain following the installation of an SWH (kWh): 63 077 760
Unit price of a kWh (DH): 1
Direct gain on income in $: 6 307 776
Houses receiving solar water heaters will see their monthly energy bills drop by at least a quarter. The generalization of this energy source would certainly improve household revenue, boost demand, and ensure safety. The gain from generalizing SWH at the household level would be around 8 million $.
d. The impact on the tourism sector
Annual nights: 6 700 000
Average consumption per night in L/24h (according to the survey): 80
Total water consumption (L): 536 000 000
Total administrations: 1 400
e. Impact on employment
Total number of SWH to be installed: 213 600
Unit price of an SWH in $: 500
Revenue: 107 million $
Over a horizon of 10 years, the annual turnover: 10 million $
Number of necessary companies (with a million $ of revenue per company): 10
Number of jobs created (if each company employs 10 positions): 100
Indirect jobs: 20
Total number of jobs created: 120
Conclusion
The main objective of this study is to analyze and quantify the impact of the generalization of solar water heaters at different levels: economic, social, and environmental.
According to the results of our assessment analysis, we find that generalizing solar water heaters in Marrakech will have a positive effect on the environment, national GDP, the balance of payments, and household revenue. The direct financial impact of the generalization of SWH at the city level is around $15 million. Reinvested at the national economy level, and adopting a multiplier of the order of 4, the indirect impact is of the order of $60 million. The total economic surplus is around $75 million; this amount represents 1.33% of the national energy bill and 0.08% of the kingdom's GDP. In other words, a dozen cities the size of Marrakech could save 13% on the energy bill if they benefited from the generalization of SWH. In addition, the generalization of SWH will reduce the unemployment rate in Morocco as it will create more job opportunities for young graduates. At the environmental level, the generalization of SWH will reduce the emission of CO2 into the atmosphere and contribute to the reduction of global warming. Furthermore, this research will raise public understanding of the importance of solar water heaters by showing their positive impacts at different levels. In addition, the research's overview will push for new paradigms that will be valuable for future discussion of the impacts of solar water heaters and may lead to a more in-depth investigation of them. In terms of recommendations, the main ones that emerged, especially during the field surveys, are expressed in the following terms:
1. The need for a political will encouraging the installation of SWH instead of traditional water heaters.
2. Take advantage of microfinance to help access SWH.
3. Create a cell or a research center working with NGOs to make this project a reality.
Funding: This research received no external funding.
Conflicts of Interest: The authors declare no conflict of interest.
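The surplus aggregation reported in the conclusion can be reproduced with a short calculation. The sketch below is illustrative only; the $15 million direct impact and the multiplier of 4 are the figures stated in the text, and the aggregation rule (indirect = direct × multiplier) is inferred from the reported $60 million and $75 million totals.

```python
# Sketch of the economic-surplus aggregation described in the conclusion.
# Inputs come from the text: ~$15M direct impact, multiplier of order 4.

def economic_surplus(direct_usd, multiplier):
    """Return direct, indirect, and total surplus in USD."""
    indirect = direct_usd * multiplier   # reinvestment (multiplier) effect
    total = direct_usd + indirect        # overall economic surplus
    return {"direct": direct_usd, "indirect": indirect, "total": total}

surplus = economic_surplus(15e6, 4)
print(surplus["total"])  # 75000000.0, matching the ~$75M reported in the text
```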
2021-12-23T16:09:40.300Z
2021-12-21T00:00:00.000
{ "year": 2021, "sha1": "875982c6887fbe6f34e574c55751a15572a7083b", "oa_license": "CCBY", "oa_url": "https://al-kindipublisher.com/index.php/jefas/article/download/2585/2253", "oa_status": "GOLD", "pdf_src": "Anansi", "pdf_hash": "dfe5cfd518a880c6d765f98306382f94634d6181", "s2fieldsofstudy": [ "Economics" ], "extfieldsofstudy": [] }
3582697
pes2o/s2orc
v3-fos-license
Effect of cobalt-mediated Toll-like receptor 4 activation on inflammatory responses in endothelial cells
Cobalt-containing metal-on-metal hip replacements are associated with adverse reactions to metal debris (ARMD), including inflammatory pseudotumours, osteolysis, and aseptic implant loosening. The exact cellular and molecular mechanisms leading to these responses are unknown. Cobalt ions (Co2+) activate human Toll-like receptor 4 (TLR4), an innate immune receptor responsible for inflammatory responses to Gram negative bacterial lipopolysaccharide (LPS). We investigated the effect of Co2+-mediated TLR4 activation on human microvascular endothelial cells (HMEC-1), focusing on the secretion of key inflammatory cytokines and expression of adhesion molecules. We also studied the role of TLR4 in Co2+-mediated adhesion molecule expression in MonoMac 6 macrophages. We show that Co2+ increases secretion of inflammatory cytokines, including IL-6 and IL-8, in HMEC-1. The effects are TLR4-dependent as they can be prevented with a small molecule TLR4 antagonist. Increased TLR4-dependent expression of intercellular adhesion molecule 1 (ICAM1) was also observed in endothelial cells and macrophages. Furthermore, we demonstrate for the first time that Co2+ activation of TLR4 upregulates secretion of a soluble adhesion molecule, sICAM-1, in both endothelial cells and macrophages. Although sICAM-1 can be generated through activity of matrix metalloproteinase-9 (MMP-9), we did not find any changes in MMP9 expression following Co2+ stimulation. In summary we show that Co2+ can induce endothelial inflammation via activation of TLR4. We also identify a role for TLR4 in Co2+-mediated changes in adhesion molecule expression. Finally, sICAM-1 is a novel target for further investigation in ARMD studies.
Research Paper: Immunology Oncotarget 76472 www.impactjournals.com/oncotarget
INTRODUCTION
Metal-on-metal (MoM) hip replacements are associated with the development of adverse reactions to metal debris (ARMD), which include inflammatory pseudotumours, soft tissue necrosis, osteolysis and resulting aseptic implant loosening. Peri-implant tissues are often infiltrated by monocytes, macrophages and lymphocytes (referred to as aseptic lymphocyte-dominated vasculitis-associated lesion, ALVAL), which is indicative of an inflammatory response. However, the cellular and molecular mechanisms that underlie ARMD are not well-understood. Co2+ from MoM implants activates human Toll-like receptor 4 (TLR4) [1][2][3], an innate immune receptor expressed on immune cells as well as endothelial and epithelial cells. The major ligand for TLR4 is lipopolysaccharide from Gram negative bacteria, and receptor activation causes adaptor protein recruitment and an intracellular signalling cascade that upregulates the activity of transcription factors including NFκB [3]. We have previously shown that activation of TLR4 by Co2+ increases the secretion of inflammatory cytokines, including interleukin-8 (IL-8) and chemokine (C-X-C motif) ligand 10 (CXCL10), in MonoMac 6 macrophages [4]. Previous studies investigating the inflammatory effects of Co2+ in endothelial cells have primarily focused on endothelial cells transfected with TLR4 and its coreceptor MD2 [3,5], but few studies have investigated the effect of Co2+ on endogenous TLR4-expressing endothelial cell lines. Endothelial cells are exposed to Co2+ present in the blood of MoM hip replacement patients [6] and therefore understanding the cellular response is important in defining the causes of ARMD and identifying potential therapeutic targets for ARMD prevention. In the present study we assessed the immune response of endothelial cells to Co2+, with a focus on the role of TLR4.
We also investigated the effect of Co2+ on adhesion molecule expression by endothelial cells and macrophages because of their critical role in inflammatory processes such as leukocyte binding and extravasation. To assess the role of TLR4 in the observed cytokine secretion, HMEC-1 were pre-incubated with 1 μg/ml CLI-095 (a small molecule TLR4 antagonist) for 6 h followed by stimulation with 0.75 mM Co2+ or 100 ng/ml LPS for 24 h. IL-8 and IL-6 secretion were measured by ELISA. Pre-treatment with CLI-095 significantly decreased secretion of both cytokines in response to Co2+ (p < 0.001), showing that their release is TLR4-dependent. (Figure legend: HMEC-1 were pre-treated with 1 μg/ml CLI-095 followed by 24 h stimulation with 0.75 mM Co2+ or 100 ng/ml LPS. C. IL-8 and D. IL-6 secretion was quantified by ELISA. All data is representative of three independent experiments.) The cytokine release was not a result of Co2+-mediated cytotoxicity as trypan blue staining revealed no change in HMEC-1 viability following cobalt stimulation (Supplementary Material, Figure 6).
Co2+-mediated TLR4 activation increases ICAM1 expression in endothelial cells and macrophages
Endothelial cells are known to express adhesion molecules, including intercellular adhesion molecule-1 (ICAM-1). (Figure legend: cells were pre-treated with 1 μg/ml CLI-095 for 6 h prior to 24 h stimulation with 0.75 mM Co2+ or 100 ng/ml LPS. RNA was isolated and cDNA synthesised by reverse transcription. ICAM1 expression was quantified by qRT-PCR. Data is representative of three independent experiments.) Co2+ induced a small but significant 3-fold upregulation in ICAM1 expression by HMEC-1 (p = 0.013) (Figure 2A) and a larger 35-fold upregulation in MonoMac 6 cells (p < 0.001) (Figure 2B). In both cell lines the response was found to be TLR4-dependent because it was inhibited by the TLR4 antagonist CLI-095 (both p < 0.001).
HMEC-1 exhibited a significant 16-fold increase in MMP9 expression following stimulation with 100 ng/ml LPS (p < 0.001) (Figure 4A). This was inhibited by CLI-095, showing that it is a TLR4-dependent effect (p < 0.001). In contrast, there was no change in MMP9 expression in response to Co2+ (p = 0.999) (Figure 4A). A similar pattern was observed in MonoMac 6 cells; following LPS stimulation there was a 7-fold increase in MMP9 expression (p < 0.001) (Figure 4B). CLI-095 inhibited this upregulated expression, showing that it is TLR4-dependent. However, there was no increase in MMP9 expression in response to Co2+, although CLI-095 decreased its expression further.
DISCUSSION
In the present study we describe a TLR4-dependent inflammatory response to Co2+ in human endothelial cells and macrophages. HMEC-1 exhibited significant increases in secretion of the inflammatory cytokines IL-8 and IL-6 when stimulated with Co2+. This was inhibited by the TLR4 antagonist CLI-095, showing that the receptor is central to the responses. Previous studies have shown that Co2+ upregulates adhesion molecule expression [8][9][10], but have not demonstrated the exact signalling pathways involved. The data obtained in this study supports the findings of these studies and also indicates a previously unidentified role for TLR4 in Co2+-mediated ICAM1 expression in both endothelial cells and macrophages. Furthermore, for the first time a soluble adhesion molecule, sICAM-1, was detected in conditioned media from Co2+- and LPS-stimulated HMEC-1 and MonoMac 6 cells. CLI-095 inhibited the sICAM-1 changes and consequently they are TLR4-dependent. We investigated the effect of Co2+ on MMP9 expression because MMP-9 can cleave membrane-bound ICAM-1, resulting in the release of its soluble form, sICAM-1.
In addition, MMP-9 can be regulated by LPS activation of TLR4 [11] and therefore it is possible that Co2+-mediated TLR4 activation results in MMP-9 activity and sICAM-1 generation. However, although LPS increased MMP9 expression in a TLR4-dependent manner, there was no change in expression in response to Co2+. The absence of any effect was consistent between HMEC-1 and MonoMac 6 cells. The lack of change in MMP9 expression following Co2+ stimulation suggests that the enzyme is not responsible for the changes in sICAM-1 secretion observed in response to Co2+. Other proteolytic enzymes potentially involved in sICAM-1 cleavage include serine proteases [12], neutrophil elastase [13], and cathepsin G [14]. However, the effect of Co2+ on these factors remains to be elucidated. sICAM-1 has previously been proposed as a marker of inflammation [15] that is cleaved to regulate inflammatory responses, but studies are now reporting a broader role for sICAM-1, including promotion of angiogenesis and neovascularisation [16]. This is of particular interest to the present study because blood vessel formation is required for pseudotumour development, which is a major factor in ARMD. Soft tissue necrosis is also a common feature of ARMD and can result from vascular inflammation restricting oxygen supply to the tissues. The ability of Co2+ to cause an inflammatory response, including pro-inflammatory cytokine release, in endothelial cells may indicate that similar effects occur in vivo, which could result in ischaemia and subsequent tissue death. A limitation of the present study is the high Co2+ concentrations that we have used to stimulate the cells. Even the concentrations at the lower end of the range are considerably higher than those detected in the serum and synovial fluid of patients with failed MoM implants [17][18][19].
However, the Co2+ concentrations used in our study are in line with those of similar in vitro studies of the inflammatory effects of metal ions [3,10,20,21]. Hence, they are appropriate and relevant for this study. A working model of the possible mechanisms indicated by our results is shown in Figure 5. In summary, we have shown that Co2+ has the potential to induce an inflammatory response in the endothelium through activation of TLR4. This study also shows for the first time that Co2+ increases sICAM-1 secretion in a TLR4-dependent manner. Although the exact mechanism of its release remains unclear, sICAM-1 is an interesting target for further investigation in ARMD because of its previously described roles in angiogenesis, neovascularisation and tumour formation [16].
MATERIALS AND METHODS
MonoMac 6 cells are a human TLR4-expressing cell line derived from acute monocytic leukaemia. Cells were cultured as previously described [22].
Cell stimulation
Cells were stimulated with cobalt chloride hexahydrate (referred to as Co2+ in this study) in complete culture medium appropriate for each cell line. Complete culture medium was used as a negative control while 100 ng/ml TLR4-specific LPS (Alexis Biochemicals, San Diego, USA) provided a positive control.
qRT-PCR
Gene expression changes were assessed by quantitative reverse transcriptase polymerase chain reaction (qRT-PCR) using TaqMan primers and probes (ThermoFisher Scientific, Massachusetts, USA). RNA was isolated using a Qiagen RNeasy Mini kit (Qiagen, Venlo, Netherlands) and cDNA synthesised using Superscript III reverse transcriptase (ThermoFisher Scientific). Each qRT-PCR reaction contained 5 μl TaqMan Gene Expression Mastermix (ThermoFisher Scientific), 2 μl diluted cDNA template, 2.5 μl nuclease-free H2O and 0.5 μl TaqMan Gene Expression Assay (ThermoFisher Scientific). No template controls with nuclease-free H2O instead of cDNA were included.
All reactions were performed in triplicate and target gene expression was normalised to GAPDH expression.
CLI-095
Inhibition of TLR4 was performed by pre-incubating cells for 6 h with 1 μg/ml CLI-095. CLI-095 (Invivogen, UK) is a small molecule TLR4 antagonist that binds to the intracellular domain of the receptor and prevents recruitment of downstream adaptor proteins.
Cytotoxicity assay
Cytotoxicity was assessed by trypan blue staining. Stimulated cells were resuspended in a small volume of supernatant and 10 μl cell suspension was mixed with 10 μl trypan blue dye. Staining was visualised on a Luna II automated cell counter (Logos Biosystems, Virginia, USA).
Statistical analysis
Statistical significance was calculated using a one-way analysis of variance (ANOVA). When samples were compared to an untreated control (Figures 1A, 1B, 3A, and 3B), Dunnett's test for multiple comparisons was performed. When comparing all samples to each other, Tukey's test for multiple comparisons was performed.
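The normalisation described above (target gene against GAPDH, fold changes relative to an untreated control) is most commonly computed with the comparative Ct (2^-ΔΔCt) method. The paper does not name its exact quantification model, so the sketch below, with invented Ct values, is only an illustration of that standard calculation:

```python
# Illustrative 2^-ΔΔCt relative quantification. The Ct values are invented;
# the paper reports fold changes (e.g. 3-fold, 35-fold) but not raw data.

def fold_change(ct_target_treated, ct_ref_treated,
                ct_target_control, ct_ref_control):
    """Fold change of a target gene vs a reference gene, treated vs control."""
    d_ct_treated = ct_target_treated - ct_ref_treated   # normalise to GAPDH
    d_ct_control = ct_target_control - ct_ref_control
    dd_ct = d_ct_treated - d_ct_control                 # relative to control
    return 2 ** (-dd_ct)

# Hypothetical triplicate-averaged Ct values for ICAM1 (target) vs GAPDH:
fc = fold_change(24.0, 18.0, 27.0, 18.0)
print(fc)  # 8.0-fold upregulation in this made-up example
```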
2018-04-03T02:58:23.437Z
2016-11-09T00:00:00.000
{ "year": 2016, "sha1": "00bafc3602bba0ce2153627554143aa8ea4d1563", "oa_license": "CCBY", "oa_url": "http://www.oncotarget.com/index.php?journal=oncotarget&op=download&page=article&path[]=13260&path[]=42061", "oa_status": "GOLD", "pdf_src": "Anansi", "pdf_hash": "00bafc3602bba0ce2153627554143aa8ea4d1563", "s2fieldsofstudy": [ "Biology" ], "extfieldsofstudy": [ "Medicine" ] }
15799496
pes2o/s2orc
v3-fos-license
Evolution of Spatially Coexpressed Families of Type-2 Vomeronasal Receptors in Rodents
The vomeronasal organ (VNO) is an olfactory structure for the detection of pheromones. VNO neurons express three groups of unrelated G-protein-coupled receptors. Type-2 vomeronasal receptors (V2Rs) are specifically localized in the basal neurons of the VNO and are believed to sense protein pheromones eliciting specific reproductive behaviors. In murine species, V2Rs are organized into four families. Family-ABD V2Rs are expressed monogenically and coexpress with family-C V2Rs of either subfamily C1 (V2RC1) or subfamily C2 (V2RC2), according to a coordinate temporal diagram. Neurons expressing the phylogenetically ancient V2RC1 coexpress family-BD V2Rs or a specific group of subfamily-A V2Rs (V2RA8-10), whereas a second neuronal subset (V2RC2-positive) coexpresses a recently expanded group of five subfamily-A V2Rs (V2RA1-5) along with vomeronasal-specific Major Histocompatibility Complex molecules (H2-Mv). Through database mining and Sanger sequencing, we have analyzed the onset, diversification, and expansion of the V2R families throughout the phylogeny of Rodentia. Our results suggest that the separation of V2RC1 and V2RC2 occurred in a Cricetidae ancestor in coincidence with the evolution of the H2-Mv genes; this phylogenetic event did not correspond with the origin of the coexpressing V2RA1-5 genes, which dates back to an ancestral myomorphan lineage. Interestingly, the evolution of receptors within the V2RA1-5 group may be implicated in the origin and diversification of some of the V2R putative cognate ligands, the exocrine secreting peptides. The establishment of V2RC2, which probably reflects the complex expansion and diversification of family-A V2Rs, generated receptors that have probably acquired a more subtle functional specificity.
Introduction
The accessory olfactory organ or vomeronasal organ (VNO) of Jacobson is a sensory structure dedicated to the detection of pheromones, which are molecules secreted or excreted by conspecifics (Tirindelli et al. 2009). The VNO first originated in a tetrapod ancestor and led to the appearance of a rudimentary structure in amphibians that became highly organized in many animal orders such as Squamata, Didelphimorphia, Rodentia, and in primates (Prosimians and New World Monkeys). In contrast, the VNO is absent in birds, bats, Old World Monkeys, Apes, and humans (Dennis et al. 2004; Grus et al. 2005; Smith et al. 2005; Shi and Zhang 2007; Zhao et al. 2011; Syed et al. 2013). In Didelphimorphia, Lagomorpha, and Rodentia, the VNO presents two distinct neuronal layers (apical and basal) characterized by the expression of different G-protein subunits and receptors (Young and Trask 2007). Apical and basal neurons project to two separate regions (anterior and posterior) of the accessory olfactory bulb. From here, the projections of the apical and basal neurons remain segregated in the amygdala and hypothalamus (Yoon et al. 2005; Martinez-Marcos 2009). The molecular organization of the rodent vomeronasal neuroepithelium into apical and basal neurons is based on the specific expression pattern of two transduction molecules, namely the G-protein subunits Gαi2 and Gαo, and two distinct groups of putative pheromone receptors, namely type-1 vomeronasal receptors (V1Rs) (Dulac and Axel 1995) and type-2 vomeronasal receptors (V2Rs) (Herrada and Dulac 1997; Matsunami and Buck 1997; Ryba and Tirindelli 1997). The vomeronasal neurons also express formyl-peptide receptors (FPRs), which are known to sense antimicrobial peptides rather than pheromonal cues (Liberles et al. 2009; Riviere et al. 2009).
Gαi2 is expressed in the apical neurons and colocalizes with V1Rs, whereas Gαo coexpresses with V2Rs in the basal neurons (Dulac and Axel 1995; Herrada and Dulac 1997; Matsunami and Buck 1997; Ryba and Tirindelli 1997). FPRs are expressed in both the apical and basal regions of the rodent VNO (Liberles et al. 2009; Riviere et al. 2009). In addition to V1Rs, V2Rs, and FPRs, the basal neurons of the VNO specifically express molecules of the nonclassical class I genes of the Major Histocompatibility Complex termed H2-Mv. The nine vomeronasal-specific H2-Mv genes are differentially expressed in subsets of basal neurons (Ishii et al. 2003; Loconto et al. 2003). Although not indispensable for generating physiological responses in VNO neurons, H2-Mv molecules are required to obtain supersensitive detection of pheromones (Leinders-Zufall et al. 2014). V2Rs differ from V1Rs in the presence of introns and in the long N-terminal extracellular region that reflects the chemical properties of their ligands. In fact, sequence analysis of V2Rs reveals that the highest intraspecies and interspecies variability is located in the extracellular N-terminal domain (Yang et al. 2005), which is believed to bind pheromones. Whereas airborne pheromones are detected by V1Rs (Boschat et al. 2002), evidence indicates that substances such as major urinary proteins, exocrine gland-secreting peptides (ESPs) and MHC peptides are candidates as V2R ligands (Loconto et al. 2003; Kimoto et al. 2005; Ishii and Mombaerts 2008; Leinders-Zufall et al. 2009; Papes et al. 2010; Ferrero et al. 2013; Sturm et al. 2013). However, only one member of the large family of exocrine gland-secreting peptides, namely the lacrimal peptide ESP1, has been unequivocally demonstrated to bind a specific V2R subtype, eliciting behavioral effects in the female mouse (Haga et al. 2010; Abe and Touhara 2014).
From the evolutionary point of view, V2R genes appeared in fish and amphibians (Shi and Zhang 2007; Young and Trask 2007; Grus and Zhang 2009; Ji et al. 2009; Francia et al. 2014). A striking variation of V2R genes occurred in terricolous species, as intact genes have only been reported in Squamata (lizard and snake), Didelphimorphia (opossum), Rodentia, and Lagomorpha (rabbit). No functional V2Rs have been identified in Carnivora (dog), Artiodactyla (cow), or primates (macaque, chimpanzee, gorilla, and human), with the exception of prosimians (loris, lemur, and tarsier) (Dong et al. 2012; Hohenbrink et al. 2012; Yang et al. 2005; Shi and Zhang 2007; Young and Trask 2007; Ishii and Mombaerts 2011). In mouse and rat, V2Rs are classified into four families, A-D (Yang et al. 2005). Receptors of family A typically represent the majority of the V2Rs (95% in the mouse) and show a strong lineage specificity, so that orthologs can be exclusively found in closely related species, in which they tend to form small but independent clades (Grus and Zhang 2008). In mouse and rat, family A further expanded, originating two distinct groups, namely subfamily A1-6 and subfamily A7-10 (Silvotti et al. 2011). Family C is the most ancient among V2R families and is typically represented by one or two genes in each species, with the exception of mouse and rat (Rodriguez 2004; Shi and Zhang 2007; Silvotti et al. 2011; Brykczynska et al. 2013). Intact family-C genes are found in prosimians, whereas Old World Monkeys, Apes, and humans possess only pseudogenes (Hohenbrink et al. 2013). In mouse and rat, family-C V2Rs expanded, originating two distinct subfamilies, namely C1 and C2. In the VNO, family-ABD V2Rs are expressed monogenically (Rodriguez et al. 1999), although the basal neurons show a multigenic expression of V2Rs. In fact, family-ABD V2Rs are coexpressed with family-C V2Rs according to a specific pattern (Martini et al. 2001; Silvotti et al. 2007, 2011).
In the rat and mouse, the expansion of family C and family A defined two populations of basal neurons. One population expresses subfamily A8-10, family-BD, and subfamily-C1 V2Rs, whereas the other population expresses combinations of subfamily A1-6 and subfamily-C2 V2Rs (Silvotti et al. 2011). In this study, we analyzed some evolutionary features of V2Rs in Rodentia. First, we characterized the phylogenetic tree of these receptors starting from the most basal species. Second, we identified when the expansion of family-A and family-C V2Rs occurred. Third, we traced the evolutionary history of the V2R-coexpressing H2-Mv genes and of the V2R putative protein ligands, ESPs. Finally, we sought a correlate between the diversification and expansion of family-C genes and their potential functions. To perform this evolutionary analysis, we partially reconstructed the V2R sequences in various rodent species, either employing the currently available databases or by PCR amplification and Sanger sequencing of genomic DNA obtained from tissue specimens of animals (T-RT3), or from the tissue library of the University of Montpellier, France (Michaux et al. 2001) (table 1).
Genomic DNA was isolated by digestion of tissues with proteinase-K and SDS followed by phenol extraction (Sambrook et al. 1989). To amplify family-A V2Rs, family-C V2Rs, and H2-Mv genes, degenerate primers were designed within conserved regions (supplementary table S1, Supplementary Material online). PCR conditions were as follows: an initial denaturation step of 5 min at 95 °C, followed by 40 cycles of 30 s at 95 °C, annealing of 45 s at 55-60 °C, and extension of 1 min at 72 °C, with a final step of 5 min at 72 °C. A second amplification of 20-35 cycles was occasionally required. The band of interest was excised from the agarose gel and purified using the Qiagen gel extraction kit. Normally, the results of four separate amplifications were pooled for cloning.
Purified products were cloned using the pGEM-T Easy Vector System (Promega) or the Topo TA cloning kit (Invitrogen). For each species, a minimum of 50 colonies were sequenced for family-A and family-C analysis.
RT-PCR
RNA was extracted from fresh tissues of C57BL mice and purified using Trizol reagent (Invitrogen, Milano, Italy). About 2 μg of total RNA served as template for oligo-dT primed first strand cDNA synthesis with ImProm-II Reverse Transcriptase (Promega, Milano, Italy). PCR was performed in a Mastercycler Personal (Eppendorf, Milano, Italy) using AmpliBiotherm DNA polymerase, 3 mM MgCl2, 0.2 mM of each dNTP, and 200 pmol forward/reverse target-specific oligonucleotide primers. Cycling parameters consisted of an initial denaturation step (95 °C, 2 min) followed by 35 cycles, each of which included a denaturation (95 °C, 30 s), a primer annealing (50 °C, 30 s), and an extension (72 °C, 30 s) step. The reaction was completed by a final extension step at 72 °C for 5 min. Semi-quantitative analysis of RNA expression was performed on agarose gel after electrophoresis using the NIS-Elements Advanced Research software (Nikon, Firenze, Italy). PCR primer pairs were designed to amplify family-C V2Rs (supplementary table S1, Supplementary Material online). The expected amplified sequences encompass exon 3 and exon 4 and correspond to the C-terminal region of the extracellular domain of these receptors.
In Situ Hybridization
The sequences encoding the family-B and the family-A6 probes of S. vulgaris were obtained by RT-PCR from VNO cDNA with the primers shown in supplementary table S1, Supplementary Material online. PCR products were cloned into the pGEM-T Easy Vector and subjected to sequence analysis. Specific antisense cRNA probes were obtained by using digoxigenin-labeled NTPs (Roche) starting from 2 μg of linearized DNA template. The reaction products were precipitated, resuspended in 200 μl of hybridization buffer, and used at a working dilution of 1:1,000.
Experiments were performed as previously described (Schaeren-Wiemers and Gerfin-Moser 1993), except that hybridization and washing procedures were performed at 63 °C unless otherwise indicated.
Bioinformatics
Primers for V2R and H2-Mv amplification were assessed for their theoretical capacity to match genomic sequences obtained from the current rodent databases. Imperfect matches with degenerate primers were determined using the Fuzznuc program of the Emboss package (supplementary tables S2 and S3, Supplementary Material online). The position of mismatches along the primer sequence was determined by parsing the Fuzznuc output with a Perl script based on the IUPAC module of the Bioperl package (ver. 1.006901) (supplementary figs. S1 and S2, Supplementary Material online). The search for genes encoding V2Rs was conducted using vertebrate genomic sequences available in the Ensembl (http://www.ensembl.org/, last accessed January 8, 2015) and Genbank (http://www.ncbi.nlm.nih.gov/genbank/, last accessed January 8, 2015) databases. An initial tblastn search with the mouse V2R sequences was performed to identify and retrieve the genomic contigs containing V2R genes in the various species. Next, complete coding sequences and pseudogenes were determined using Genewise (Birney et al. 2004) through homology comparisons with a Hidden Markov Model (HMM) of V2R proteins. The V2R HMM was constructed using the HMMER package (Eddy 1998) with an alignment of ten manually curated full-length V2R sequences from mouse and rat. The full-length protein sequences extracted with this procedure are reported in the supplementary information. Protein sequence alignments were carried out with ClustalW 2.1 (Thompson et al. 2002). DNA sequence alignments were based on protein alignments using the Transalign program (Bininda-Emonds 2005).
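The degenerate-primer matching step described above (Fuzznuc plus a Bioperl-based parser) can be illustrated with a minimal IUPAC matcher. The sketch below is not the authors' script; the primer and target sequences are invented, and only the standard IUPAC nucleotide ambiguity codes are assumed:

```python
# Minimal degenerate (IUPAC) primer matcher, illustrating the kind of test
# performed with Fuzznuc; the primer/target sequences below are invented.

IUPAC = {
    "A": "A", "C": "C", "G": "G", "T": "T",
    "R": "AG", "Y": "CT", "S": "CG", "W": "AT",
    "K": "GT", "M": "AC", "B": "CGT", "D": "AGT",
    "H": "ACT", "V": "ACG", "N": "ACGT",
}

def mismatch_positions(primer, target):
    """0-based positions where the target base falls outside the primer's IUPAC set."""
    return [i for i, (p, t) in enumerate(zip(primer, target))
            if t not in IUPAC[p]]

# 'R' matches A or G, 'N' matches any base:
print(mismatch_positions("ACRGTN", "ACAGTC"))  # [] -> perfect degenerate match
print(mismatch_positions("ACRGTN", "ACTGTC"))  # [2] -> one mismatch under 'R'
```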
Phylogenetic analysis was performed with the neighbor-joining (NJ) algorithm implemented in ClustalW, and with the maximum-likelihood (ML) method (Felsenstein 1981) implemented in the PHYML (Guindon et al. 2009) and RAxML ver. 7.7.8 (Izquierdo-Carrasco et al. 2011) programs. The SH-like algorithm and the GTRCAT substitution model were used for the PHYML and RAxML analyses, respectively, while the Kimura model was used for the NJ analysis. The validity of the Kimura model for V2R phylogeny was tested by estimating the transition/transversion ratio of mouse V2R sequences (Ts/Tv = 1.72) with the codeml program of the Paml package (Yang 1997). Trees were visualized and annotated with the FigTree program (http://tree.bio.ed.ac.uk/software/figtree/, last accessed January 8, 2015). The exon-3 and exon-5 sequences obtained by PCR amplification were clustered using the cd-hit program (Fu et al. 2012). Sequences with <1% nucleotide differences were considered to be the same sequence, which could originate from alleles of the same locus or from PCR errors (Dehara et al. 2012). Sequences obtained from molecular cloning were trimmed by excluding the primer sequences. Multiple alignments encompassing database and PCR sequences were trimmed to the same length before phylogenetic analysis. Trees reconstructed with the NJ method shown in the figures were consistent with the ML trees. The MHC tree was reconstructed with the ML method based on an untrimmed alignment. The detection of possible pseudogenes in the amplified exon-3 sequences was based on the identification of frame-shift or nonsense mutations. The putative intact sequences were compared with their closest mouse homologues with the codeml program to verify that the ratio of nonsynonymous to synonymous substitutions (dN/dS) was <1.
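The clustering criterion used above (sequences with <1% nucleotide differences collapsed into one group) can be sketched with a simple greedy pass over equal-length sequences. This is only an illustration of the threshold; cd-hit itself uses short-word filtering and is far more efficient, and the sequences below are invented:

```python
# Greedy illustration of the <1% nucleotide-difference criterion applied to
# amplified exon sequences (not cd-hit's actual algorithm; toy sequences).

def diff_fraction(a, b):
    """Fraction of mismatching positions between two equal-length sequences."""
    return sum(x != y for x, y in zip(a, b)) / len(a)

def cluster(seqs, threshold=0.01):
    """Assign each sequence to the first representative within the threshold."""
    reps, clusters = [], []
    for s in seqs:
        for i, r in enumerate(reps):
            if diff_fraction(s, r) < threshold:
                clusters[i].append(s)
                break
        else:                          # no close representative: new cluster
            reps.append(s)
            clusters.append([s])
    return clusters

seqs = ["ACGT" * 50, "ACGT" * 49 + "ACGA", "TTTT" * 50]
print(len(cluster(seqs)))  # 2: the first two differ at only 1/200 positions
```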
Immunohistochemistry

For immunohistochemistry, 2-month-old FVB mice were deeply anesthetized with pentobarbital and transcardially perfused at room temperature with a solution containing 10% saturated picric acid and 2% paraformaldehyde in PBS for 5 min, followed by 50 ml of PBS. Tissues were then dissected, decalcified in 0.5 M EDTA pH 8.0 for 48-72 h at 4 °C, and cryo-protected in 30% sucrose at 4 °C overnight. Subsequently, tissues were embedded in OCT solution (CellPath, UK) and frozen in liquid-nitrogen-cooled pentane. Cryostat-cut sections (20 μm) were treated with 0.5% sodium dodecyl sulfate for 10 min and washed in PBS prior to incubation with the primary antibody. For Vmn2r1/ChAT double staining, sections were incubated with an anti-Vmn2r1 antibody (1:100) (Silvotti et al. 2007) and an anti-ChAT antibody (1:25, developed in goat) (Ogura et al. 2011) in PBS containing 1% albumin and 0.3% Triton X-100 (blocking buffer) for 48 h at 4 °C. After washes, sections were first incubated with an Alexa488-conjugated anti-goat antibody for 2 h. After further washes, sections were preincubated with blocking buffer containing 10% goat serum for 1 h before incubation with an Alexa568-conjugated anti-rabbit antibody. For Vmn2r1/IP3R3 double staining, sections were first incubated with both the anti-Vmn2r1 antibody and an anti-IP3R3 antibody (1:100, developed in mouse) (Elsaesser et al. 2005) for 48 h at 4 °C. Staining was visualized with an Alexa488-conjugated anti-rabbit antibody (Vmn2r1) and an Alexa568-conjugated anti-mouse antibody (IP3R3). For preabsorption controls, 5 μg of each anti-V2R antibody were incubated with 10 μg of the polypeptide against which the antibody was raised. Anti-IP3R3 was purchased from BD Transduction Laboratories, anti-ChAT from Millipore, and the Alexa-conjugated antibodies from Molecular Probes-Life Technologies. Fluorescent images were obtained using a Zeiss fluorescence microscope. All experiments were carried out on rodents and involved only the painless euthanasia of animals. The experiments comply with the Principles of Animal Care (publication no. 85-23, revised 1985) of the National Institutes of Health and with the current law of the European Union and Italy. The present project was approved by the Ethical Committee of the University of Parma (approval ID: 17/14, March 27, 2014).

[Table 1 footnote: family-C and H2-Mv intact and pseudogenized (in brackets) sequences were identified by BLAST search on the GenBank whole genome shotgun (WGS), NCBI nonredundant (nr), and Ensembl (En) sequence databases, or obtained by molecular cloning of genomic DNA extracted from tissue samples (T-prefix). Cells with no values refer to not-searched sequences. The symbol "-" refers to searched but not identified sequences. Mouse and rat sequences are from Young and Trask (2007).]

Accessions

Accession numbers of the sequences used in this study are reported in supplementary file S1, Supplementary Material online, when not indicated in the figures.

Gene Nomenclature

The nomenclature of V2Rs is that proposed by Young and Trask (2007).

Results and Discussion

The four gene families (ABCD) in which rodent V2Rs are classified (Yang et al. 2005; Shi and Zhang 2007; Young and Trask 2007) show different evolutionary histories. Family-C genes were found in the shark genome (Callorhinchus milii) (Grus and Zhang 2009), whereas our analysis revealed that family-B V2Rs were already established after the separation of Squamata, in a lineage leading to Monotremata (platypus).
Family D could not be clearly identified in platypus by our analysis, but it was detected in the opossum genome (Didelphimorphia) (Young and Trask 2007) (supplementary fig. S3, Supplementary Material online). Our phylogenetic data also suggest that family-A V2Rs (V2RA) form the most recently derived family. A large gene group of V2RA was found in the armadillo (Cingulata) genome, suggesting that V2RA was established before the rodent separation from Laurasiatheria (supplementary fig. S3, Supplementary Material online). In the mouse, where a striking expansion and diversification of V2Rs occurred, V2RA reportedly includes nine subfamilies split into two groups, namely subfamily A1-6 and subfamily A8-10 (with subfamily A7 exclusively present in rat) (fig. 1A) (Yang et al. 2005). Subfamily A10 represents the basal branch of V2RA (Silvotti et al. 2011). While V2RA subfamilies show an average identity >40% among each other, the mouse A10 gene shows an average identity of <40% with the other V2RA subfamilies, which is comparable with that of the closest external V2R branch (family B) (supplementary table S4, Supplementary Material online). Moreover, in contrast to all the other V2RA subfamilies, orthologues of A10 were already present in progenitors of basal mammalian species such as elephant and tenrec (Afrotheria) (supplementary fig. S3A, Supplementary Material online). Thus, in this study, we classified this subfamily as a separate branch, which we have named Family E. Furthermore, as all family-E and family-D sequences reported in the available mammalian databases are incomplete, we reconstructed these genes in mouse and rabbit in order to establish that these V2R families included putatively functional genes (supplementary file S2, Supplementary Material online). Because we recently proposed that in Muridae (mouse and rat), V2RA and family-C V2Rs (V2RC) underwent a specific expansion giving rise to the coexpressing subfamily A1-6 and subfamily C2 (Silvotti et al.
2011), the first question we asked was when this phylogenetic event occurred during the evolution of Rodentia.

Expansion of Family-A V2Rs in Rodentia

To characterize the V2R lineages and identify the origin and expansion of subfamily A1-6, we analyzed the V2R sequences in representative species of Rodentia (fig. 2A and table 1). Due to their complex gene structure (six exons) (fig. 1B), the complete reconstruction of all rodent V2R genes is difficult to obtain. Thus, to build a correct phylogenetic tree, we asked whether single-exon sequences could be representative of the full-length genes. To answer this question, we first aligned the DNA sequences of all mouse intact V2RA and then created independent phylogenetic trees from each exon. Each tree was compared with that obtained from the alignment of the full-length V2RA sequences. Our results indicate that the phylogenetic tree reconstructed from exon 3, which encompasses most of the ligand-binding domain of V2Rs (fig. 1B and C), shows a topology very similar to that obtained by aligning the full-length V2RA sequences (supplementary fig. S4, Supplementary Material online). On these grounds, we approached the problem in two ways. In species with a draft genomic coverage (table 1), exon 3 of the V2RA genes was identified and reconstructed based on blastn similarity with V2R sequences of mouse and rat. The novel sequences obtained from each species were used as queries to conduct further blastn analyses. For rodent species without genomic coverage but important for our analysis, PCR experiments with specific primer pairs (see Methods and supplementary tables S2 and S3, Supplementary Material online) were performed to amplify exon 3 of family-A genes from tissue-extracted genomic DNA (table 1 and supplementary table S1, Supplementary Material online).
To increase the number of novel V2RA sequences that could be obtained by this approach, the genomic DNA of each species was amplified at different annealing temperatures, and the resulting amplicons were pooled and subcloned into two different vectors for sequencing. Using this molecular strategy, we were able to identify 20-25 novel sequences for each species (with the exception of Anomalurus sp. and S. vulgaris), which were considered to be sufficient for the aim of this study (table 1). To infer the evolutionary history of rodent V2RA, we performed a phylogenetic analysis of the sequences obtained by blastn search and by the cloning approach. Although we tentatively distinguished between putatively functional genes and pseudogenes on the basis of exon-3 sequences, all sequences were included in the analysis, as pseudogenes provide information on the origin of the V2RA subfamilies. The consensus topology was verified and supported by the maximum-likelihood and NJ algorithms. The V2R phylogenetic tree obtained from this analysis was compared with the most commonly accredited evolutionary tree of Rodentia (fig. 2, supplementary fig. S5, and file S3, Supplementary Material online) (Hedges et al. 2006; Blanga-Kanfi et al. 2009). The first observation from our analysis suggests that rodent V2RA were established from a few ancestral families. This is evident in I. tridecemlineatus and S. vulgaris (Sciuromorpha), which represent the basal branch of Rodentia. These two distantly related squirrel species revealed a limited number of V2RA sequences, highly pseudogenized and all clustering with subfamily A6 (table 1, fig. 2B, and supplementary fig. S5, Supplementary Material online). Hence, it is most likely that the ancestor of the rodent A1-5 subfamilies was established after the split of Rodentia from Lagomorpha (Huchon et al. 2002; Hedges et al. 2006).
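The exon-based pseudogene screen used here (frame-shift or nonsense mutations, cf. Methods) can be sketched in a few lines; the reading-frame handling is simplified relative to a real annotation pipeline, and the example sequences are invented:

```python
# Minimal pseudogene screen for an exon fragment: flag frame-shifts (length
# not a multiple of 3 in the assumed reading frame) and in-frame stop codons.
# STOP_CODONS lists the standard stop codons; example sequences are invented.

STOP_CODONS = {"TAA", "TAG", "TGA"}

def screen_exon(seq, frame=0):
    """Return a list of defects suggesting a pseudogene; empty list = intact."""
    coding = seq[frame:]
    defects = []
    if len(coding) % 3 != 0:
        defects.append("frame-shift (length not divisible by 3)")
    for i in range(0, len(coding) - 2, 3):
        if coding[i:i + 3] in STOP_CODONS:
            defects.append(f"premature stop codon at nt {frame + i}")
    return defects

screen_exon("ATGAAATAGGGC")   # → ["premature stop codon at nt 6"]
screen_exon("ATGAAAGGGTTC")   # → [] (putatively intact fragment)
```

A real screen would infer the frame by aligning the fragment to an intact homologue rather than assuming it, which is why the text calls these assignments tentative.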
A second observation shows that rodent V2RA tend to form private clades in distantly related species, or semiprivate clades (with few orthologues) in closely related species, as also reported for other lineages (Shi and Zhang 2007; Grus and Zhang 2008). The occurrence of gene duplication following species separation is a condition that makes it difficult to define orthologous relationships (Sonnhammer and Koonin 2002). Moreover, the subfamily classification is also complicated by the presence of several pseudogenes (as in Spalacidae) (fig. 2B). Thus, the inclusion of sequences into a specific mouse subfamily was based on their monophyletic grouping in the tree. In Anomalurus sp. (Anomaluromorpha), we found no evidence of V2Rs clustering with subfamily . This is indicative of a small V2R repertoire in this species, although it remains possible that Anomalurus, as previously discussed, evolved phylogenetically distant V2RA sequences which could not be amplified with our primers. From our analysis, J. jaculus (Dipodidae), Sp. leucodon, and N. galili (Spalacidae) all have receptors in the A1-5 group, and thus they are candidates to represent the species most distantly related to mouse in which this clade appeared (table 1). In these rodent families, given their crucial evolutionary position, sequences of exon 3 were obtained by both the molecular and the bioinformatic approach (supplementary fig. S6, Supplementary Material online, and table 1). Moreover, the analysis extended to mouse/rat closely related species such as M. unguiculatus, C. griseus, Mes. auratus, P. maniculatus, and Mi. ochrogaster confirmed that all these rodent species have orthologues of V2RA1-5 (fig. 2B, supplementary fig. S5, Supplementary Material online, and table 1).

Phylogenetic Origin of C-Subfamilies

Family C represents the basal V2R branch in rodents (fig. 3A).
In mouse and rat, V2RC genes underwent a process of duplication and inversion starting from a single gene that is present in most rodent and nonrodent species (Silvotti et al. 2011). Since, in Muridae, all inverted/duplicated family-C genes cluster in subfamily C2, we previously proposed that the establishment of this subfamily might coincide with this genetic event (Silvotti et al. 2011). Indeed, the analysis of the family-C locus in C. griseus (Cricetidae) supports this finding, as this species has the same genomic organization as rat and mouse (Silvotti et al. 2011). Our analysis, extended to J. jaculus (Dipodidae), suggested that duplication/inversion did also occur; here, however, the duplicated/inverted family-C gene clustered with the subfamily-C1 group (fig. 3B). This indicates either that the progenitor of the Dipodidae species independently duplicated and inverted the family-C gene, or that the establishment of subfamily C2 occurred later in rodent evolution. Thus, we supposed that the origin of subfamily C2 took place in ancestral lineages postdating the separation between Dipodoidea and Muroidea (fig. 2A). To date this phylogenetic event, we considered the family of Spalacidae (Sp. leucodon and N. galili), which represents the basal branch of the muroid lineage. Our reconstruction of the complete V2RC gene in N. galili, by blastn search against the whole genome shotgun (WGS) and NCBI nonredundant (nr) sequence databases using mouse sequences as queries (Fang et al. 2014), indicates the presence of a single V2RC gene, split into two contigs (gi|605751882, gi|605715922), clustering with subfamily C1 (supplementary file S4, Supplementary Material online). To confirm that subfamily C2 did not include spalacid sequences, we amplified the genomic DNA of Sp. leucodon with primers specific for exon 5 of rodent V2RC genes (supplementary table S1, Supplementary Material online).
The choice of exon 5 as a template for PCR amplification was the result of the sequence analysis of V2RC in all rodent and nonrodent species with a draft genome. In exon 5, we identified a single amino acid substitution that was likely to be a feature discriminating subfamily-C1 V2Rs (V2RC1) from subfamily-C2 V2Rs (V2RC2). All V2RC2 genes so far identified invariably presented a lysine at position 552 in place of the glutamine, histidine, or aspartate (Q/H/D → K552, numbering referred to mouse Vmn2r1) that are distinctive residues of all V2RC1 genes (fig. 4 and supplementary fig. S7, Supplementary Material online). The substitution is located in the cysteine-rich (CR) domain (fig. 1C), which is thought to play a role in transmitting the ligand-induced conformational change to the G-protein and in the receptor oligomerization process (Muto et al. 2007). The phylogeny as well as the sequence analysis of exon 5 in Sp. leucodon and N. galili confirm that Spalacidae did not evolve V2RC2 genes (fig. 4 and supplementary fig. S8, Supplementary Material online). Given the loss of the V2R repertoire and probably of the vomeronasal functions in these species, we cannot exclude that the spalacid progenitor inherited (and then completely lost) the duplicated/inverted V2RC1 gene which was observed in Jaculus.

Phylogeny of H2-Mv in Rodents

In rat and mouse, the coexpression of V2RC2 and V2RA1-5 in a specific subset of VNO neurons is also related to the expression of nonclassical major histocompatibility complex molecules (H2-Mv) (Silvotti et al. 2007; Ishii and Mombaerts 2011). These proteins are specifically expressed in the VNO and are phylogenetically distinct from classical major histocompatibility complex (Mhc) molecules (Ishii et al. 2003; Loconto et al. 2003). Furthermore, H2-Mv are reportedly involved in pheromone detection (Leinders-Zufall et al. 2014).
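The diagnostic residue described above lends itself to a one-line classification rule. The sketch below is purely illustrative: it assumes a protein sequence already aligned to mouse Vmn2r1 coordinates, and the residue sets come directly from the text (K for subfamily C2; Q, H, or D for subfamily C1):

```python
# Illustrative classifier based on the diagnostic exon-5 residue: subfamily-C2
# V2Rs carry lysine (K) at position 552 (mouse Vmn2r1 numbering), whereas
# subfamily-C1 V2Rs carry Q, H, or D there. `aligned_seq` is assumed to be
# pre-aligned to Vmn2r1 coordinates; the example sequences are invented.

def classify_family_c(aligned_seq, pos=552):
    residue = aligned_seq[pos - 1]  # convert 1-based biological numbering
    if residue == "K":
        return "subfamily C2"
    if residue in "QHD":
        return "subfamily C1"
    return "unclassified"

classify_family_c("X" * 551 + "K")  # → "subfamily C2"
classify_family_c("X" * 551 + "Q")  # → "subfamily C1"
```

In practice the position would be read off a multiple alignment (as in fig. 4) rather than a raw sequence, since indels shift the coordinate.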
Thus, we asked whether the phylogenetic origin of H2-Mv was correlated with that of the coexpressing V2RA1-5 or V2RC2, which, respectively, date before and after the separation of Dipodoidea and Muroidea. To establish this, we first searched for H2-Mv sequences in the rodent databases to generate a phylogenetic tree. Our tblastn analysis using rat and mouse queries against the WGS and nr databases of C. griseus, P. maniculatus, and Mi. ochrogaster revealed that orthologues of the mouse H2-Mv genes were already present in Cricetidae (C. griseus). An iterated tblastn search in the genomes of Spalacidae (N. galili), Dipodidae (J. jaculus), Heteromyidae (D. ordii), and Caviidae (Ca. porcellus) failed to identify H2-Mv sequences (fig. 5 and table 1). To further exclude the presence of H2-Mv genes in Spalacidae, which represent the basal branch of the Muroidea clade, we used a PCR approach to analyze the genome of Sp. leucodon. Degenerate primer pairs (H256 and H257) were designed based on the exon-2 sequence of H2-Mv annotated in the nr sequence databases of Mu. musculus, R. norvegicus, and C. griseus (supplementary tables S1 and S3, Supplementary Material online). As expected, these primers successfully amplified H2-Mv sequences in these species, but they failed to generate amplicons in any of the other rodent species, including Sp. leucodon and J. jaculus (supplementary fig. S9, Supplementary Material online). We also designed less degenerate primers (H255) that matched the most conserved H2-Mv regions of exon 4 but were also predicted to match sequences of some Mhc type-I subclasses different from H2-Mv (supplementary tables S1 and S3, Supplementary Material online). Control experiments indicated that our primers indeed amplified both mouse H2-Mv and Mhc sequences. As expected, primers H255 generated amplicons in Sp. leucodon (and in all of the other rodent species) that were cloned and sequenced (supplementary fig. S9, Supplementary Material online).
The analysis of 86 clones yielded 16 different sequences, all encoding Mhc molecules phylogenetically distinct from H2-Mv (supplementary fig. S10 and file S5, Supplementary Material online). Thus, our data strongly support the hypothesis that H2-Mv coevolved with V2RC2 genes in an ancestor of the Cricetidae species.

Differential Expression Pattern of Subfamily-C1 and -C2 V2Rs in Extra-Vomeronasal Tissues

One important issue is understanding why family C has diversified and expanded to establish the phylogenetically recent subfamily C2 in Muridae and Cricetidae species. Although V2RC are typically defined as vomeronasal receptors, they are not phylogenetically linked to the origin of the VNO. Fish, which did not develop this organ, express V2RC genes in the main olfactory epithelium (MOE) (DeMaria et al. 2013). Interestingly, V2RC genes are exclusively expressed in the MOE of amphibians although they have a functional VNO (Syed et al. 2013). In phylogenetically more recent species such as mouse and rat, a V2RC1 gene, but no other V2R, was detected in the olfactory-related sensory cells of the Grueneberg ganglion (Roppolo et al. 2006; Fleischer et al. 2007). All these observations suggest that V2RC may subserve broader chemosensory functions, possibly common to many species (Mamasuew, Hofmann, Breer, et al. 2011; Mamasuew, Hofmann, Kretzschmann, et al. 2011; Liu et al. 2012; Hanke et al. 2013). On these grounds, we hypothesized that subfamily-C2 establishment and expansion were related to a functional requirement of vomeronasal specificity for V2RC. Thus, taking the mouse as a model, we analyzed V2RC expression in different tissues. To achieve this purpose, we reverse-transcribed mouse RNA from different tissues and amplified the cDNA with primers specific for each family-C gene (Vmn2r1-7). Vmn2r1 (subfamily C1) amplicons were identified in all tissues we tested, including the MOE (figs. 6A and 2B).
In contrast, Vmn2r2 (subfamily C2) expression was only revealed in cerebellum, subcortical regions, and lung (fig. 6B), whereas Vmn2r6/7 (subfamily C2) expression was exclusively detected in lung. No expression of the Vmn2r3 (subfamily C2) gene was identified in any of the tested tissues (fig. 6B). All PCR products were subcloned for sequence confirmation. We also performed PCR reactions with different cDNAs and primer pairs that matched family-ABDE V2R sequences. No PCR products of the predicted molecular weight were detected in any of the tested tissues. Since V2RC amplicons were detected in different tissues, we assayed them for protein expression by immunohistochemistry using antibodies against family-C V2Rs. By staining tissue sections with an antibody raised against Vmn2r1, we found specific labeling of a cellular subset located in the upper part of the MOE, lining above the sustentacular cell layer. Both olfactory and tracheal Vmn2r1-positive cells appeared equipped with microvilli; therefore, we asked whether they corresponded to the microvillous cells previously described by different authors (Elsaesser et al. 2005; Lin et al. 2008; Krasteva et al. 2011).

[FIG. 4 caption: Multiple alignment of family-C V2Rs in rodents. The amino acid substitution (Q/H/D → K) in exon 5 that differentiates subfamily-C1 from subfamily-C2 V2Rs is encased by a red and a blue rectangle, respectively. The position of this residue in the 3D structure of the protein is indicated in figure 1C with an arrow. For species abbreviations refer to Methods.]

A first subset of olfactory microvillous cells was shown to express molecules of the phosphatidylinositol trisphosphate pathway such as PLC-beta-2, the type III inositol 1,4,5-trisphosphate receptor (IP3R-3), and the transient receptor potential channel C6 (TRPC6) (Elsaesser et al. 2005). These cells respond to specific odorants and are probably involved in the processes of neural regeneration of the MOE (Montani et al. 2006; Hegg et al. 2010; Jia et al. 2013).
To test the correlation between this microvillous cellular subset and our Vmn2r1-positive cells, we performed double-label immunohistochemistry with an antibody raised against IP3R-3. IP3R-3 staining revealed that 90% of the Vmn2r1-positive cells did not show IP3R-3 immunoreactivity at detectable levels (fig. 6D). Thus, the microvillous cells described by Elsaesser et al. (2005) did not express the vomeronasal receptor Vmn2r1. A second subtype of microvillous cells, identified in the olfactory and tracheal epithelium, was reported to express the transient receptor potential channel M5 (TRPM5) (Lin et al. 2008; Krasteva et al. 2011). These cells are cholinergic, as they also express the signature markers choline acetyltransferase (ChAT) and the vesicular acetylcholine transporter (VAChT) (Ogura et al. 2011). The olfactory TRPM5/ChAT/VAChT-expressing microvillous cells are believed to respond to xenobiotic chemicals or to thermal stimuli, releasing acetylcholine to modulate the activity of the olfactory sensory neurons (Ogura et al. 2011). In contrast, in the trachea, microvillous cells expressing ChAT are solitary chemosensory cells that also express bitter receptors and sense bitter compounds via a cholinergic pathway initiating an aversive reflex (Krasteva et al. 2011). For this reason, we assessed whether these two cellular subsets may also express Vmn2r1. Double-label immunohistochemistry with anti-Vmn2r1 and anti-ChAT antibodies clearly indicated that these molecules indeed colocalize (fig. 6D and supplementary fig. S12B, Supplementary Material online). Thus, V2RC1 expression in the olfactory and tracheal epithelium is associated with a specific subset of excitable nonneuronal cells that respond to sensory stimuli different from typical pheromonal stimuli (Ogura et al. 2011). In contrast, all V2RC2 appeared to be exclusively expressed in sensory neurons of the VNO.
Thus, the expansion of V2RA1-5 genes in muroid species probably required a more subtle functional specificity for the coexpressing V2RC.

Conclusions

In this study, we have analyzed the evolutionary history of V2Rs in rodent species, supported by the analysis of their expression pattern. We first observed that the last rodent common ancestor exhibited a very small repertoire of V2Rs, probably restricted to one or a few receptors for each of the ABCDE families (table 1). This minimal repertoire remained almost unmodified in the extant species of Sciuridae (basal branch of Rodentia) such as I. tridecemlineatus and S. vulgaris (table 1). Recently, a correlation was proposed between the reduction in the number of V2R-expressing neurons in the VNO and the occurrence of sexual dimorphism in mammalian species (Suarez et al. 2011). Our observations do not support this hypothesis, as we found a similarly limited V2R repertoire in both the dimorphic species I. tridecemlineatus (ground squirrel) and the monomorphic species S. vulgaris (tree squirrel) (Schulte-Hostedde 2007; Mateju and Kratochvil 2013). In this latter species, we also observed a striking reduction of the V2R-expressing neuronal layer by in situ hybridization and immunohistochemical studies on the VNO (supplementary fig. S13, Supplementary Material online). In addition, as inferred from the annotation in the nr databases, rodent species that are reportedly dimorphic, such as Octodon degus and Chinchilla lanigera (Hystricomorpha) (Lammers et al. 2001; Schulte-Hostedde 2007; Suarez and Mpodozis 2009), have evolved a consistent repertoire of V2Rs (table 1 and supplementary file S1, Supplementary Material online). Thus, in contrast to V1Rs, where a correlation exists between the size of this receptor group and adaptive situations (Wang et al. 2010), the functional significance of the expansion or loss of V2Rs in mammalian species is not clearly established.
Yet, it is noteworthy that the majority of the identified V2RA sequences (exon 3) of Sp. leucodon and the whole V2R repertoire (with the exclusion of V2RC) of N. galili are characterized by pseudogenes (table 1). Because the Spalacidae family only includes fossorial species, it is possible that the constrained sociosexual behavior in the subterranean environment resulted in the loss of some, if not all, of the vomeronasal functions (Smith et al. 2007). In line with this hypothesis, a strong pseudogenization of V2RA genes is also evident in H. glaber, another burrowing rodent of the Hystricomorpha suborder (table 1). Our study also elucidates that the phylogenetically recent V2RA1-5 group was established in myomorphan species, most likely from an ancestral receptor of subfamily A6, after the split of Anomaluridae (Anomaluromorpha) and Dipodidae, an event that occurred approximately 76 million years ago (MYA) (fig. 7) (Hedges et al. 2006). In contrast, the origin of the coexpressing (in rat and mouse) V2RC2 and H2-Mv occurred in a common ancestor of Cricetidae and Muridae, hence after the appearance of the V2RA1-5 genes (fig. 7) (Hedges et al. 2006). This finding may have interesting implications for the regulatory mechanisms underlying V2R gene expression, which are believed to act according to a temporal succession in the developing VNO neurons. Since, in mouse, a single V2RA1-5 gene is expressed first and drives the expression of a V2RC2 gene (Ishii and Mombaerts 2011), it is conceivable that in J. jaculus (Dipodidae), which possesses V2RA1-5 but not V2RC2 genes, different regulatory mechanisms (genes encoding transcription factors, for example) have evolved to drive the coexpression of subfamily A1-5 with the extant subfamily-C1 genes. The obligatory coexpression of V2RC with non-V2RC genes in the VNO also predicts a common evolutionary history for these two groups of receptors.
Indeed, we generally observed that the incidence of pseudogenization in non-V2RC genes highly correlates with that of V2RC genes in most mammalian species, with the exception of N. galili (Spalacidae), whose V2R repertoire, according to our analysis, is limited to one intact subfamily-C1 gene (supplementary fig. S3, Supplementary Material online). The presence of a single V2R gene, as observed in Spalacidae, could also be explained by the loss of the VNO basal layer in this species and the expression of V2RC1 in tissues different from the VNO, as reported in this and previous studies (Fleischer et al. 2007; DeMaria et al. 2013; Syed et al. 2013). Our phylogenetic analysis within the A1-5 group shows that subfamily A1-2 V2Rs (V2RA1-2) are first found in Dipodidae (J. jaculus) and Spalacidae (Sp. leucodon and N. galili), whereas subfamily-A3 and subfamily-A5 V2Rs (V2RA3 and V2RA5) are detected in Cricetidae (C. griseus, Mes. auratus, P. maniculatus, and Mi. ochrogaster). Finally, subfamily A4 probably evolved in murid species (rat and mouse) (supplementary fig. S5 and table S5, Supplementary Material online). As largely reported, V2Rs are supposed to primarily respond to peptide or protein pheromones (Chamero et al. 2007; Leinders-Zufall et al. 2014). However, the only ligand which was unequivocally demonstrated to bind a specific V2R is a member of the ESP family, namely ESP1 (Kimoto et al. 2005, 2007). The mouse receptor for ESP1, V2Rp5 (Vmn2r116)

In conclusion, the data presented here contribute to clarifying the evolutionary history of V2Rs in the order Rodentia, starting from the most basal species and focusing on the diversification and expansion processes underlying these receptors in the superfamily Muroidea, which includes the most speciose families in the animal kingdom. Moreover, functional correlations have been highlighted between the expansion of these receptors and their expression in the VNO and in other tissues.
This evolutionary analysis will also provide useful information for the search and identification of protein ligands for V2Rs.
Insights into drivers of mobility and cultural dynamics of African hunter–gatherers over the past 120 000 years

Humans have a unique capacity to innovate, transmit and rely on complex, cumulative culture for survival. While an important body of work has attempted to explore the role of changes in the size and interconnectedness of populations in determining the persistence, diversity and complexity of material culture, results have achieved limited success in explaining the emergence and spatial distribution of cumulative culture over our evolutionary trajectory. Here, we develop a spatio-temporally explicit agent-based model to explore the role of environmentally driven changes in the population dynamics of hunter–gatherer communities in allowing the development, transmission and accumulation of complex culture. By modelling separately demography- and mobility-driven changes in interaction networks, we can assess the extent to which cultural change is driven by different types of population dynamics. We create and validate our model using empirical data from Central Africa spanning 120 000 years. We find that populations would have been able to maintain diverse and elaborate cultural repertoires despite abrupt environmental changes and demographic collapses by preventing isolation through mobility. However, we also reveal that the function of cultural features was also an essential determinant of the effects of environmental or demographic changes on their dynamics. Our work can therefore offer important insights into the role of a foraging lifestyle on the evolution of cumulative culture.
Introduction

Despite being less genetically diverse than all our Great Ape relatives, humans are able to inhabit every terrestrial habitat of the planet [1]. This unique adaptive ability has been largely explained by our capacity to rely on cumulative culture for survival [2]. Culture is a second inheritance system that parallels and interacts with the genetic system, generating most of human population diversity [3-5]. Cultural variation and innovations accumulate in populations throughout time, allowing complex cultural adaptations to evolve [6,7]. An important body of theoretical [2,8] and experimental [9,10] work has highlighted the role of changes in the size and interconnectedness of populations in determining the persistence, diversity and spatial scale of material culture [6,10-12]. Moreover, predictions derived from these studies have been increasingly used to try to explain historical patterns of cultural change and even the appearance of modern human behaviour and cumulative culture [13-16]. For example, it has been proposed that an increase in population interconnectedness may be responsible for the differences between the Upper and the Middle Palaeolithic archaeological records: while the high connectivity of the Upper Palaeolithic could have stabilized technological volatility, decreasing the risk of technological losses and increasing demographic robustness, the opposite would have applied to the more fragmented and unstable social networks characterizing the Middle Palaeolithic [16-18]. Nonetheless, other studies have yielded extremely contradictory results [19,20].
For the most part of our evolutionary history, human populations were exclusively composed of hunter-gatherer communities [21]. The mobile nature of these groups is one of the main strategies by which human foragers adapt to changing environments [22–25]. Therefore, changing mobility patterns were almost certainly a key factor influencing the population and cultural dynamics of early members of our species [15,20,26]. However, we are still lacking both formal models and empirical studies that allow an explicit examination of how changing local environmental conditions may interact with social, demographic or geographical factors to determine hunter-gatherer mobility patterns. This has also hindered our ability to predict mobility patterns in the past, and their implications for the emergence and distribution of cumulative culture over our evolutionary trajectory [15,20,26].

In order to overcome these issues, here we develop a spatio-temporally explicit agent-based model (ABM) to examine, on the one hand, the socio-ecological drivers of hunter-gatherer demographic and mobility patterns, and on the other, how changes in such factors over our evolutionary history could have affected the ability of members of our species to invent, exchange and accumulate complex culture. By modelling separately demography- and mobility-driven changes in interaction networks, we can assess the extent to which cultural transitions and patterns of cultural diversity are driven by different types of population dynamics. We then validate our model and test our predictions using empirical data from a real-world setting in Central Africa that is home to some of the largest, most resilient and most genetically diverged hunter-gatherer populations in the world, with a history of habitation of their current ecological niche dating back at least 120 000 years (and possibly much longer) [27–30]. This allows us to explicitly ask how past environmental changes in this region would have altered the
demographies of hunter-gatherer populations, their mobility patterns, as well as their ability to interact with one another and subsequently develop, accumulate and exchange different types of cultural traits. Furthermore, by exploring which aspects of hunter-gatherer social structures are required for the emergence and maintenance of complex, cumulative culture, we can offer insights into the adaptive nature of a foraging lifestyle.

Concretely, we set out to address the following outstanding questions:

1. Theoretical models regarding the effects of demography on cumulative cultural evolution often exclusively focus on traits, such as technology, for which complexity is associated with increased efficiency (and therefore greater pay-offs) [13,14]. In these cases, sequential innovations of greater complexity might accumulate more readily at higher population sizes. It has been argued that when complexity is not associated with greater advantages, as in the case of folk-tales or other stylistic traits, demographic fluctuations might not necessarily affect the cultural repertoire of populations [31,32] (but see also [33]). We therefore model both types of traits (see §2.3) and ask: do demographic fluctuations exert a similar effect on them?

2. The two main factors that have been proposed to mediate the effect of demography on cumulative cultural evolution have been cultural loss and cultural innovation [13,14,34]. If there are too few people, (adaptive) innovations are unlikely to emerge. However, provided that they emerge, they are also more likely to be lost completely by chance. Therefore, we ask: what is the relative role of innovation and cultural loss in determining the diversity and complexity of the cultural repertoires of populations?

3. Cultural evolutionary studies have remarked that the size of the population of agents innovating and exchanging culture does not only depend on its census size, but also on the tendency or ability of its members to interact with one another, i.e. on its social structure [34,35]. Given that hunter-gatherers live in small camps, it has been proposed that their high mobility in the form of frequent changes in camp residence, migration, seasonal gatherings between individuals belonging to different regional groups or even long-range trade or exchange of gifts over hundreds of kilometres [6,36–38] might act as a mechanism for maintaining high levels of cultural complexity and diversity even at low population sizes/densities. Hence, lastly, we ask: could mobility have compensated for demographic collapses during periods of environmental hardship, allowing Central African hunter-gatherers to maintain high levels of cultural complexity and diversity?

royalsocietypublishing.org/journal/rsos R. Soc. Open Sci. 10: 230495

2. Model formulation

We formulate our model on the micro-scale as an ABM, resembling the approaches described in [39–42]. The basic setting considers a set of n agents that follow rules for their spatial movement and social interaction that govern cultural transmission. An agent α represents a hunter-gatherer camp that at every point in time t has a position X_α(t), representing the location of the camp (agent) in terms of latitude and longitude coordinates within a study area; a camp population D_α(t), representing the number of people who live in the camp (agent); and a cultural status S_α(t), representing a cultural tradition or technology of the camp (agent) (see §2.3). Thus, the state of the agent α at time t is given by the collection of all three variables, and the system state Y(t) by the collection of the state variables of all agents.
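As an illustration of the state variables just defined, the following minimal Python sketch represents a camp (agent) by its position X_α(t), population D_α(t) and cultural status S_α(t). The class and function names, and the default of c = 3 features, are illustrative only and are not taken from the authors' implementation.

```python
# Illustrative agent state, assuming c = 3 cultural features per camp.
from dataclasses import dataclass, field

@dataclass
class Camp:
    x: float            # longitude-like coordinate of the camp position X_alpha(t)
    y: float            # latitude-like coordinate
    population: float   # number of people in the camp, D_alpha(t)
    status: list = field(default_factory=lambda: [0, 0, 0])  # S_alpha(t)

def system_state(camps):
    """Y(t): the collection of the state variables of all agents."""
    return [(c.x, c.y, c.population, tuple(c.status)) for c in camps]

camps = [Camp(0.0, 0.0, 25.0), Camp(1.5, -0.5, 40.0, [2, 0, 1])]
```

A call to `system_state(camps)` then yields the full system state Y(t) at the current time.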
We model the mobility of agents as a diffusion process in a landscape, where agents change their position according to the suitability of their physical environment and in reaction to the movements of the other agents. Note that the mobility process does not describe the mobility of the camp members, but the relocation of the camp position, i.e. 'residential mobility', which is one of the defining features of hunter-gatherers around the world and has been proposed to have important implications for cultural transmission and evolution [12,22,23,43–45]. The demographic changes of the population D(t) are modelled deterministically, with population growth and decline depending on the carrying capacity of the landscape as well as the system state Y(t). To model cultural dynamics and hence track cultural evolutionary processes, we use a common adapted version of Axelrod's definition [46], in which culture is defined to be a set of attributes that are subject to social influence [34,47]. The culture of an agent (a camp of hunter-gatherers) consists of some number of these attributes, referred to as cultural features, each of which can take a number of values (or traits). This results in agents being monomorphic for each cultural feature. We consider a finite number of c features, each represented by an integer value. A feature could be knowledge about a certain kind of tool or technique, but also a shared belief, song, story or societal value (see §2.3).

Mathematical modelling of agents' mobility

In our model, the movement of the agents, i.e.
the relocation of camp positions, is governed by environmental influences, interaction with other agents and possible unknown influences. Formally, these dynamics are generated by a stochastic differential equation (SDE), such that the movement of every agent α is given by

dX_α(t) = −∇(V(X_α(t), t) + U_α(X(t))) dt + σ(X_α(t)) dB_α(t),   (2.1)

where V is the time-dependent suitability landscape of the environment, U_α the interaction force between agents, σ the friction-dependent scaling function for the noise and B_α(t) a standard Brownian motion. A detailed description of all mathematical formulations for the mobility dynamics is given in the electronic supplementary material, §1.2.

Environmental influence

We account for possible environmental factors by constructing a time-dependent suitability landscape V that determines which areas of the domain are attractive for agents [40]. For the construction of the suitability landscape V(·, t) at time t, we use a bio-climatic environmental niche model (ENM) from which we derive the likelihood of a hunter-gatherer camp being present at each position throughout the study area (see [27] for further details on the construction and validation of suitability landscapes). Higher suitability values from the ENM in a particular area correspond to a higher attractiveness of that area for hunter-gatherers (see §3 for details).
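The SDE (2.1) can be discretized with an Euler–Maruyama scheme of the kind the paper later mentions using. The sketch below uses a toy quadratic suitability potential, omits the interaction term U_α, and uses placeholder step sizes; everything here is a stand-in for illustration, not the calibrated model.

```python
# One Euler-Maruyama step for the mobility SDE (2.1), under a toy potential.
import math
import random

def grad_V(x, y):
    # Toy suitability potential V(x, y) = (x^2 + y^2) / 2,
    # so the drift -grad(V) pulls agents towards the origin.
    return (x, y)

def em_step(x, y, dt=0.01, sigma=0.5, rng=random):
    gx, gy = grad_V(x, y)
    # drift: -grad(V + U); the interaction term U is omitted in this sketch
    # diffusion: sigma * dB, with dB ~ Normal(0, dt)
    nx = x - gx * dt + sigma * math.sqrt(dt) * rng.gauss(0, 1)
    ny = y - gy * dt + sigma * math.sqrt(dt) * rng.gauss(0, 1)
    return nx, ny
```

With `sigma = 0` the step reduces to plain gradient descent on the suitability potential, which makes the drift term easy to check in isolation.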
Social interaction

The mobility of the agents is also influenced by the positions of other agents, such that agents generate an interaction potential U that is similar to physical models for inter-atomic forces [48]. Intuitively, these interaction forces between agents represent the trade-off between avoiding isolation, in order to benefit from social interactions for material or cultural exchanges, and avoiding conflicts over territories or scarce resources [22,23,25,49]. In our model, interaction forces are constructed such that agents maintain a minimum distance from each other, in order to avoid the overlap of their foraging areas (i.e. areas regularly used for subsistence activities by their members), as has been reported in the literature [12,22,49].

Stochastic effects

The stochastic part of the mobility dynamics is represented by a scaled Brownian motion, and prevents the system from becoming stationary even if every agent has found a position with high suitability and with enough distance from other agents. The scaling of the Brownian motion determines how fast camps can travel, and hence it is dependent on the friction of the terrain. We define a scaling function σ such that higher friction of the terrain implies a lower scaling of the Brownian motion. For environmental barriers, e.g. a cliff or a mountain, the scaling function will take values close to 0, such that it is very unlikely that an agent moves through impassable terrain.

Cultural evolution of agents

For changes in the cultural status of agents, we consider two different event types: (i) events that result from intrinsic processes of agents, e.g. the development of a new innovation or the loss of knowledge, modelled by first-order status changes, and (ii) events that result from exchange or social learning between agents, e.g.
the transmission of a trait from one camp to another, modelled by second-order status changes. Transmissions between two agents are governed by the agents' spatial proximity, which is captured in the structure of a time-dependent interaction network. Additionally, we consider two types of cultural features that have been modelled in the cultural evolutionary literature: progressive and non-progressive. The former represent tools whose efficiency and pay-offs increase with an increasing number of component elements, which means that they tend to evolve towards greater complexity [31]. Moreover, these types of tools must be adaptive for the extraction of available resources in specific environments, which may potentially promote high-fidelity copying to prevent 'maladaptive errors' and bias the transmission of certain functional (i.e. more elaborate) variants [50,51]. On the other hand, non-progressive cultural features represent those cultural domains not subject to ecological pressures and where complexity is not necessarily associated with greater pay-offs. Examples might be songs, folk-tales, ornaments or other non-technological traits, but the types of cultural dynamics most appropriate for particular cultural features may vary across settings [52–54]. For a mathematically rigorous definition of the cultural evolution of agents, we refer to the electronic supplementary material, §1.3.

Interaction network

We assume that, for an interaction between two agents to occur, spatial proximity is required, and construct a network from the agents' positions X(t), where two distinct agents α and β are adjacent at time t if their Euclidean distance is smaller than a specified interaction radius r, i.e. ‖x_α(t) − x_β(t)‖ ≤ r. In our model, we assume more frequent interactions, i.e.
a higher interaction rate w_1, in the close neighbourhood of the agents, compared with long-distance interactions that are less frequent with rate w_2. We thus choose an interaction radius r_1 > 0 for defining short-range and another interaction radius r_2 > r_1 for defining long-range interactions between agents. Based on these assumptions, we define for each time t an adjacency matrix A(t) that represents the time-dependent weighted interaction network between agents, given by

A_αβ(t) = w_1 if ‖x_α(t) − x_β(t)‖ ≤ r_1,
A_αβ(t) = w_2 if r_1 < ‖x_α(t) − x_β(t)‖ ≤ r_2,
A_αβ(t) = 0 otherwise.   (2.2)

In figure 1, we plot a part of the interaction network from one simulation at time 80 000 BP.

Progressive cultural features

In the case of progressive features, we consider only gradual changes of trait values, i.e. depending on the type of event, an agent either increases or decreases its trait value by 1. There is no upper bound on possible trait values, in order not to artificially limit the possibilities for cultural diversity and complexity. The minimum trait value is 0, which can be interpreted as the lack of knowledge about variants of a particular feature.

First-order status changes. We choose constant rates γ_i, λ_i > 0 for the events of cultural innovation and cultural loss in each feature i, and assume no additional dependence on the suitability at the camp position or on the trait value itself (with the exception that while an agent is assigned the minimum trait value in a feature, the respective rate for cultural loss is set to zero).
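The weighted network of (2.2) can be sketched directly: weight w_1 within the short-range radius r_1, w_2 between r_1 and the long-range radius r_2, and 0 otherwise. The radii and weights below are placeholder values, not the calibrated parameters from the paper's table 1.

```python
# Sketch of the weighted interaction network A(t) defined in (2.2).
import math

def adjacency(positions, r1=20.0, r2=100.0, w1=1.0, w2=0.1):
    n = len(positions)
    A = [[0.0] * n for _ in range(n)]
    for a in range(n):
        for b in range(a + 1, n):
            d = math.dist(positions[a], positions[b])
            if d <= r1:
                A[a][b] = A[b][a] = w1    # short-range: frequent interactions
            elif d <= r2:
                A[a][b] = A[b][a] = w2    # long-range: rarer interactions
    return A
```

The matrix is symmetric and has a zero diagonal, since only pairs of distinct agents interact.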
Second-order status changes. For cultural transmission between agents, we use the interaction network A(t) to define the rates of each possible event depending on connectivity. We assume that a trait value represents a number of known variants of a cultural feature, and thus a difference in trait values can be interpreted as one of the agents being more knowledgeable than the other. Assuming that each variant can be learnt independently, we make the additional assumption that the greater the difference in trait values between two agents, the more likely it is that an interaction between the two agents will lead to an adoption of knowledge in that feature by the agent with the lower value. This is analogous to people learning from more knowledgeable or skilled individuals [55,56]. An agent can only increase its trait value through interaction if it comes into contact with an agent that has a higher trait value in that feature. As long as there are neighbours with higher trait values, there is a positive rate for an adoption event leading to an increment in the trait value.

Governed by the rules for the dynamics of progressive cultural features, in figure 2 we plot two consecutive snapshots of one simulation run around 110 000 BP. Here, we consider a case of c = 3 features and colour the agents according to their cultural status, such that each trait value denotes one component of the RGB colour vector. Thus, in the resulting plot, agents of a similar status are coloured with a similar colour. By construction, brighter colours indicate higher trait values, while darker colours (including black) indicate lower trait values.

Non-progressive cultural features

For non-progressive cultural features, the numerical values of the possible traits are unordered and not associated with a specified complexity. In this case, we define the number of possible trait values as finite, and all switches between different trait values are possible.
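The adoption rule for progressive features described above (an agent can only gain knowledge from a more knowledgeable neighbour, with a likelihood that grows with the knowledge gap) might be sketched as follows. The linear-in-difference rate is an assumption for illustration; the authors' exact functional form is given in their electronic supplementary material.

```python
# Hedged sketch of a second-order adoption rate for one progressive feature:
# the rate for agent a sums, over its neighbours, the edge weight times the
# knowledge gap, counting only neighbours that know strictly more.
def adoption_rate(a, traits, A):
    rate = 0.0
    for b, w in enumerate(A[a]):
        if w > 0 and traits[b] > traits[a]:
            rate += w * (traits[b] - traits[a])
    return rate
```

When no neighbour is more knowledgeable, the rate is zero, matching the rule that an agent can only increase its trait value through contact with a higher-valued agent.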
First-order status changes. We choose a constant rate γ_i for spontaneous changes of trait values in each feature, which does not depend on the suitability of the agent position or on the trait value itself.

Second-order status changes. Similar to the progressive case, we assume that interactions between agents require spatial proximity, and thus consider the same weighted network defined by the adjacency matrix A(t). The dynamics on the interaction network, however, differ from those in the progressive case in that agents copy traits of neighbouring agents in a similar fashion to models of opinion dynamics [46]. As long as there are neighbours with a different trait value in some feature, there is a positive rate for a status change event, where an agent copies the trait value of a neighbour.

Demographics

For each agent (i.e. camp), we model growth and decline of its population vector D(t) as a deterministic process that depends on the local carrying capacity of the environment and the local population. We assume constant rates for exponential population growth and decline, and do not explicitly model microscopic processes within the agents.
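The opinion-dynamics-style copying event for non-progressive features described above can be sketched as follows: an agent with at least one neighbour holding a different trait value copies that neighbour's value. Choosing uniformly among differing neighbours is an assumption made here for simplicity.

```python
# Sketch of a non-progressive copying event (Axelrod/opinion-dynamics style).
import random

def copy_event(a, traits, A, rng=random):
    # Neighbours of a (positive edge weight) holding a different trait value.
    differing = [b for b, w in enumerate(A[a]) if w > 0 and traits[b] != traits[a]]
    if differing:
        traits[a] = traits[rng.choice(differing)]
    return traits
```

If every neighbour already agrees with agent `a`, the event has no effect, mirroring the rule that the copy rate is positive only while some neighbour differs.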
The local carrying capacity K_t(x) is defined for each time t as the maximum number of people that can be sustained by foraging within the short-range interaction radius around the location x, and is assumed to depend linearly on the suitability (see [27] for empirical evidence of this relationship). For each agent α, we calculate a local population at the agent position, to which the agent α itself contributes with its number of camp members. In the case of two agents being close enough to have overlapping foraging areas, they each contribute to the local population of the other agent proportionally to the size of the intersection of their foraging areas. While the local population is smaller than the local carrying capacity, the population of an agent grows exponentially with a constant rate. Otherwise, we assume an exponentially declining agent population.

In reality, the population of a camp cannot grow indefinitely, as after a certain point it can become too large to stay organized as a single unit [57,58]. We thus define a fission threshold h_fis that sets a maximum for the population size of an agent. If a fission event occurs, i.e. the agent population exceeds the threshold h_fis, the camp represented by agent α splits up into two agents with the same position and cultural status as the original agent, into which the original agent population is equally distributed.
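A toy version of the deterministic demography just described: exponential growth while the local population is below the local carrying capacity, exponential decline otherwise, plus the fission check against h_fis. The per-step rates are illustrative placeholders, not the calibrated values (only the fission threshold of 60 is taken from the parameter discussion later in the paper).

```python
# Toy deterministic demography: growth below capacity, decline above it,
# and a fission split once the camp exceeds the threshold h_fis.
GROWTH, DECLINE, H_FIS = 0.02, 0.02, 60  # illustrative per-step rates

def update_population(pop, local_pop, capacity):
    if local_pop < capacity:
        pop *= (1 + GROWTH)   # exponential growth at a constant rate
    else:
        pop *= (1 - DECLINE)  # exponential decline at a constant rate
    return pop

def maybe_fission(pop):
    # Above h_fis the camp splits into two equally sized camps that
    # inherit the same position and cultural status.
    if pop > H_FIS:
        return [pop / 2, pop / 2]
    return [pop]
```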
Given that hunter-gatherer survival and reproductive abilities depend on camp-wide division of labour, cooperation and sharing [43,58–60], we also assume that a camp needs a minimum number of members to be able to survive, and thus define a fusion threshold h_fus that sets a minimum population for an agent [47]. If the population of an agent α falls below the fusion threshold h_fus, the members of agent α try to get taken in by another nearby agent within their long-range interaction radius. In the case that there is another agent β within range, the two agents can merge, which is realized by modifying the agent β so that its camp population is increased accordingly and, in the progressive case, its cultural status adjusted. We assume that knowledge about the variants of a feature is conserved in the merging process, which means that the trait value of agent β after merging is increased to the trait value of agent α in the case that agent α had a higher trait value in that feature, i.e. the merged camp instantly learns all variants known to the camp it merges with. When there are no nearby camps for a fusion process, camp α goes extinct. The number of camps is thus varying in time, but ultimately bounded by the carrying capacity of the landscape.
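The fusion event can be sketched in the same spirit: populations add up, and for progressive features the merged camp keeps the higher trait value per feature, so knowledge is conserved. The dictionary representation of a camp is a simplification for illustration.

```python
# Sketch of a fusion event: a camp below h_fus merges into a nearby host.
H_FUS = 18  # fusion threshold from the parameter discussion (section 3)

def fuse(small, host):
    host = dict(host)  # copy so the original host record is untouched
    host["pop"] = host["pop"] + small["pop"]
    # Knowledge conservation: the host keeps the higher trait value
    # per feature, i.e. it instantly learns all variants of the small camp.
    host["traits"] = [max(s, h) for s, h in zip(small["traits"], host["traits"])]
    return host
```

If no host exists within the long-range radius, the small camp would simply be removed (extinction), as described above.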
Application of the model to Central African hunter-gatherer populations

3.1. Spatial setting of the model and suitability landscapes

Although the model and analytical tools presented above are designed to be generalizable across temporal, environmental and ethnographic settings, in this section we parametrize our model with data from contemporary hunter-gatherer populations living in Central Africa. Our suitability landscapes were derived from an ENM built using N = 749 contemporary hunter-gatherer camps from Central Africa. Suitability values define the likelihood of a map cell being occupied by contemporary hunter-gatherers, as well as its carrying capacity (K_t) [27]. We then used projections of our ENM predictions into 1000- or 2000-year time slices from the present up to 120 000 years before present (BP), using a bias-corrected time series of global terrestrial climate and vegetation to extrapolate our model into the past and, in doing so, obtain suitability landscapes since the last interglacial [27,61]. This time period is of vital importance for our species' evolution in Africa, as it is when we observe well-defined regional cultural variation across the continent in lithic reduction tools (i.e. progressive cultural features) as well as the first evidence for body decoration and hence clear symbolic cultural marking (i.e. cultural diversity in non-progressive cultural features) [62,63]. The aim of the present study is not to make conclusive claims regarding the precise cultural dynamics that would have taken place in Central Africa throughout evolutionary history; instead, it is to demonstrate the potential of our model for testing and discussing cultural evolutionary hypotheses in a spatio-temporally explicit setting of great importance for understanding human history. Future studies should test the results (and parameter settings) from our model against empirical data, to determine the precise drivers of the cultural dynamics that would have taken place in the area over
time. The fact that the resolution of our palaeoclimatic reconstructions ranges from 1000 to 2000 years results in our model having longer phases in which the suitability landscape is stable and some discrete points in time with instantaneous changes in the landscape. In the stable phases of the suitability landscape, the agent distribution can converge temporarily to an equilibrium state, which is possibly disrupted at the time points of change. These piecewise stationary dynamics can only arise if the convergence to a new equilibrium happens on a time scale that is fast enough compared with the frequency of landscape changes. As the accurate modelling of residential movements and social interactions requires a time step size of one simulated month, but the resolution of the suitability data is much coarser, we do not update the suitability landscape in every time step during stable phases, but only at the times of landscape changes.

Model initialization and parameter specification

To calculate the potential environmental carrying capacity at each time step, following [27,49] we considered a short-range interaction radius for foraging (r_1) of 20 km of land around each agent (i.e. camp). This radius was determined on the basis of the estimation by Olivero et al. [49] of the mean radius (18.5 ± 1.0 km) encircling a camp that is regularly used for subsistence activities, as well as the average travel distance covered for foraging activities from 36 studies (21.0 ± 3.65 km).
Fission and fusion thresholds were derived from data on the size of 75 contemporary Central-African hunter-gatherer (CAHG) camps from [27], alongside additional data compiled from the literature and available in electronic supplementary material, dataset 1. These ranged from 8 to 174 individuals (mean = 50 individuals; median = 47 individuals). After performing sensitivity analyses (electronic supplementary material, §7), we found that a fusion threshold of N = 18 was optimal in order to maximize cultural diversity (electronic supplementary material, figures S8 and S9). We also found that diversity in progressive traits (but not in non-progressive traits) increased with increasing fission thresholds up to 60 individuals (electronic supplementary material, figures S4 and S5). Similarly, we found a steep negative relationship between both our fusion and fission thresholds and the ability of agents to accumulate complex traits (mean trait complexity across camps) (electronic supplementary material, figures S7 and S10). This is because higher fusion thresholds result in less populated agents, which in turn may make agents more vulnerable to the loss of adaptive (more complex) variants [14]. At the same time, higher fission thresholds result in fewer agents in the landscape, and therefore reduced chances of having models from whom to learn adaptive variants. However, a fusion threshold of N = 18 and a fission threshold of N = 60 marked an inflection point in this negative relationship, and resulted in a distribution of camp populations that closely resembled that observed in the compiled dataset from contemporary hunter-gatherer populations, with camp sizes of fewer than 18 individuals or more than 60 individuals corresponding to the smallest and largest 12% of camps (electronic supplementary material, figures S2 and S3).
We then ran 10 model runs, with c = 3 cultural features, for each combination of parameters (table 1) for the 120 000 years for which we had available suitability landscapes. In addition, we recorded every time an agent (camp) moved, and the distance it travelled. Since agents in our model updated their position in each time step, we defined agent movements as travel distances that were at least 2 km long. As the computational effort for simulating the ABM is very high, but the results from the mobility model are very consistent, we limited the number of simulations for each parameter setting to a rather small number to enable the exploration of more different settings (see table 1 for the range of the selected parameter values; the electronic supplementary material, table S1 also contains the technical scaling parameters). However, because of this limitation, we restrict the analysis of the cultural dynamics to macroscopic variables that behave consistently throughout the different simulations and to the mesoscopic patterns of single simulations.

The agent system was initialized with random positions drawn according to the equilibrium distribution of the initial suitability landscape. The cultural status was initialized at random in the non-progressive case, and with the minimum trait value in each feature in the progressive case. We then let the system run for an additional time period, such that the system state at the initial time of our case study already features some level of cultural diversity, as well as cultural similarity between agents that form a strongly connected interaction network. The numerical simulation was performed using a combination of the Gillespie algorithm and the Euler–Maruyama scheme, as also used in [40].
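A minimal sketch of one Gillespie draw for the cultural jump events mentioned above: sample an exponential waiting time from the total event rate, then select one event with probability proportional to its rate. The interleaving with Euler–Maruyama mobility steps used by the authors is omitted, and the event names are placeholders.

```python
# One step of the Gillespie (stochastic simulation) algorithm over a
# dictionary mapping event names to their current rates.
import math
import random

def gillespie_step(rates, rng=random):
    total = sum(rates.values())
    if total == 0:
        return None, math.inf  # no event can fire
    # Waiting time ~ Exp(total); 1 - random() lies in (0, 1], avoiding log(0).
    wait = -math.log(1.0 - rng.random()) / total
    # Select one event with probability proportional to its rate.
    u, acc = rng.random() * total, 0.0
    for event, rate in rates.items():
        acc += rate
        if u <= acc:
            return event, wait
    return event, wait  # numerical edge case: return the last event
```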
Model validation with ethnographic data

Given that our simulation time steps represented one month, we first extracted the number of movements per year, average distance (per move) and total distance moved per year from our simulations, and compared them with published data from the Aka and Mbuti Western and Eastern CAHG (respectively) from [58]. We found that our model reproduced very closely the mobility patterns observed in both populations (table 2). This suggests that the modelled relationships between changing suitability landscapes, demography and mobility are representative of those driving the population dynamics of hunter-gatherers currently living in the area (see also electronic supplementary material, dataset 1 for a more extended comparison with the existing literature). Consequently, after verifying that our model accurately reproduced the population dynamics of Central African hunter-gatherers, we could use its predictions, alongside the reconstructed past demographies and environmental carrying capacities, to determine the location of clusters of interacting agents at every time period. In doing so, we could assess how changing suitability landscapes would have affected the ability of hunter-gatherer groups to interact with one another. More precisely, we studied how environmentally driven changes in demography, and resulting changes in mobility patterns, would impact the 'effective size' of the populations of agents regularly interchanging culture (or genes) over time [11]. To identify clusters of regularly interacting agents at every time period (henceforth 'mobility clusters'), we considered the positions of all agents within a time horizon [t_1, t_2] that corresponded to a chosen suitability landscape. Then, we used a hierarchical density-based clustering approach [64] to pinpoint the areas in which the agents were densely connected. As expected, we observe that more fragmented suitability landscapes would have resulted in a greater number of mobility
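The paper uses a hierarchical density-based clustering approach [64]. As a much simpler stand-in, the sketch below groups agents into connected components under a fixed distance threshold, which conveys the same intuition of densely connected 'mobility clusters'; the threshold value is an illustrative assumption.

```python
# Simplified stand-in for density-based clustering: connected components
# of the graph linking agents closer than a distance threshold.
import math

def clusters(positions, radius):
    n = len(positions)
    labels = [-1] * n  # -1 means "not yet assigned to a cluster"
    cur = 0
    for i in range(n):
        if labels[i] != -1:
            continue
        labels[i] = cur
        stack = [i]
        while stack:  # flood-fill the component containing agent i
            a = stack.pop()
            for b in range(n):
                if labels[b] == -1 and math.dist(positions[a], positions[b]) <= radius:
                    labels[b] = cur
                    stack.append(b)
        cur += 1
    return labels
```

Agents sharing a label belong to the same mobility cluster; a more fragmented landscape pulls agents apart and therefore yields more labels.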
clusters and, therefore, in reduced region-wide connectivity (figure 3; electronic supplementary material, figures S13 and S14).

Hunter-gatherer mobility allows the maintenance of cultural differentiation

After identifying mobility-based clusters, we then assessed how these would have impacted cultural dynamics. In doing so, we could gain insights into how environmentally driven changes in regional connectivity could have affected the number of agents potentially exchanging culture, and consequently cultural differentiation over time. For this analysis, we introduced a novel spatio-temporal clustering method to identify culturally connected regions. We defined agents to be close in status if they either had the exact same status, in the non-progressive case, or if the Euclidean distance between their status vectors was small, in the progressive case (see electronic supplementary material, §5 for more details). We used these definitions of status closeness to construct a time-averaged connectivity matrix C([t_1, t_2]), from which we could derive connected components corresponding to clusters based on the cultural
status of agents. As time intervals for the matrix construction, we chose the time intervals during which the suitability landscape is static. Via the mobility data, we could make the connection between the cultural clusters and the areas of the landscape that the corresponding agents inhabited. We did this by assigning to each location the cultural cluster that most frequently occupied it. Although most of the time agents that were close in space tended to also be close in cultural status, overall we find that, for both types of cultural traits, mobility clusters could encompass several cultural clusters (figures 4 and 5, and additional visualizations in the code and data repository). These results imply that cultural differentiation can be maintained even among hunter-gatherer groups regularly interacting with one another, consistent with empirical work showing that contemporary and ancient hunter-gatherer populations tended to be embedded in extremely large social networks encompassing individuals belonging to different camps, bands and even ethnolinguistic units; moreover, such social organization far from compromised their ability to remain culturally distinct [6,27,36,65–67].
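The status-closeness rule used to build the connectivity matrix C([t_1, t_2]) can be sketched as below. The threshold `eps` for progressive status vectors is an assumption made for illustration; the actual definition is given in the paper's electronic supplementary material, §5.

```python
# Sketch of the status-closeness predicate behind the cultural clustering:
# exact match for non-progressive statuses, small Euclidean distance
# between status vectors for progressive ones.
import math

def close_in_status(s_a, s_b, progressive=True, eps=1.0):
    if progressive:
        return math.dist(s_a, s_b) <= eps  # eps is an illustrative threshold
    return s_a == s_b
```

Averaging this predicate over all time steps in [t_1, t_2] would give one entry of the time-averaged connectivity matrix, whose connected components form the cultural clusters.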
Nonetheless, we also observe that, for a given suitability landscape, clusters based on progressive cultural features tend to be smaller and more localized than those based on non-progressive traits (electronic supplementary material, figures S13 and S14). In the absence of differential preferences in terms of the spatial scale of social learning for different types of traits, the fact that in progressive traits some variants are favoured over others due to their increased efficiency may lead to a more limited diffusion of features across the landscape. In reality, however, we might expect individuals to condition their propensity to acquire these traits on inter-individual distance or cluster membership, given that toolkits might be adaptive only in particular ecologies. This could further reinforce the localized nature of subsistence toolkit repertoires, and once more highlights the importance of considering the particular function of cultural elements when trying to assess the impact of ecological, social or demographic features on their dynamics. Taking such properties into account, these visualizations can be compared with maps showing the distribution of actual cultural traditions in the past and present, to gain understanding of the factors driving the spread or disappearance of particular cultural traditions.
The type of cultural features determines the effect of environmental changes on their diversity

When running our model, at each time step we calculated the Simpson's diversity index for progressive and non-progressive (or opinion) cultural features (electronic supplementary material, §8.4). Simpson's diversity was calculated, for each type of cultural feature, as the probability that two randomly selected agents would share the same trait values (for progressive features) or traits (for non-progressive features) across all features. For progressive cultural features, we also calculated the mean trait value across agents in the simulation for each trait (analogous to cultural complexity). When assessing changes in overall cultural diversity throughout our period of study, we observed that agents would have been able to maintain relatively high levels of cultural diversity (figure 6) at all times unless innovation rates γi were set to be extremely low (electronic supplementary material, figure S15). Nonetheless, in line with what has been reported in theoretical studies [34,66], we see punctuated increases and decreases in cultural diversity over the time period studied. On the other hand, for cultural traits for which some variants are more adaptive than others (i.e. progressive traits), and for which individuals who acquire rare complex variants can innovate on top of them, a greater effective number of agents may lead to a greater chance of the sequential innovation of such complex variants and a reduction of the likelihood of them getting lost completely by chance [13,14]. However, the effective population size, i.e.
the number of agents exchanging cultural information [68], is not only dependent on the number of agents comprising a population but also on the interaction patterns between its members (within-group connectivity) as well as their ability to interact with members of other groups [6,10,16,66,68]. In line with these results, we observed that while for non-progressive traits the rate of innovation was the main determinant of cultural diversity (figure 6; electronic supplementary material, figure S15), the adoption rates (w1, w2), i.e. the ability of individuals to socially acquire cultural traits from others, are the main determinants of fluctuations in cultural diversity for progressive traits, as they directly affect the effective population size of agents (electronic supplementary material, table S3). In other words, our results confirm hypotheses that hunter-gatherer inter-camp interactions might be key for preventing the loss of adaptive cultural variants, being able to compensate for low population densities [6].
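The Simpson's-diversity calculation described in this section (the probability that two randomly selected agents share the same trait, averaged across features) can be sketched as below; sampling with replacement is assumed for simplicity, which is an approximation to drawing two distinct agents:

```python
import numpy as np

def simpson_sharing_probability(traits):
    """Probability that two randomly drawn agents (with replacement)
    carry the same trait, averaged over features. `traits` is an
    (agents, features) array of discrete trait values."""
    traits = np.asarray(traits)
    per_feature = []
    for column in traits.T:
        _, counts = np.unique(column, return_counts=True)
        p = counts / counts.sum()
        per_feature.append(float(np.sum(p ** 2)))  # match probability
    return float(np.mean(per_feature))

# Identical agents -> probability 1; maximally diverse -> 1/N
print(simpson_sharing_probability([[0], [0], [0]]))            # 1.0
print(round(simpson_sharing_probability([[0], [1], [2]]), 3))  # 0.333
```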
Social interactions enable the development and maintenance of complex culture even at low population densities

If indeed a larger effective population size of agents helps reduce the probability of losing rare adaptive cultural variants, we would expect a positive effect of both the number of agents and the tendency of agents to exchange culture with one another (i.e. the long-range adoption rate) on the mean trait complexity achieved by populations for progressive cultural traits. Indeed, our simulations revealed positive correlations between the number of agents at any given time and the average trait complexity achieved across camps (Pearson's r = 0.265, p < 0.001; r = 0.426, p < 0.001; and r = 0.466, p < 0.001 at low, medium and high adoption rates, respectively, at high innovation rates and a loss rate of 0.02; electronic supplementary material, table S2). Similarly, we observed that although changing suitability landscapes had a strong effect on the mean trait values over time, as long as the rate of cultural loss (λi) was smaller than or equal to the long-range adoption rate (ϕ2), CAHG would have been able to prevent the loss of adaptive cultural variants, even during periods of low environmental suitability (figure 7). In other words, while the reduction in population sizes and connectivity caused by deteriorating environmental conditions would have stagnated cultural evolution, hunter-gatherers would have prevented the loss of adaptive variants through maintaining a baseline level of interactions with members of other camps. This is also exemplified by the fact that the correlation between population size and mean trait complexity was lower when long-range adoption rates were higher (electronic supplementary material, table S2). Similarly, our simulations illustrate a clear pathway through which isolation can lead to loss of complex skills, and hence to the complete disappearance of some technologies, as has been argued to have been the case in Tasmania following its
isolation from continental populations at the start of the Holocene [14]. Our results therefore highlight the importance of connectivity for the ability of cultural traits to increase in complexity, and thus in their adaptiveness, over time.

Discussion

We have presented an ABM aimed at assessing ecological drivers of hunter-gatherer demographic and mobility patterns as well as the impact of such mobility patterns on cultural exchange and evolution over space and time. Furthermore, using data from hunter-gatherer populations in Central Africa throughout the past 120 000 years, we have illustrated the potential of the model to test precise hypotheses in a real-world context over evolutionary time.

Our results add support to a key adaptive role of hunter-gatherer mobility patterns in maintaining the necessary inter-camp connectivity to sustain highly diverse and elaborate cultural repertoires [6,12,68]. The method we develop to visualize clusters of agents based on mobility and on cultural status reveals that extremely high levels of cultural diversity and subpopulation differentiation can be maintained even among highly interconnected populations. Moreover, we found that during periods of reduced population sizes due to deteriorating environmental conditions, hunter-gatherers could have prevented the loss of adaptive variants through maintaining a baseline level of interactions with members of other camps. This was exemplified by the fact that the correlation between population size and mean trait complexity was lower when the rates of sociality (i.e. adoption) were higher. Although, in the present model, mobility strategies were independent of culture, studies have shown that hunter-gatherer groups around the world adapt their mobility regimes to maintain regular contact with members of their cultural group following environmental changes [25,65,69]. Hence, future research including a feedback loop between cultural status and the mobility of agents, such that agents'
cultural status also influences their movement, would shed further light on the implications of the flexible social organization that characterizes hunter-gatherer societies for cumulative culture.

In addition, our findings highlight the importance of considering the function of cultural traits in society when assessing the drivers of their dynamics. For example, while for non-progressive (i.e. symbolic, or non-technological) traits the rate of innovation was the main determinant of cultural diversity, adoption rates, that is, the ability of individuals to socially acquire cultural traits from others, were the main determinants of fluctuations in cultural diversity for progressive (i.e. technological) traits. In other words, our results confirm hypotheses that hunter-gatherer inter-camp interactions might be key for preventing the loss of adaptive cultural variants, being able to compensate for low population densities. Although in the present example we implemented these functional differences by including cultural selection for progressive trait values (although not for non-progressive ones), we did not incorporate natural selection in either case. In other words, agents' trait values did not affect their survival probability or growth rate. Given that cultural differences in some domains can result in differences in the adaptive potential of individuals and groups to particular ecologies (e.g. clothing material in cold environments, traditions of food processing that eliminate naturally present toxins, marriage rules minimizing endogamy, etc.),
future studies could also consider the effect of natural selection acting on cultural traits [70,71]. Last, the framework we have presented allows us to pinpoint the location of groups of agents regularly interacting and the spatial extent of cultural traditions over time. The flexible nature of our model means that other researchers can adjust the inputs and parametrization according to their area and time scale of study. In turn, these can be compared with and tested against archaeological and ethnographic data to identify patterns and processes promoting changes in population and cultural dynamics, such as the spread or disappearance of particular cultural traditions in specific settings.

Figure 1. Snapshot of a simulation illustrating the short-range interactions (a) and the weighted interaction network (b). Blue edges correspond to edge weights for short-range and red edges to edge weights for long-range interactions.

Figure 2. Two consecutive snapshots of a simulation run with c = 3 progressive cultural features. Agents are depicted as dots and coloured according to their cultural status. The borders of spatially grouped agents are marked black.

Figure 3. Two snapshots of the time-dependent suitability landscapes V at 80 000 BP and 10 000 BP with borders of mobility clusters marked in black.

Figure 4. Comparison between clusters based on non-progressive cultural status (coloured areas) and clusters based only on mobility trajectory data (marked by black borders) for three consecutive suitability landscapes centred around 10 000 BP. The relations between the different clusters are illustrated in an alluvial diagram. The simulation parameters are ϕ1 = 0.08 and ϕ2 = 0.008 as rates for short- and long-range adoptions, an innovation rate of γi = 0.001 and a loss rate of λi = 0.02 for each feature i.
Figure 5. Comparison between clusters based on progressive cultural status (coloured areas) and clusters based only on mobility trajectory data (marked by black borders) for three consecutive suitability landscapes centred around 10 000 BP. The relations between the different clusters are illustrated in an alluvial diagram. The simulation parameters are w1 = 0.08 and w2 = 0.008 as rates for short- and long-range adoptions, an innovation rate of γi = 0.0001 and a loss rate of λi = 0.02 for each feature i.

Figure 6. Changes in cultural diversity over time at high innovation rates γi and variable adoption rates w1, w2 and a loss rate of λi = 0.02 for each feature i. The figure shows the average values over 10 simulations.

Figure 7. Changes in mean trait complexity over time, at high innovation rates γi and a loss rate of λi = 0.02 for each feature i.

Table 1. Overview of the parameter values used.

Table 2. Mobility of CAHG compared with mobility in our model.
Kinetics of Catalytic Wet Peroxide Oxidation of Phenolics in Olive Oil Mill Wastewaters over Copper Catalysts

During olive oil extraction, large amounts of phenolics are generated in the corresponding wastewaters (up to 10 g dm−3). This makes olive oil mill wastewater toxic and conventional biological treatment challenging. The catalytic wet peroxide oxidation process can reduce toxicity without significant energy consumption. Hydrogen peroxide oxidation of phenolics present in industrial wastewaters was studied in this work over copper catalysts, focusing on understanding the impact of mass transfer and establishing the reaction kinetics. A range of physicochemical methods were used for catalyst characterization. The optimal reaction conditions were identified as 353 K and atmospheric pressure, giving complete conversion of total phenols and over 50% conversion of total organic carbon content. The influence of mass transfer on the observed reaction rate and kinetics was investigated, and parameters of the advanced kinetic model and activation energies for hydrogen peroxide decomposition and polyphenol oxidation were estimated.

INTRODUCTION

Phenols are important industrial chemicals widely used as reactants and solvents in numerous commercial processes and are therefore often present in industrial effluents. The major anthropogenic sources of phenol-contaminated wastewaters are the petrochemical, pharmaceutical, wood, pulp and paper, and food processing industries as well as landfill and agricultural area leachate waters. 1 There are several environmental concerns regarding phenols; thus, they are considered to be hazardous in industrial wastewaters, harmful even at low concentration levels (ppm range). Wastewaters containing phenols should therefore undergo a special treatment.
In the EU, the current limits for wastewater emission of phenols are 0.5 mg dm−3 (0.5 ppm) for surface waters and 1 mg dm−3 (1 ppm) for sewage systems, with maximum allowed concentration levels in potable and mineral waters of 0.5 μg dm−3 (0.5 ppb). Significant quantities of phenolics are generated in olive oil mill wastewater (OOMW), including organic contaminants such as lignin, tannins, and polyphenolic compounds. Amounts of olive mill wastewater exceeding several million tons are produced in Europe alone despite stringent legislation 2 and are not properly treated. The properties of OOMWs depend on the method of extraction, feedstock properties, and region and climate conditions. In general, OOMW is a dark brown acidic effluent (pH = 4.0−5.5), with a distinctive odor and high conductivity, comprising, besides water (80−83%), organic compounds (15−18%) and inorganic elements (2%, potassium salts and phosphates). The concentration of phenols and polyphenols in OOMW can be as high as 20 wt %. 3 Although several studies were reported on removal of phenolics in OOMW, 4 significant efforts are still needed. Often separation-based technologies are suggested as an alternative to biological processes; however, their effective application can be hindered by high operational costs and sustainability concerns such as generation of secondary toxic wastes, because the toxic compounds are not destroyed but only separated. 5 Therefore, in the current work, the focus was on catalytic approaches to diminish the content of phenolics in OOMW. In particular, the catalytic wet peroxide oxidation (CWPO) process is a suitable method 6 generating hydroxyl radicals during hydrogen peroxide decomposition. Hydrogen peroxide is generally considered a nontoxic and ecologically attractive oxidant. Application of heterogeneous catalysts, such as zeolites, 7 in CWPO of organic compounds has been reported.
Transition metal-exchanged (mostly iron and copper) zeolites of FAU or MFI morphology showed promising results; however, there are still some open issues such as resistance to leaching of the active metal during the reaction. Apart from a few recent reports, 8−11 most of the studies describe application of powdered catalysts for which mass transfer limitations can be neglected. 7,12,13 It is clear that scaling up a commercial CWPO process requires detailed studies with pelletized catalysts. In this case, external and internal mass transfer (i.e., diffusion processes in the boundary layer surrounding the catalyst pellet and in the pores of the catalyst) should be properly considered. In this work, the activity of copper-containing catalysts was tested in catalytic wet peroxide oxidation of OOMW with special attention to the stability of copper during the reaction, namely, its resistance to leaching. The influence of interparticle and intraparticle diffusion was investigated, and the reaction kinetics parameters of the proposed pseudo-second-order kinetic model were estimated.

2.1. Catalyst Preparation. Depending on the bead size, between 2.5 and 10 g dm−3 zeolite was ion-exchanged with a 0.05 M copper acetate solution under agitation at 298 K for 0.5 to 3 h, followed by filtration of the samples and drying overnight at room temperature to obtain copper-bearing zeolites with a similar metal content. The detailed preparation method was described previously. 14 After copper ion exchange, postsynthesis thermal treatment was performed, consisting of calcination at 1273 K for 5 h (ramp 2 K min−1), to achieve materials exhibiting a higher stability against the loss of the active metal component during the reaction. The list of prepared catalysts and their designated names is presented in Table 1.

2.2. Catalyst Characterization. Textural characterization of the catalysts was performed by nitrogen physisorption at 77 K using a Sorptomatic 1900 Carlo Erba instrument.
Prior to measurements, the samples were outgassed at 423 K for 3 h at reduced pressure below 0.1 mbar. The specific surface area and pore volume calculations were performed using Dubinin's equation for microporous and the Brunauer−Emmett−Teller equation for mesoporous samples. Pore size distributions were acquired using the Horvath−Kawazoe method. The crystalline structures of the parent zeolite and the prepared zeolite-based catalysts containing copper were evaluated by powder X-ray diffraction (XRD) analysis on an XRD 600 Shimadzu instrument. Cu Kα was used as the radiation source at a wavelength of 0.154 nm, with 2θ from 5 to 60° and a 0.02° step size. The peak identification was performed using X'Pert HighScore Plus software. The morphology of the fresh and spent zeolite-based copper catalysts Cu/13X and Cu/13X-K1273 was studied using scanning electron microscopy (SEM) and transmission electron microscopy (TEM). SEM analysis was performed on carbon-coated samples using a LEO Gemini 1530 instrument equipped with a Thermo Scientific UltraDry Silicon Drift Detector. The transmission electron microphotographs were taken by a JEM-1400 Plus microscope operated at 120 kV acceleration voltage. The powdered samples were suspended in 100% ethanol under ultrasonic treatment for 10 min. For each sample, a drop of the ethanol suspension was deposited on a Cu fiber carbon grid (200 mesh) and evaporated, after which the images were recorded. Copper loading was measured using a UV/vis spectrometer (UV1600PC, Shimadzu) at 270 nm for the parent solution of copper acetate applied during ion exchange and later confirmed by energy-dispersive X-ray microanalysis (EDXA) during SEM analysis and by inductively coupled plasma-optical emission spectrometry (ICP-OES) (PerkinElmer, Optima 5300 DV) after dissolution in HF.
The basicity of the prepared catalysts was elucidated using temperature-programmed desorption (TPD) of CO2 on an AutoChem 2010 (Micromeritics Instruments) in the temperature range of 373−1173 K according to the method described by Kumar et al. 15 Infrared spectroscopy (ATI Mattson FTIR) was applied to elucidate the strength of Brønsted and Lewis acid sites using the KBr pellet technique, working in the range of wavenumbers of 4000−400 cm−1 with pyridine as the probe molecule. A detailed description of the analytical procedure is available. 16

2.3. Catalytic Experiments. The catalytic experiments were carried out under atmospheric pressure in a 250 cm3 glass batch reactor equipped with a pH electrode and a temperature sensor. The stirring speed in the range between 50 and 800 min−1 and catalyst particle sizes from ca. 0.3 to 2.0 mm were varied to address the impact of mass transfer. For elucidation of the reaction kinetics, the catalyst loading, reaction temperature, and hydrogen peroxide concentration were varied. OOMW was supplied by a private oilery (Dalmatia Region, Croatia) from a three-phase extraction process of olive oil production from a green olive stock mixture (local sort Olea europaea var. oblica). Basic properties of the wastewater are presented in Table 2. Prior to reactions, OOMW was filtered through a 100 μm nylon filter bag and diluted with distilled water (v/v = 50:50). UV−vis absorbance was applied to monitor the concentration of phenolics and hydrogen peroxide. The standard Folin−Ciocalteu method at 765 nm described in the literature 17 was used to measure the total phenol concentration. A standard curve of gallic acid was used for quantification, and the results were expressed as gallic acid equivalent (GAE) concentrations. The ammonium metavanadate spectrophotometric method at 450 nm adopted from ref 18 was used for measuring hydrogen peroxide concentrations.
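The gallic-acid calibration underlying the GAE quantification can be sketched as a simple linear fit of absorbance at 765 nm against standards of known concentration; the standard concentrations and absorbances below are illustrative values, not the paper's measurements:

```python
import numpy as np

# Hypothetical gallic-acid standards: concentration (mg dm^-3) vs A765
std_conc = np.array([0.0, 50.0, 100.0, 200.0, 400.0])
std_abs = np.array([0.02, 0.11, 0.21, 0.40, 0.79])

# Linear calibration: A765 = slope * c + intercept
slope, intercept = np.polyfit(std_conc, std_abs, 1)

def gae_concentration(absorbance, dilution_factor=1.0):
    """Convert a measured A765 into a GAE concentration (mg dm^-3),
    correcting for any dilution applied before the measurement."""
    return (absorbance - intercept) / slope * dilution_factor

# A sample reading of 0.40 should fall near the 200 mg dm^-3 standard
print(round(gae_concentration(0.40)))
```

In practice, a blank of the OOMW matrix would be subtracted first, as the paper notes, to remove the contribution of the wastewater's own colour and turbidity.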
The measured absorbances were recalibrated with reference to OOMW sample blanks not containing phenols or hydrogen peroxide to eliminate the potential error from the existing color or turbidity of the wastewater. Total organic carbon (TOC) was evaluated with a TOC-V CSN Shimadzu analyzer using diluted reaction mixtures, and the chemical oxygen demand (COD) of the selected samples was measured by a UV/vis spectrometer using the dichromate colorimetric method at 605 nm (Hach-Lange cuvette tests). The copper content in the reaction mixture, reflecting metal leaching, was determined by atomic absorption spectrometry of diluted reaction mixture solutions on a Shimadzu AAS 6300 using a Cu hollow cathode at λ = 324.9 nm. X-ray powder diffraction analysis and N2 physisorption measurements were conducted to reveal potential structural changes and coking.

3. RESULTS AND DISCUSSION

3.1. Catalyst Characterization. After testing all prepared catalysts having different sizes, it was concluded (see below) that Cu/13X-1 with the size range of 0.4−0.63 mm is the most appropriate for CWPO. Table 3 thus contains the results obtained from N2 physisorption analysis of Cu/13X-1 and its thermally treated counterpart. The incorporation of copper in 13X zeolite did not have a significant effect on the measured surface area. The thermal treatment resulted in a decrease of both specific surface area and pore volume, with a shift of the pore size distributions (Figure 1) from the microporous (Cu/13X-1) to the mesoporous range (Cu/13X-K1273-1). Such pronounced differences in the physical properties for the catalyst calcined at 1273 K can be attributed to structural changes during thermal treatment. XRD diffractograms of Cu/13X-1, already presented in ref 19, confirm the FAU structure, as no shifts in the peak positions and no significant diffraction lines assigned to any new or impurity phase were observed.
XRD suggested 19 high crystallinity of the copper-containing material, as incorporation of copper into the zeolite framework via ion exchange does not influence the crystal structure. In agreement with the literature, 20 the obtained results indicate that Cu2+ ions are well dispersed in the zeolite framework of 13X and that the size of copper particles is below the detection limit of the XRD measurement (<2−4 nm). In fact, from the TEM image of the Cu/13X-1 catalyst (Figure 2a), very small metal particles highly dispersed in single zeolite crystals can be observed. Their average size calculated using TEM analysis was 1.7 nm. The copper-bearing zeolite calcined at 1273 K exhibited phase transformations from a zeolite to a silicate-based material upon heating. As previously reported, 19 several phases were determined for Cu/13X-K1273-1, including magnesium silicate, copper oxide, anorthoclase (Na0.85K0.14AlSi3O8), and andesine (Na0.685Ca0.347Al1.46Si2.54O8). Changes in crystal phases upon thermal treatment were in line with the decrease of the surface area and pore volume (Table 3). The size of CuO in Cu/13X-K1273-1 according to the Debye−Scherrer equation was 26.0 and 25.1 nm for the respective peaks at 35.5 and 38.6°. An average metal particle size analysis using TEM was not applicable for the Cu/13X-K1273-1 catalyst because of a poor resolution between the dark metal particles and the dark surface of single catalyst crystals (Figure 2b). The increase in the size of the metal particles in the Cu/13X-K1273-1 catalyst is most probably a consequence of metal sintering and clustering of smaller metal particles into larger ones that occurs during thermal treatment. 21 The morphology, shape, and size of crystals of the Cu/13X-1 and Cu/13X-K1273-1 catalysts were additionally characterized by electron microscopy. Although agglomerated, these can be associated with X zeolite morphology similar to that reported previously.
22 Single crystals in Cu/13X-K1273-1 were observed to be larger in size, of irregular shapes, and of a broad crystal size range (Figures 2b and 3b), in agreement with XRD showing the presence of several crystal phases. To evaluate metal dispersion across the surface, SEM imaging in a backscattering mode of the pellets and the cross sections of pellets was performed (Figure 4). The brighter areas in the backscattering images are representative of the higher densities of the heavier elements (copper). It can be noticed that copper is consistently spread over the surface of the Cu/13X-1 catalyst (Figure 4a,b), whereas in the case of the Cu/13X-K1273-1 catalyst (Figure 4c,d), copper is mainly located on the outer catalyst surface and in a narrow band, several micrometers in width, close to the pellet surface. Migration of copper from the inside of the pellet to its outer surface is most probably a consequence of the structural changes during thermal postsynthesis treatment. XPS analysis was used for the identification of the oxidation state of copper cations in Cu/13X-1 and Cu/13X-K1273-1. From the XPS spectra presented in Figure 5, characteristic peaks were identified for Cu 2p, O 1s, Al 2p, and Si 2p for both catalysts. Differences in the high-resolution spectra of Cu 2p and O 1s indicate that the nature of copper species is different in the Cu/13X-1 and thermally treated Cu/13X-K1273-1 catalysts. The former exhibits only two main peaks at 934 (Cu 2p3/2) and 953.3 eV (Cu 2p1/2), confirming the presence of Cu1+ as in Cu2O. In the high-resolution Cu 2p spectra of the latter, strong Cu2+ satellite peaks at 943.3 and 964.2 eV were present, attributed to the presence of the CuO phase, as previously identified by XRD. 23 Differences in O 1s signals additionally confirm the distinction between the copper oxides found on the surfaces of the Cu/13X-1 and Cu/13X-K1273-1 catalysts.
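The CuO crystallite sizes quoted above (26.0 and 25.1 nm at 2θ = 35.5 and 38.6°) follow from the Debye−Scherrer equation, d = Kλ/(β cos θ). A sketch of the calculation is given below; the peak FWHM value is assumed for illustration, since the text does not report it:

```python
import math

def scherrer_size_nm(two_theta_deg, fwhm_deg, wavelength_nm=0.154, K=0.9):
    """Crystallite size d = K * lambda / (beta * cos(theta)), with beta
    the peak full width at half maximum converted to radians and
    theta = 2theta / 2. Shape factor K = 0.9 is the common default."""
    theta = math.radians(two_theta_deg / 2.0)
    beta = math.radians(fwhm_deg)
    return K * wavelength_nm / (beta * math.cos(theta))

# An assumed FWHM of about 0.32 deg gives sizes in the ~26 nm range
for two_theta in (35.5, 38.6):
    print(two_theta, round(scherrer_size_nm(two_theta, fwhm_deg=0.32), 1))
```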
It should be noted that reduction of finely dispersed Cu2+ under exposure to the X-ray beam during XPS analysis in the case of the Cu/13X-1 catalyst cannot be excluded. Therefore, the difference between the catalysts can also be related to difficulties in reduction of the larger CuO particles in the case of Cu/13X-K1273-1 during the XPS measurements. During catalyst preparation, the influence of metal incorporation into the zeolite support as well as the influence of postsynthesis thermal treatment on the acid−base properties of the parent and copper-bearing zeolites was investigated. CO2-TPD profiles of the parent zeolite as such (13X), its calcined form (13X-K1273), the copper zeolite (Cu/13X-1), and the calcined material (Cu/13X-K1273-1) were presented previously. 19 The calculated amounts of desorbed CO2 are given in Table 4. Weak, medium, and strong basic sites were identified in 13X and copper-modified 13X zeolites, 19 which is explained by the application of the sodium form of the commercial zeolite for catalyst preparation as well as by the intrinsic (structural) basicity of oxygen atoms present in the zeolite. 24 Copper-containing zeolite Cu/13X-1 exhibited much higher quantities of desorbed CO2 related to strong basic sites (>750 K), indicating a more pronounced basicity of the copper-exchanged zeolite. High temperature, however, can in general also lead to structural changes of the zeolite, thus preventing a straightforward assignment of high-temperature peaks to strong basic sites. This possibility was ruled out because only strong basic sites were seen for the thermally stable Cu/13X-K1273-1. Acidity measurements were reported previously, 19 showing that copper-containing catalysts exhibited Lewis acidity, which can be explained by the presence of copper. 2 Thermal treatment of Cu/13X-1 resulted in a decrease in acidity.
Brønsted acid sites are degraded upon severe heat treatment above 773 K, 24 whereas the Lewis acidity from Cu2+ present in Cu/13X was diminished by the formation of copper oxide, which shows a more basic character. As reported previously, 19 higher acidity was measured for Cu/13X-K1273-1 compared to that for the copper-free counterpart.

3.2. Preliminary Catalytic Experiments and Analysis of Internal Mass Transfer. Catalytic wet peroxide oxidation of OOMW was performed under mild reaction conditions. During preliminary studies, the extent of thermal decomposition of polyphenols present in the OOMW was investigated as well as the influence of catalyst addition on the reactant conversion rates. A possible catalytic activity of the parent Na-13X zeolite in the CWPO of phenol was excluded during our previous investigations of a model catalytic system. 14 Preliminary results on catalytic oxidation were already reported, 19 confirming the role of the catalysts in reducing the amount of phenolics and decomposing hydrogen peroxide (Figure 6a,b). Thermal treatment of the catalyst at high temperature was effective in decreasing hydrogen peroxide decomposition, improving also the conversion of total phenols. These results indicate that the oxidant is probably inefficiently used in the reaction on Cu/13X-1 and that hydrogen peroxide is mainly consumed in reactions where hydroxyl radicals are lost and not used for degradation of the polyphenols. In CWPO, oxidation of organic compounds is attributed to the presence of hydroxyl radicals that are generated when hydrogen peroxide is decomposed. The reaction pathways can be presented with Reactions 1−6. In the initial stages of the reaction, hydroxyl and perhydroxyl radicals are produced by hydrogen peroxide decomposition on the catalyst: 25

Cu2+ + H2O2 → Cu+ + HO2• + H+ (1)

Cu+ + H2O2 → Cu2+ + HO• + OH− (2)

Both radical species are capable of oxidizing the organic compounds; however, the reactivity of hydroxyl radicals is dominant.
7 Catalytically produced hydroxyl radicals react with phenolic compounds, oxidizing them through a series of intermediates to carbon dioxide and water when complete mineralization is achieved:

HO• + phenolics → intermediates → CO2 + H2O (3)

Hydroxyl radicals are very reactive, and they are involved in a number of competing side reactions, such as scavenging of hydrogen peroxide and termination between the hydroxyl and perhydroxyl radicals: 7

HO• + H2O2 → HO2• + H2O (4)

HO• + HO2• → H2O + O2 (5)

HO2• + HO2• → H2O2 + O2 (6)

If the latter reactions of hydroxyl radicals are dominant, hydrogen peroxide will be consumed fast and the majority of the generated hydroxyl radicals will be spent inefficiently in undesired side reactions. This could be considered a preferred reaction pathway if intraparticle diffusion resistances for the phenolic molecules are present. In this case, only hydrogen peroxide would be adsorbed and decomposed on the catalytically active sites on the internal catalyst surface, whereas adsorption of polyphenols would be limited mostly to the outer surface of the catalyst. In the absence of organic compounds, the hydroxyl radicals formed inside the catalyst would for the most part react with one another and with hydrogen peroxide. Taking into account the average pore size in the Cu/13X-1 catalyst (2 nm) and the cross sections of hydrogen peroxide (0.15 nm) and polyphenols (1−2 nm), configurational diffusion limitations could be expected for the phenolic compounds found in the OOMW. In addition to configurational limitations, intraparticle resistances for hydrogen peroxide and polyphenols could be present. They were verified using the Weisz−Prater criterion 26

C_WP = (n + 1)·r_obs·R^2/(2·c_s·D_e) ≪ 1 (7)

where r_obs is the observed reaction rate, R is the particle radius, c_s is the molar concentration of the solute at the catalyst surface, D_e is the effective diffusion coefficient of the solute, and n is the reaction order.
For porous media and the random pore model, the effective diffusion coefficient is defined as D_e = D·(ε/τ), where D is the diffusion coefficient, ε is the porosity, and τ is the tortuosity, which are connected to the structural characteristics of the catalyst and the pore geometry. The calculations of the diffusion coefficients were performed for hydrogen peroxide and phenol diffusing in water using the data and expressions obtained from the thermodynamic properties databank.28 In the absence of thermodynamic data at the critical point for polyphenols such as hydroxytyrosol or tyrosol, which are most commonly found in the OOMW, phenol was chosen as a model compound for the calculations. Because polyphenols are more complex and larger molecules than phenol, it is reasonable to expect that if internal transfer limitations exist for phenol, they would be even more pronounced for polyphenols. The obtained values of the diffusion coefficients at 353 K and normal pressure (typical reaction conditions) were 6.0 × 10−9 m2 s−1 for hydrogen peroxide and 3.5 × 10−9 m2 s−1 for phenol. The Weisz−Prater criterion (eq 7) was applied for the observed initial reaction rates of hydrogen peroxide decomposition (r_HP,obs = 3.7 × 10−4 mol dm−3 s−1) and polyphenol oxidation (r_TPh,obs = 5.3 × 10−5 mol dm−3 s−1) and their corresponding surface concentrations, with the mean catalyst particle diameter of 0.515 mm and an ε/τ ratio of 0. On the other hand, because of the larger pore sizes of the calcined Cu/13X-K1273-1 catalyst (average pore size of 5 nm), internal diffusion in this reaction should not be as significant as in the case of the Cu/13X-1 catalyst, and faster decomposition of hydrogen peroxide and oxidation of polyphenols should be expected. However, this is not the case.
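The Weisz−Prater check described above can be sketched numerically. Only the measured rate, particle size, and diffusion coefficient come from the text; the ε/τ ratio (truncated in the source) and the surface concentration are illustrative assumptions:

```python
# Weisz-Prater modulus (eq 7 in the text): C_WP = n * r_obs * R^2 / (D_e * c_s).
# C_WP << 1 indicates negligible internal (pore) diffusion limitation.
# The eps/tau ratio (0.1) and surface concentration (~1 M) used below are
# illustrative assumptions, not values taken from the paper.

def effective_diffusivity(D, eps_over_tau):
    """Random-pore model: D_e = D * (porosity / tortuosity)."""
    return D * eps_over_tau

def weisz_prater(r_obs, R, D_e, c_s, n=1):
    """Dimensionless Weisz-Prater modulus (SI units throughout)."""
    return n * r_obs * R**2 / (D_e * c_s)

R = 0.515e-3 / 2                        # particle radius, m (0.515 mm diameter)
D_HP = 6.0e-9                           # H2O2 diffusivity in water, m^2/s (text)
D_e = effective_diffusivity(D_HP, 0.1)  # assumed eps/tau = 0.1
r_obs = 3.7e-4 * 1e3                    # mol dm^-3 s^-1 -> mol m^-3 s^-1
c_s = 1.0e3                             # assumed surface conc., mol m^-3 (~1 M)

C_WP = weisz_prater(r_obs, R, D_e, c_s)
print(f"C_WP = {C_WP:.4f}")
```

With these placeholder inputs the modulus is well below unity; the conclusion in the paper of course rests on the authors' actual ε/τ and surface concentrations.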
The reason for the much slower radical generation rate lies in the fact that the postsynthesis thermal treatment induced migration of the catalytically active species (copper) toward the pellet surface, which was confirmed by SEM imaging in the backscattering mode of the cross sections of Cu/13X-1 and Cu/13X-K1273-1 pellets, as presented in Figure 5b,d. Hydrogen peroxide decomposition in the Cu/13X-K1273-1 catalyst takes place only in a narrow ring a few micrometers beneath the particle surface, where the presence of copper is identified and where polyphenols are also present. In this case, it can be considered that pore diffusion of polyphenols is not as significant as in the case of the Cu/13X-1 catalyst and that most of the generated hydroxyl radicals react with the organic compounds and are not inefficiently spent in fast scavenging reactions inside the catalyst pellet (eqs 4−6). As a result, the rates of polyphenol oxidation are comparable for both catalysts, with a higher extent of oxidation for Cu/13X-K1273-1, resulting in an almost complete removal of the phenolic content after 180 min of reaction. Comparison of Cu/13X and Cu/13X-K1273 during the preliminary studies also included the investigation of their behavior in the CWPO of OOMW, namely, measuring the extent of copper leaching during the reaction as well as analyzing potential changes of the zeolite support after the reaction. XRD diffractograms of both catalysts prior to and after the catalytic experiments are close to each other (Figure 7a,b), indicating good stability of the support, while copper leaching was significantly different. By measuring the copper content in diluted reaction mixtures using atomic absorption spectroscopy, it was determined that after 180 min 38 wt % of copper leached from the Cu/13X-1 catalyst, in striking contrast to only 2 wt % for its counterpart calcined at high temperature, indicating severe instability of Cu/13X-1.
The contribution of the leached copper in the solution to the overall catalytic performance was discussed previously,29,30 concluding that it can be neglected due to inactivation of copper by carboxylic acids. However, in this case, when over 20% of the copper leached from the catalyst before the reaction was initiated by addition of hydrogen peroxide, the homogeneous contribution should not be excluded, because oxidation of organic compounds catalyzed by copper cations in the liquid phase is possible. The copper leaching results were confirmed by energy-dispersive X-ray spectroscopy analysis of the fresh and spent catalysts, showing a 42 wt % loss of copper for the spent Cu/13X-1 catalyst and a 4 wt % loss of copper for the Cu/13X-K1273-1 catalyst. Specific surface area measurements also supported the superior resistance against leaching of the thermally treated catalyst. For the Cu/13X-1 catalyst, the specific surface area and pore volume decreased from the initial S_FRESH = 618 m2 g−1 and V_p,FRESH = 0.34 cm3 g−1 to S_SPENT = 434 m2 g−1 and V_p,SPENT = 0.30 cm3 g−1, respectively, whereas the values for the Cu/13X-K1273-1 catalyst did not change significantly before and after the reaction: S_FRESH = 26 m2 g−1 and V_p,FRESH = 0.03 cm3 g−1 versus S_SPENT = 24 m2 g−1 and V_p,SPENT = 0.04 cm3 g−1, respectively. One possible explanation for the large variations in stability between the calcined and noncalcined catalysts could be the different copper speciation, namely, the presence of Cu+ in Cu/13X-1, as revealed by XPS. To our knowledge, no report on the differences in the stability of Cu+ and Cu2+ in the CWPO of phenolics has been published. However, different coordination of copper inside the zeolite lattice for Cu+ and Cu2+ cations was reported by Vanelderen et al.,31 which could have an impact on their catalytic properties as well. An alternative explanation was proposed by Taran et al. based on a study of Cu-ZSM-5.
The authors have shown that copper catalysts with 1−2 wt % loading possessed the highest activity and reasonable stability,13 whereas an increase in copper content resulted in lower activity and stability. In the current work, for the noncalcined catalyst, the amount of Cu could have been too high to allow formation of a stable material. After calcination, the zeolitic structure was destroyed, giving several new phases; the improved stability could stem from partial encapsulation of CuO particles, which makes the catalyst less prone to leaching. Whatever the explanation, elucidation of the mass transfer influence and kinetic analysis was done for the Cu/13X-K1273 catalyst, for which more efficient use of the oxidant was proven to take place.
Mixing Efficiency and External Mass Transfer. In the case of catalytic wet peroxide oxidation of polyphenols over a solid pelleted catalyst, the following mass transfer processes should be considered: transport of the dissolved reactants from the liquid bulk to the catalyst outer surface and transport inside the pores of the pellet. These effects result in concentration gradients of reactants and products across phase boundaries and within the catalyst particle, as presented in Figure 8. To evaluate all possible mass transfer limitations, a combined theoretical/experimental approach was adopted in this study. Mass transfer coefficients through the external boundary layer and inside the pores were calculated for hydrogen peroxide and the model compound phenol, and the presence of diffusion limitations was evaluated by applying the external mass transport and internal pore diffusion criteria for the Cu/13X-K1273-1 catalyst. Additionally, the efficiency of mixing in the reactor was verified, and the reaction conditions required for achieving total suspension of the catalyst were evaluated.
External mass transfer, or the mass transfer in the thin boundary layer around the solid catalyst particle, depends on the hydrodynamic conditions in the reactor (stirring speed), the physical properties of the liquid, and the size of the catalyst particles. In catalytic reactions in which a suspended solid catalyst is used, external mass transfer resistances can be minimized by efficient mixing, which establishes thorough dispersion of the reactants and the catalyst in the liquid, and by the use of smaller catalyst particles. The first step in achieving this is ensuring that under the conditions of the catalytic experiments the solid catalyst is completely suspended in the liquid and that no particles remain at the bottom of the reactor for longer than 1 s. The minimum stirrer speed necessary for total suspension was estimated from a Zwietering-type correlation, where g is the gravitational constant (cm s−2), d_P is the particle diameter (cm), d_M is the impeller diameter (cm), ρ_L and ν_L are the density and kinematic viscosity of water in g cm−3 and cm2 s−1, respectively, B is the weight percentage of the catalyst relative to the weight of the liquid, Δρ = ρ_S − ρ_L, and S is a dimensionless factor that depends on the reactor geometry and impeller type. The external mass transfer criterion (eq 12) was then applied, where r_obs is the observed reaction rate, R is the particle radius, c_b is the molar concentration of the solute in the bulk, and k_LS is the liquid/solid mass transfer coefficient. Liquid/solid mass transfer coefficients for hydrogen peroxide and phenol were calculated on the basis of the correlation between the dimensionless Sherwood, Reynolds, and Schmidt numbers for slurry reactors (eq 13),34 in which the Reynolds number is expressed on the basis of the Kolmogorov theory of turbulence.
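The variable list quoted above (S, B, Δρ, particle and impeller diameters, kinematic viscosity) matches the classic Zwietering correlation for the just-suspended stirrer speed. A sketch under that assumption, in the CGS units used by the text, with all input values purely illustrative:

```python
def zwietering_njs(S, nu, dP, g, drho, rhoL, B, dM):
    """Just-suspended stirrer speed (rev/s) from a Zwietering-type
    correlation (assumed form, consistent with the variable list in the text):
        N_js = S * nu^0.1 * dP^0.2 * (g * drho / rhoL)^0.45 * B^0.13 / dM^0.85
    S depends on reactor geometry and impeller type; CGS units (cm, g, s)."""
    return (S * nu**0.1 * dP**0.2 * (g * drho / rhoL)**0.45
            * B**0.13 / dM**0.85)

# Illustrative call (all values assumed, not taken from the paper):
# S = 5, water at 353 K (nu ~ 0.0037 cm^2/s), 0.5 mm particles,
# 1 g/cm^3 density difference, 1 wt % solids, 4.5 cm impeller.
n_js = zwietering_njs(S=5.0, nu=0.0037, dP=0.05, g=981.0,
                      drho=1.0, rhoL=0.97, B=1.0, dM=4.5)
print(f"N_js ~ {n_js:.1f} rev/s ({n_js * 60:.0f} rpm)")
```

A larger impeller (bigger d_M) lowers the required speed, while denser or larger particles raise it, which is the qualitative behavior the suspension check relies on.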
By rearranging eq 13, the following expression for estimating the liquid/solid mass transfer coefficient k_LS can be derived (eq 14). In eq 14, ϵ denotes the energy of dissipation, D is the diffusion coefficient of the diffusing compound, d_M is the impeller diameter, ρ and η are the density and dynamic viscosity of water, and d_P is the particle diameter. The energy of dissipation, or the maximum specific mixing power, was calculated from ϵ = P/(ρ_L V_L), where P is the mixing power, which depends on the impeller type and stirring speed, N_P is the power number of the impeller, and ρ_L and V_L are the density and volume of the liquid. For a 45° pitched four-blade turbine impeller 4.5 cm wide with N_P = 1.3 in the turbulent region (Re > 10^3), at a stirring speed of 600 rpm mixing a volume of 250 cm3, the maximum specific mixing power was calculated to be 0.96 W kg−1. The mutual diffusion coefficients of the solutes in water were calculated using the Wilke−Chang equation (eq 8), resulting in values of 6.0 × 10−9 m2 s−1 for hydrogen peroxide and 3.5 × 10−9 m2 s−1 for phenol. Next, the external mass transfer coefficients for hydrogen peroxide and phenol were calculated from eq 14, resulting in k_LS,HP = 5.6 × 10−4 m s−1 and k_LS,Ph = 3.8 × 10−4 m s−1. Application of the external mass transfer criterion (eq 12) gave values on the left side of the equation several orders of magnitude lower than those on the right side, showing that external mass transfer of both hydrogen peroxide and polyphenols is negligible even at higher reaction orders and that the reaction mixture is effectively mixed at 600 rpm. These theoretical results were experimentally verified by adopting published procedures for the elimination of external mass transfer.
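The 0.96 W kg−1 specific mixing power quoted above can be reproduced from the quantities given in the text, assuming the standard turbulent power-draw relation P = N_P ρ N³ d⁵ and taking the liquid density as that of water (~1000 kg m−3, an assumption):

```python
def mixing_power(N_P, rho, N, d):
    """Impeller power draw in the turbulent regime: P = N_P * rho * N^3 * d^5
    (N_P power number, rho liquid density kg/m^3, N rev/s, d impeller dia. m)."""
    return N_P * rho * N**3 * d**5

def specific_power(P, rho, V):
    """Energy dissipation per unit mass of liquid, W/kg."""
    return P / (rho * V)

N_P = 1.3        # power number, 45-deg pitched 4-blade turbine (from the text)
d   = 0.045      # impeller diameter, m (4.5 cm)
N   = 600 / 60   # stirring speed, rev/s (600 rpm)
rho = 1000.0     # liquid density, kg/m^3 (water, assumed)
V   = 250e-6     # liquid volume, m^3 (250 cm^3)

P = mixing_power(N_P, rho, N, d)
eps = specific_power(P, rho, V)
print(f"P = {P:.3f} W, eps = {eps:.2f} W/kg")   # ~0.96 W/kg, as in the text
```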
To confirm that under the specific conditions used the reaction was operating with negligible external mass transfer resistances,35 the influence of the stirring speed and particle size on the reaction rates of hydrogen peroxide decomposition and polyphenol oxidation was investigated. The results are presented in Figures 9, 10, and S1. From the results presented in Figure 9, it can be seen that already above 100 rpm there are no significant changes in the reaction rates of hydrogen peroxide decomposition and polyphenol oxidation and that increasing the stirring speed above 600 rpm does not increase them further. This indicates that at stirring speeds above 600 rpm the external mass transfer resistances are minimized and that mass transfer through the boundary layer proceeds faster than the surface reaction. From the above-presented results, it can be concluded that the reaction mixture is most effectively mixed at a stirrer speed of 600 rpm and that decreasing the particle size below 0.8 mm resulted in only a slight improvement in the observed rate of phenol oxidation, excluding the presence of external mass transfer limitations that could influence the reaction kinetics.
3.4. Influence of Catalyst Loading, Initial Concentration of Hydrogen Peroxide, and Temperature. The subsequent experiments, aimed at revealing the optimal initial concentration of hydrogen peroxide, catalyst loading, and reaction temperature, were performed under the above-mentioned reaction conditions. The results of these studies, presented in Figure 11, showed that the initial hydrogen peroxide concentration had the most significant influence on the extent of total phenols and TOC removal. By increasing the initial content of the oxidant, the rate and the extent of total phenols, TOC, and COD removal increased.
At higher initial concentrations of hydrogen peroxide (above 0.75 M), when all total phenols, which constitute approximately 17 wt % of the TOC loading, are eliminated, no significant increase in the oxidation rate of polyphenols can be observed. This is considered to be a consequence of the intensification of the side reactions of hydroxyl radicals and the scavenging effect of the oxidant, as described earlier (eqs 4−6). However, oxidation of the intermediates formed by polyphenol conversion becomes significant, further decreasing the organic content of the reaction mixture. The best results were obtained in the reaction conducted with an initial hydrogen peroxide concentration of 1.34 mol dm−3 at 353 K and with 2.5 g of catalyst, when ∼97% total phenols and 47% TOC reduction were achieved with rather small copper leaching (Figure 12). A higher catalyst bulk concentration gave more prominent hydrogen peroxide decomposition, and total phenol oxidation increased as expected (Figure 11b,e). In contrast to reports published for similar catalytic systems,36 no limit of catalyst loading was observed, and the reaction rates increased proportionally with the mass of the catalyst added to the reactor. The increase in the reaction temperature (Figure 11c,f) had a similarly beneficial effect on the catalyst activity, yielding higher conversions of both reactants at elevated temperatures.
3.5. Kinetic Analysis. In heterogeneous catalysis, intrinsic kinetics can be evaluated only if external or internal mass transfer resistances are not affecting the surface reaction rate.
The above-presented results and discussion of the diffusion influence in the CWPO of polyphenols from OOMW over the Cu/13X-K1273-1 catalyst indicate that for a stirring speed of 600 rpm, a catalyst size of 0.4−0.63 mm, and a catalyst loading of 2.5 g, the external and internal mass transfer resistances for both the oxidant and the polyphenols are minimized and that the surface reaction can be presumed to be the slowest step determining the overall reaction rate. As mentioned before, the CWPO reaction mechanism is very complex, consisting of numerous parallel and serial reactions involving different molecular and radical species. However, the following main reaction steps can be identified: initiation of the catalytic decomposition of hydrogen peroxide, which results in the generation of hydroxyl radicals (eq 1), followed by the oxidation of polyphenols and intermediates (propagation, eq 3), and finally the loss of hydroxyl radicals in undesired side reactions (termination, eqs 4−6). In a very simplistic form, the oxidation pathway can be represented as a lumped scheme. Because of the complexity of the system, performing a detailed kinetic analysis is a challenge even for model wastewaters, where most of the organic compounds present in the reaction mixture are known. The composition of the real wastewater effluent produced during olive oil extraction depends on the production process used, the olive species, and the climate region but, in general, OOMW contains more than 30 different polyphenols as well as other oxidation-prone organic acids that can engage in reactions with hydrogen peroxide and hydroxyl radicals. The presence of inorganic salts such as chlorides and phosphates complicates the matter further.3 The heterogeneity of the OOMW composition as well as the complexity of the reaction scheme make it next to impossible to carry out a detailed kinetic study that would incorporate all individual reactions with all of the initial and intermediate compounds and radicals.
Because of this, the kinetic modeling is often limited to some parameter that represents a group of targeted compounds or the major constituents of the wastewater, such as COD, TOC, or the total phenol content.37,38 On the basis of the kinetic regularities and literature data for similar catalytic systems,39 power-law rate expressions were adopted, in which c_TPh is the molar concentration of total phenols expressed as the gallic acid equivalent, c_HP is the molar concentration of hydrogen peroxide, and γ_CAT is the catalyst loading in g dm−3. The total phenol content was divided into two fractions, more reactive (c_TPh,1) and less reactive (c_TPh,2), based on a preliminary analysis of the obtained experimental data. In all performed experiments, two reaction phases could clearly be distinguished: a fast decrease of the total phenol content, which occurred within the first 30−60 min of the reaction, and slow oxidation of the remaining, less reactive fraction of the total polyphenols present in the reaction mixture. The initial value of the concentration ratio of the two polyphenol fractions was set to 0.5 during the parameter estimation analysis. The reaction constants (k_TPh,1, k_TPh,2, and k_HP), the reaction orders in the reactants and the catalyst (n_1, n_2, n_3, n_4), and the concentration ratio of the polyphenol fractions are the kinetic parameters that were estimated during modeling. These expressions take into account that the oxidation rate of polyphenols depends on the concentrations of both reactants and on the catalyst loading and that the decomposition rate of hydrogen peroxide includes the contribution of not only the reaction with polyphenols but also the side reactions of hydrogen peroxide decomposition. The contribution of noncatalytic hydrogen peroxide decomposition and polyphenol oxidation was also considered, based on the data acquired during reactions without a catalyst. However, it was found that the noncatalytic contribution to the overall reaction rate was marginal.
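A minimal sketch of this two-fraction power-law model, integrated with a simple explicit-Euler scheme. The reaction orders are fixed at the values the fit converged to (first order in phenols, oxidant, and catalyst; second order for peroxide self-decomposition); the rate constants and initial concentrations below are illustrative placeholders, not the fitted values of the paper:

```python
def simulate_cwpo(c1, c2, cHP, gcat, k1, k2, kHP, t_end, dt=0.1):
    """Explicit-Euler integration of a two-fraction power-law CWPO model:
        dc_i/dt = -k_i * c_i * cHP * gcat          (i = 1 reactive, 2 refractory)
        dcHP/dt = -(k1*c1 + k2*c2) * cHP * gcat - kHP * cHP**2 * gcat
    The peroxide balance lumps consumption by the polyphenols together with
    the second-order self-decomposition (side reactions). All rate constants
    here are illustrative; units must simply be mutually consistent."""
    t = 0.0
    while t < t_end:
        r1 = k1 * c1 * cHP * gcat
        r2 = k2 * c2 * cHP * gcat
        rHP = (r1 + r2) + kHP * cHP**2 * gcat
        c1 = max(c1 - r1 * dt, 0.0)
        c2 = max(c2 - r2 * dt, 0.0)
        cHP = max(cHP - rHP * dt, 0.0)
        t += dt
    return c1, c2, cHP

# Illustrative run: reactive and refractory phenolic fractions, 1.34 M H2O2,
# 10 g/dm^3 catalyst, placeholder rate constants.
print(simulate_cwpo(0.005, 0.01, 1.34, 10.0, 0.01, 0.001, 0.005, t_end=60.0))
```

The fast initial drop of c_TPh,1 followed by the slow decay of c_TPh,2 reproduces the two reaction phases described in the text.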
The estimation of the kinetic parameters was carried out by nonlinear regression analysis using the simulation and parameter estimation software MODEST.41 The objective function, the sum of squared differences between y_exp and y_est, was minimized with the hybrid Simplex−Levenberg−Marquardt method, where y_exp represents the experimental data and y_est the estimated values, i.e., the concentrations. In the first iteration, the polyphenol oxidation reaction orders in the polyphenol and hydrogen peroxide concentrations were identified in a run with all of the kinetic parameters set as floating. In most cases, the polyphenol oxidation reaction orders with respect to phenol (n_1), hydrogen peroxide (n_2), and catalyst concentration (n_3) were close to 1, whereas the order of the hydrogen peroxide decomposition reaction with respect to hydrogen peroxide (n_4) was close to 2. The second iteration of modeling was performed with fixed reaction orders, and the results are shown in Table 5 and Figures 11 and 13. Taking into account the complexity of the reaction mixture and the limitations of the analytical methods, the obtained results show good agreement between the experiments and the proposed kinetic model for both hydrogen peroxide decomposition and polyphenol oxidation. In general, the fit is better for hydrogen peroxide decomposition, whereas the worst agreement for polyphenol oxidation was obtained for the reactions in which the initial concentration of hydrogen peroxide was the lowest, indicating that for these reactions one of the kinetic model assumptions does not hold. The activation energies for hydrogen peroxide decomposition and polyphenol oxidation over the pelleted Cu/13X-K1273-1 catalyst were determined from the temperature dependencies of the calculated rate constants, described by a modified Arrhenius equation in which k_av is the rate constant at the average temperature of the experiments, T_av.
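The modified (temperature-centered) Arrhenius form can be written out as follows. The activation energy used in the example is the value reported in the text for the reactive polyphenol fraction; T_av is an assumed mean of the experimental temperature range:

```python
import math

R_GAS = 8.314  # J mol^-1 K^-1

def k_modified_arrhenius(k_av, Ea, T, T_av):
    """Centered Arrhenius form: k(T) = k_av * exp(-(Ea/R) * (1/T - 1/T_av)).
    Centering at T_av reduces the correlation between k_av and Ea
    during nonlinear regression."""
    return k_av * math.exp(-(Ea / R_GAS) * (1.0 / T - 1.0 / T_av))

# Illustrative: with Ea = 62.2 kJ/mol (reactive polyphenol fraction, from the
# text) and an assumed T_av = 338 K, raising the temperature from 333 to 353 K
# increases the rate constant roughly 3.6-fold (independent of T_av).
ratio = (k_modified_arrhenius(1.0, 62.2e3, 353.0, 338.0)
         / k_modified_arrhenius(1.0, 62.2e3, 333.0, 338.0))
print(f"k(353 K)/k(333 K) = {ratio:.2f}")
```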
The obtained activation energies for hydrogen peroxide decomposition and polyphenol oxidation, E_a,HP = 30.1 kJ mol−1, E_a,TPh,1 = 62.2 kJ mol−1, and E_a,TPh,2 = 90.1 kJ mol−1, are in the range of values reported for similar model reaction systems using powdered catalysts,9,14,29,39 i.e., 45−140 kJ mol−1.
3.6. Catalyst Testing in Olive Oil Mill Wastewater Treatment. The catalyst performance was finally tested in a prolonged reaction over 10 h to determine whether the catalyst deactivates with prolonged use. From the results presented in Figure 14, it can be seen that the oxidation of the organic content continues after the phenolics are eliminated, demonstrating the ability of the catalyst to enhance the peroxidation not only of phenols but also of other organic compounds present in the olive oil mill wastewater, resulting in complete conversion of total phenols and 52% conversion of TOC. The Cu/13X-K1273-1 catalyst preserved its stability even after 10 h of reaction, when only 3.3 wt % of copper leached from the catalyst. By a comparison of the fresh and spent catalysts, it was observed that the catalyst maintained its initial surface area and pore volume (S_FRESH = 26 m2 g−1 and V_p,FRESH = 0.03 cm3 g−1 versus S_SPENT-10h = 25 m2 g−1 and V_p,SPENT-10h = 0.03 cm3 g−1). Although the loss of 3.3 wt % of the initial copper content from the catalyst is not negligible, this result is very encouraging when compared to similar catalytic systems described in the literature. The extent of leaching of the Cu/13X-K1273 pelletized catalyst is generally lower than that of other zeolite or zeolite-based catalysts9,13,42 and is comparable to the leaching of copper-containing pillared clay catalysts according to the work of Inchaurrondo et al.
However, most of these results are connected to CWPO studies of model or often highly diluted wastewaters, which should be taken into account when comparing them with the results of this study, in which a real industrial effluent was used, because the extent of metal leaching from the catalyst depends not only on the catalyst type or support material but also, very strongly, on the reaction conditions such as temperature, pH, and the particular organic compound species and their concentrations.
CONCLUSIONS
Postsynthesis thermal treatment was beneficial for the catalytic behavior of the copper-containing 13X zeolite in the catalytic wet peroxide oxidation of OOMW, as it resulted in an increased stability against leaching and allowed better removal of total phenols and TOC, owing to the presence of pore diffusion limitations for polyphenols in the noncalcined Cu/13X-1 catalyst. The results of the mass transfer and diffusion investigation for the calcined Cu/13X-K1273-1 catalyst excluded the influence of both external and internal mass transfer limitations. It was found that the rates of phenol oxidation and hydrogen peroxide decomposition increased with increasing stirrer speed, catalyst loading, initial hydrogen peroxide concentration, and reaction temperature and with decreasing catalyst bead size. A kinetic analysis of the catalytic system was performed, the reaction orders in the reactants and in the catalyst were identified, and the parameters of the proposed kinetic model and the activation energies were determined. By treating the industrial olive oil mill wastewater in a catalytic wet peroxide oxidation process over a thermally stabilized copper-containing zeolite-based catalyst under mild reaction conditions (353 K and atmospheric pressure), it is possible to achieve complete conversion of total phenols and over 50% conversion of TOC while substantially minimizing copper leaching.
Significance of Aurora B overexpression in hepatocellular carcinoma.
Background: To investigate the significance of Aurora B expression in hepatocellular carcinoma (HCC).
Methods: The Aurora B and Aurora A mRNA levels were measured in 160 HCCs and the paired nontumorous liver tissues by reverse transcription-polymerase chain reaction. Mutations of the p53 and β-catenin genes were analyzed in 134 and 150 tumors, respectively, by direct sequencing of exon 2 to exon 11 of p53 and exon 3 of β-catenin. The anticancer effects of AZD1152-HQPA, an Aurora B kinase selective inhibitor, were examined in the Huh-7 and Hep3B cell lines.
Results: Aurora B was overexpressed in 98 (61%) of 160 HCCs and in all 7 HCC cell lines examined. The overexpression of Aurora B was associated with Aurora A overexpression (P = 0.0003) and p53 mutation (P = 0.002) and was inversely associated with β-catenin mutation (P = 0.002). Aurora B overexpression correlated with worse clinicopathologic characteristics. Multivariate analysis confirmed that Aurora B overexpression was an independent poor prognostic factor, despite its interaction with Aurora A overexpression and mutations of p53 and β-catenin. In Huh-7 and Hep3B cells, AZD1152-HQPA induced proliferation blockade, histone H3 (Ser10) dephosphorylation, cell cycle disturbance, and apoptosis.
Conclusion: Aurora B overexpression is an independent molecular marker predicting tumor invasiveness and poor prognosis of HCC. Aurora B kinase selective inhibitors are potential therapeutic agents for HCC treatment.
Background
Hepatocellular carcinoma (HCC) is the leading cause of cancer mortality in Taiwan [1] and many other countries in Asia and Africa [2]. The incidence of HCC is increasing in Europe and the United States [3]. In 2002, HCC became the sixth most common cancer worldwide, with 626,000 annual new cases [4].
Despite surgical resection, which provides an opportunity for cure, the majority of patients with HCC have a dismal prognosis [5] because tumor recurrence frequently develops and usually leads to patient mortality [6]. The development of HCC is closely related to chronic hepatitis B or C, cirrhosis of any etiology, and aflatoxin B1 exposure [2]. However, the detailed molecular mechanisms of hepatocarcinogenesis are still not fully understood [7]; molecular factors capable of predicting the clinical outcome of HCC and acting as potential therapeutic targets remain limited. The identification of molecular markers related to hepatocarcinogenesis, tumor progression, and poor clinical outcome would benefit patients, providing for better management planning and serving as potential therapeutic targets for novel HCC drug treatments. Genomic instability has been correlated with hepatocarcinogenesis [8], and increased chromosomal instability has been associated with the differentiation status of human HCC [9]. Aurora kinases, a subfamily of serine/threonine mitotic kinases, are thought to be key molecules required for maintaining accurate cell cycling and genomic stability [10]. We previously showed that Aurora A was overexpressed in 137 (61%) of 224 human HCCs and that the overexpression of Aurora A was associated with aggressive tumor characteristics and poor prognosis of patients [11]. Furthermore, we demonstrated that VE-465, a novel pan-Aurora kinase inhibitor, had anticancer effects in preclinical models of human HCC [12]. These findings indicated that Aurora kinases may be important biomarkers and potential therapeutic targets in HCC. There are three highly related Aurora kinases in mammals: Aurora A, B, and C. Aurora A and Aurora B share a high degree of sequence homology in their catalytic domains, and overexpression of each has been identified in many human cancers [13].
Despite their sequence similarity, Aurora A and Aurora B differ in chromosomal gene loci, subcellular localization, cellular functions, and signaling substrates [13]. The Aurora A kinase gene is localized to chromosome 20q13.2, and that for Aurora B kinase is localized to chromosome 17p13.1. Aurora A kinase protein is localized in the centrosome and spindle poles and plays important roles in centrosome maturation and spindle assembly [14]. Aurora B kinase, a chromosome passenger protein localized in the centromeres during early mitosis and then at the spindle midzone after anaphase, is essential for chromosome biorientation, function of the spindle assembly checkpoint, and cytokinesis [15]. The enthusiasm for exploring Aurora kinases as anticancer therapeutic targets initially centered on Aurora A, but recent studies have demonstrated that several Aurora kinase inhibitors exhibit anticancer activity resembling that of Aurora B disruption induced by genetic methods [16]. Therefore, determination of the distinctive roles of Aurora A and Aurora B in carcinogenesis and of their individual clinical significance is mandatory. The aims of this study were to elucidate the clinicopathologic significance of Aurora B expression and Aurora A expression in HCC and to correlate their expression with p53 and β-catenin mutations, the two most frequently mutated genes in HCC [7,11].
Tissue samples
During the period January 1987 through December 1997, 160 surgically resected, primary unifocal HCCs were selected for this study. After resection, tumor tissues were immediately cut into small pieces, snap frozen in liquid nitrogen, and stored in a deep freezer. Patients had received comprehensive pathologic assessment and regular follow-up at National Taiwan University Hospital, as described previously [17,18]. This study was compliant with the regulations of the Ethics Committee of the host institution.
The 160 patients included 122 men and 38 women with a mean age of 57 years (range, 14-88 years). Serum hepatitis B surface antigen (HBsAg) was detected in 107 cases (67%) and antihepatitis C virus antibody in 53 (35%), including 13 positive for both. Elevated α-fetoprotein (AFP; ≥200 ng/mL) was detected in 80 cases (50%). Liver cirrhosis was found in 61 patients (38%). All patients had adequate liver function reserve at the time of surgery. None of the patients had received local or systemic therapy before surgery.
Histological study and tumor staging
Tumor grade was divided into 3 groups: well-differentiated (grade I, 31 cases), moderately differentiated (grade II, 74 cases), and poorly differentiated (grade III-IV, 55 cases). The unifocal HCC was staged as stages I, II, IIIA, IIIB, and IV, as described previously [11,19,20]. Staging was based on the International Union Against Cancer criteria, with slight modification because HCC tends to spread in the liver via vascular invasion, which is an important unfavorable prognostic factor for this disease [21]. Stage I HCC included tumors that were ≤ 2 cm and showed no evidence of liver and vascular invasion (4 cases). Stage II HCCs included tumors that were ≤ 2 cm for which vascular invasion was limited to small vessels in the tumor capsule, as well as encapsulated tumors > 2 cm with no evidence of liver or vascular invasion (62 cases). Stage IIIA HCCs included invasive tumors > 2 cm with invasion of small vessels in the tumor capsule and/or satellites near the tumor, but no portal vein invasion (25 cases). Stage IIIB HCCs included tumors with invasion of the portal vein branch near the tumor, but not of the distant portal vein in the liver parenchyma (23 cases). Stage IV included tumors with involvement of major portal vein branches, satellites extending deeply into the surrounding liver, tumor rupture, or invasion of the surrounding organs (46 cases).
No evidence of regional lymph node or distant metastasis was noted at the time of surgery in any of the cases. Among the 160 patients studied, 149 were eligible for the evaluation of early tumor recurrence (ETR; ≤12 months). Eleven patients who died within 1 year after surgery without objective evidence of tumor recurrence were excluded from the evaluation of ETR.
Reverse transcription-polymerase chain reaction
Reverse transcription-polymerase chain reaction (RT-PCR) was used to determine the mRNA levels of Aurora A and Aurora B in paired HCCs and nontumorous liver samples, as described elsewhere [11,22]. The ribosomal protein S26 mRNA, a housekeeping gene, was used as an internal control [23]. Briefly, total RNA was isolated from the frozen tissues using a guanidium isothiocyanate/CsCl method. RNA was quantified by spectrophotometry at 260 nm. Stock RNA samples were kept in alcohol in a deep freezer until used. Complementary DNA (cDNA) was prepared from the total RNA of paired HCCs and nontumorous liver samples. Two microliters of reverse transcription product, 1.25 units of Pro Taq polymerase (Protech Technology Enterprise, Taipei, Taiwan), Pro Taq buffer, and 200 μM each of dATP, dCTP, dGTP, and dTTP were mixed with primer pairs for Aurora A, Aurora B, and S26 in a total volume of 30 μl. One-tube PCR reaction was stopped at the exponential phase of gene amplification: 29 cycles for Aurora A, 32 for Aurora B, and 23 for S26. The reaction was performed in an automatic DNA thermal cycler (model 480; Perkin-Elmer Co., Wellesley, MA), with limited reaction reagents (Taq enzyme and dNTPs), and processed with initial heating at 94°C for 2 minutes, followed by 29 (Aurora A) or 32 (Aurora B) PCR reaction cycles of 94°C for 30 seconds, annealing at 60°C for 1 minute, extension at 72°C for 1 minute, and a final 72°C extension for 10 minutes.
The PCR reaction was stopped at cycle 7 (Aurora A) or 10 (Aurora B), and the reaction tubes were quenched on ice to allow addition of S26 primers before completing the final 23 PCR cycles. Primers for the amplified genes were as follows: The PCR products were electrophoresed on a 2% agarose gel. Concentrations of the PCR fragments were determined with the IS-1000 digital imaging system (Alpha Innotech, San Leandro, CA). The Aurora A and Aurora B mRNA levels were determined from the ratio of signal intensity for Aurora A or B to that of S26, as measured by 1D Image Analysis software (Kodak Digital Science, Rochester, NY), and scored as high (ratio > 1.0), moderate (ratio > 0.5 and ≤ 1.0), or low (ratio ≤ 0.5). The Aurora A and Aurora B mRNA levels of nontumorous liver rarely exceeded a ratio of 0.5, and a ratio > 0.5 was therefore regarded as overexpression.

Analysis of p53 and β-catenin mutations

Mutations of the p53 and β-catenin genes were analyzed in 134 and 150 tumors, respectively, by direct sequencing of exon 2 to exon 11 of p53 and exon 3 of β-catenin, as described previously [17,24,25]. Samples with incomplete study results were excluded from statistical analysis.

Follow-up observation and assessment of early tumor recurrence

Early tumor recurrence (ETR) was defined as intrahepatic tumor recurrence or distant metastasis detected by imaging, pathology, and/or high AFP levels within 12 months. All 160 patients had been followed for more than 5 years or until death. At the end of follow-up in November 2008, 37 patients remained alive. One hundred forty-nine cases (93%) were eligible for evaluation of ETR.

Cell viability and flow cytometry

A total of 5 × 10⁴ Huh-7 or Hep3B cells were plated in six-well plates. After overnight culture, cells were treated with DMSO or 1, 5, 25, or 125 nM AZD1152-HQPA. After 72 hours of drug treatment, cells were trypsinized and the total number of cells was counted using a hemocytometer.
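The ratio-based mRNA scoring rule described above is a simple threshold classification. A minimal sketch (the function and variable names are ours, not from the paper):

```python
def score_mrna_level(aurora_signal: float, s26_signal: float) -> str:
    """Score an Aurora A/B band intensity against the S26 internal control.

    Thresholds follow the text: ratio > 1.0 -> "high",
    0.5 < ratio <= 1.0 -> "moderate", ratio <= 0.5 -> "low".
    """
    ratio = aurora_signal / s26_signal
    if ratio > 1.0:
        return "high"
    if ratio > 0.5:
        return "moderate"
    return "low"


def is_overexpressed(aurora_signal: float, s26_signal: float) -> bool:
    """Per the text, a ratio > 0.5 is regarded as overexpression."""
    return aurora_signal / s26_signal > 0.5
```

Note that "moderate" and "high" both count as overexpression under this rule, which is why nontumorous liver (rarely above 0.5) serves as the reference.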
Trypan blue dye exclusion assay was used to determine the number of viable cells. The experiments were carried out in 3-4 replicates and repeated three times. Cells in logarithmic growth were incubated with either AZD1152-HQPA or DMSO for 24 to 48 hours. Cells were labeled with 0.5-1 mL of propidium iodide (50 μg/mL) after being trypsinized and fixed in 70% methanol overnight. Cell cycle profiles and sub-G1 fractions were determined using a FACSCalibur (Becton Dickinson, San Jose, CA, USA).

Statistical analysis

Data analyses were carried out with Statistical Analysis System software (version 9.1; SAS Institute, Inc., Cary, NC). Two-tailed P < 0.05 was considered statistically significant. The χ² test, Fisher's exact test, and log-rank test were used for univariate analyses. Multivariate analyses were conducted for ETR, tumor size, stage, and grade by fitting multiple logistic regression models [27]. Time to death was analyzed by fitting multiple Cox proportional hazards models [28]. In our regression analyses, basic model-fitting techniques for (a) variable selection, (b) goodness-of-fit assessment, and (c) regression diagnostics (including residual analysis, influence analysis, and checks for multicollinearity) were used to ensure the quality of the analysis results [27,28]. For the in vitro studies, mean differences among groups were tested by one-way analysis of variance (ANOVA) followed by multiple comparisons using Dunnett's post hoc test or the Bonferroni correction of the alpha level.

Results

Expression of Aurora B mRNA and protein in liver and hepatocellular carcinoma

Using RT-PCR in the linear range, Aurora B mRNA overexpression was detected in 98 (61%) of 160 surgically resected, primary unifocal HCC specimens (Fig. 1A). Of these 160 HCCs, RNA samples of nontumorous liver were examined in 153 cases. In nontumorous liver, overexpression of Aurora B mRNA at a moderate level was detected in 2 cases (1.3%).
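The univariate χ² and Fisher's exact analyses described in the statistical methods above can be sketched with SciPy. The 2×2 counts here are hypothetical and illustrative only, not taken from Table 1:

```python
from scipy.stats import chi2_contingency, fisher_exact

# Hypothetical 2x2 table: rows = Aurora B overexpression (yes/no),
# columns = ETR (yes/no); counts are illustrative only.
table = [[60, 38],
         [20, 42]]

# chi2_contingency applies Yates' continuity correction for 2x2 tables by default.
chi2, p_chi2, dof, expected = chi2_contingency(table)

# fisher_exact returns the sample odds ratio (a*d)/(b*c) and an exact p-value.
odds_ratio, p_fisher = fisher_exact(table)

print(f"chi2={chi2:.2f}, p={p_chi2:.4f}, OR={odds_ratio:.2f}")
```

Fisher's exact test is preferred when expected cell counts are small; for tables of this size the two tests usually agree.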
We then examined Aurora B gene expression in cell lines; all 7 liver cancer cell lines showed high expression levels of Aurora B mRNA, which correlated with protein levels (Fig. 1B).

Clinicopathologic significance of Aurora B mRNA overexpression in hepatocellular carcinoma

To elucidate the biologic significance of Aurora B in HCC, we correlated Aurora B expression with major clinicopathologic features of HCC. As shown in Table 1, Aurora B overexpression was associated with high serum AFP level (≥200 ng/mL; P < 0.0001), but not with age, gender, chronic hepatitis B/C virus infection, or liver functional reserve (Child-Pugh class). Histologically, Aurora B overexpression did not correlate with the presence of liver cirrhosis. Nevertheless, HCCs with Aurora B overexpression were associated with large tumor size (> 5 cm; P = 0.021), high-grade histology (P = 0.0007), and advanced tumor stage (P < 0.0001). The genes for p53, β-catenin, and Aurora A are among the most frequently deregulated in HCC and are closely associated with HCC progression [7,11]. Therefore, the relations of Aurora B overexpression with mutations of p53 and β-catenin and with Aurora A overexpression were analyzed. Table 1 shows that Aurora B overexpression correlated with Aurora A overexpression (P = 0.0003) and p53 mutation (P = 0.002). In contrast, Aurora B was more frequently overexpressed in HCCs without β-catenin mutation (P = 0.002).

Aurora B overexpression predicts early tumor recurrence and poor prognosis

HCC with Aurora B overexpression was associated with worse 5-year survival than HCC without Aurora B overexpression (P < 0.0001; Table 1 and Fig. 2A). Moreover, HCC with Aurora B overexpression showed more frequent ETR (P < 0.0001; Table 1), the most crucial clinical event associated with poor prognosis of HCC after hepatectomy [6,19].
As listed in Table 2, multivariate analysis by multiple logistic regression showed that Aurora B overexpression (odds ratio [OR], 4.679; P = 0.0011), tumor size (OR, 3.735; P = 0.0031), tumor stage (OR, 3.611; P = 0.0073), and age ≤55 years (OR, 1.043; P = 0.0245) were significant independent risk factors for the occurrence of ETR. A conditional effect plot of age and Aurora B overexpression on ETR was drawn based on the multiple logistic regression model with fixed tumor size and stage (Fig. 2B). The probability of ETR was significantly higher in patients with HCC showing Aurora B overexpression. Furthermore, ETR (OR, 29.181; P < 0.0001), tumor grade (OR, 1.516; P = 0.0041), and tumor size (OR, 1.072; P = 0.0048) were significant independent risk factors associated with poor patient survival (Table 2). Principally, we found that Aurora B overexpression was an independent risk factor associated with high-stage tumor (OR, 7.439; P = 0.0003; Table 2) and ETR (OR, 4.679; P = 0.0011), hence contributing to poor patient survival. Nevertheless, Aurora B overexpression did not exert prognostic effects on tumor size or tumor grade (Table 2).

Interaction of Aurora B overexpression with Aurora A overexpression and mutations of p53 and β-catenin in hepatocellular carcinoma

Because both Aurora A and Aurora B correlate closely with unfavorable prognosis of HCC and may be potential therapeutic targets [11,12], we analyzed the possible interplay between these two important biomarkers. In this study, Aurora A overexpression, which was found in 100 (63%) of 160 HCCs examined, significantly correlated with Aurora B overexpression (P = 0.0003; Table 1).
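A conditional effect plot like the one described above is computed from the fitted logistic model: the log of each reported OR is the model coefficient for that binary factor. A minimal sketch (the intercept value and the covariate patterns are hypothetical assumptions; only the ORs come from the text):

```python
import math

# ln(OR) gives the logistic regression coefficient for each binary factor.
B_AURORA_B = math.log(4.679)   # Aurora B overexpression
B_SIZE = math.log(3.735)       # tumor size > 5 cm
B_STAGE = math.log(3.611)      # high tumor stage
B_AGE = math.log(1.043)        # age <= 55 years
B0 = -2.0                      # hypothetical intercept (not reported here)


def etr_probability(aurora_b: int, size: int, stage: int, age: int) -> float:
    """Predicted ETR probability for a covariate pattern (each argument 0 or 1)."""
    logit = B0 + B_AURORA_B * aurora_b + B_SIZE * size + B_STAGE * stage + B_AGE * age
    return 1.0 / (1.0 + math.exp(-logit))


# With tumor size and stage fixed, Aurora B overexpression raises the ETR probability.
p_over = etr_probability(1, 1, 1, 0)
p_not = etr_probability(0, 1, 1, 0)
```

By construction, the odds ratio between the two fixed covariate patterns recovers the reported OR of 4.679 regardless of the intercept chosen.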
Moreover, as shown in Table 3, HCC with overexpression of both Aurora A and Aurora B showed the highest occurrence of high serum AFP level (≥200 ng/mL; 71%), large tumor size (> 5 cm; 72%), grade II to IV tumor (94%), stage IIIA to IV tumor (82%), p53 mutation (64%), and wild-type β-catenin (92%), as well as the worst 5-year survival rate (19%), compared with the other groups. Because Aurora B overexpression was correlated with Aurora A overexpression (P = 0.0003), p53 mutation (P = 0.002), and infrequent β-catenin mutation (P = 0.002) in this study (Table 1), we then analyzed the prognostic value of Aurora B overexpression for patient survival in relation to Aurora A overexpression and p53/β-catenin mutations. We showed that HCC with Aurora B overexpression was associated with worse 5-year survival regardless of Aurora A expression status (P = 0.013 for HCC without Aurora A overexpression and P = 0.001 for HCC with Aurora A overexpression; Fig. 3A), p53 mutation status (P = 0.016 in wild-type p53 HCC and P = 0.123 in p53-mutated HCC; Fig. 3B), and β-catenin mutation status (P = 0.329 in β-catenin-mutated HCC and P < 0.001 in wild-type β-catenin HCC; Fig. 3C).

Anticancer effects of the Aurora B kinase selective inhibitor AZD1152-HQPA in HCC cells

The association of Aurora B overexpression with tumor invasiveness of HCC prompted us to explore the effects of Aurora B kinase inhibition on HCC cell viability. Huh-7 and Hep3B cells were treated with increasing concentrations of an Aurora B selective small-molecule inhibitor, AZD1152-HQPA, for 72 hours. Concentration-dependent inhibition of cell viability was observed in both cell lines (Fig. 4A). The ratios of viable Huh-7 and Hep3B cells consistently decreased with higher concentrations of AZD1152-HQPA. The 50% inhibitory concentrations for cell viability (IC50) at 72 hours were 16.72 ± 2.44 nM and 4.79 ± 1.03 nM for Huh-7 and Hep3B, respectively (Fig. 4A).
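IC50 values like those above are typically obtained by fitting a sigmoidal dose-response curve to the viability data. A minimal sketch with SciPy (the "observed" responses are synthetic, generated from a hypothetical Huh-7-like curve, not measured data):

```python
import numpy as np
from scipy.optimize import curve_fit


def viability_pct(conc, ic50, hill):
    """Two-parameter logistic dose-response: 100% viability at zero drug,
    0% at saturation (a simplifying assumption)."""
    return 100.0 / (1.0 + (conc / ic50) ** hill)


conc = np.array([1.0, 5.0, 25.0, 125.0])   # nM, the doses used in the assays
obs = viability_pct(conc, 16.7, 1.2)        # synthetic "observed" data

# Bounded fit keeps IC50 and Hill slope positive during optimization.
(ic50_fit, hill_fit), _ = curve_fit(
    viability_pct, conc, obs, p0=[10.0, 1.0],
    bounds=([0.1, 0.1], [1000.0, 10.0]),
)
print(f"IC50 ~ {ic50_fit:.1f} nM")
```

In practice a four-parameter logistic (fitting the top and bottom plateaus as well) is more robust when viability does not reach 0% at the highest dose.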
Aurora A autophosphorylation at T288 [29] and histone H3 phosphorylation at Ser10 [12] represent the activity of Aurora A and Aurora B, respectively. As shown in Fig. 4B, AZD1152-HQPA induced dephosphorylation of histone H3 (Ser10) in a concentration-dependent manner, while the phosphorylation level of Aurora A (T288) did not change. These data suggest that AZD1152-HQPA exerts its anticancer effects in HCC cells through the inhibition of Aurora B. Because Aurora kinase inhibitors have been shown to induce cell death after the cell cycle has been disturbed [10], we investigated the effects of AZD1152-HQPA on HCC cell cycle progression and apoptosis. As shown in Fig. 5A, AZD1152-HQPA treatment resulted in accumulation of Hep3B cells with 4N DNA content at 24 hours, followed by the appearance of cells with 8N DNA content at 48 hours. Our data demonstrated that AZD1152-HQPA induced cell cycle disturbance in a concentration-dependent manner (Fig. 5A). We also examined the ability of AZD1152-HQPA to induce apoptosis in HCC cells. As shown in Fig. 5B, AZD1152-HQPA induced concentration-dependent apoptosis in both Huh-7 and Hep3B cells. After 48 hours of treatment with AZD1152-HQPA above 25 nM, the sub-G1 fractions of Huh-7 and Hep3B cells increased significantly (P < 0.01). As shown in Fig. 5B, AZD1152-HQPA induced apoptosis more efficiently in Hep3B cells, in accordance with its antiproliferative effects.

Discussion

In mammals, there are three highly related Aurora kinases: Aurora A, B, and C. These three closely related kinases share a high degree of sequence homology in their catalytic domains [30]. Despite the sequence homology and common association with mitotic regulatory events, their subcellular localization and signaling substrates differ, and hence the functions of Aurora A and Aurora B are essentially distinct [13].
We have reported that Aurora A is highly expressed in HCC and that its overexpression is closely associated with aggressive tumor phenotypes and worse patient prognosis [11], but the clinicopathologic significance of Aurora B in HCC progression remains to be clarified. In this study, we demonstrated that overexpression of Aurora A and Aurora B was detected in 63% and 61% of 160 surgically resected, primary unifocal HCCs, respectively. Importantly, Aurora B mRNA expression correlated with major clinicopathologic parameters related to tumor progression by univariate analyses, including high AFP level (P < 0.0001), large tumor size (P = 0.021), higher tumor grade (P = 0.0007), and higher tumor stage (P < 0.0001). By multivariate analyses, we showed that Aurora B overexpression was associated with high-stage (stages IIIA, IIIB, and IV) HCC, which exhibits vascular invasion and various extents of microscopic intrahepatic spread (OR, 7.439; P = 0.0003). These findings suggest that overexpression of Aurora B is associated with tumor invasion and intrahepatic metastasis of HCC, as has been shown for Aurora A [11]. Although the diagnosis and management of HCC have progressed significantly, the prognosis for patients receiving surgical treatment remains poor because of the high rate of ETR [6,31]. Hence, the identification of molecular factors predicting ETR is of particular importance. The multivariate analysis confirmed that Aurora B overexpression was an independent risk factor associated with ETR (OR, 4.679; P = 0.0011; Table 2). These findings are consistent with the correlation of Aurora B overexpression with poor tumor differentiation and worse patient survival in thyroid [32], prostate [33], and hepatobiliary cancers [34,35]. Taken together, our findings suggest that Aurora B overexpression serves as a useful marker predicting ETR and hence poor prognosis.
In the present study, we showed that the expression of Aurora B and Aurora A was closely correlated (P = 0.0003; Table 1) and that the two kinases exhibited an interaction contributing to HCC progression. HCC with overexpression of both kinases exhibited the highest rates of high AFP level (71%), vascular invasion (stage IIIA-IV; 82%), and ETR (74%), 4-fold higher than in HCC without overexpression of either kinase (15%, 15%, and 12%, respectively). Consistently, HCC with overexpression of both kinases showed the lowest 5-year survival (19%), approximately one-third of that of HCC without overexpression of either kinase (56%). Our findings suggest that Aurora A and Aurora B contribute cooperatively to a more malignant HCC phenotype, ETR, and poor prognosis. HCC has been classified into two major groups according to chromosomal stability status [36]; tumors characterized by chromosomal instability were associated with more p53 mutation and less β-catenin mutation, the two major genetic mutations in human HCC [17,24,25]. Mutation of p53 correlates with aggressive HCC and poor prognosis [24,25], whereas β-catenin mutation is associated with less tumor aggression and a more favorable prognosis [17]. We have also shown that Aurora A overexpression correlates positively with p53 mutation and inversely with β-catenin mutation [11]. In the present study, we showed that Aurora B overexpression positively correlated with p53 mutation (P = 0.002) and inversely with β-catenin mutation (P = 0.002). Despite the association with these important molecular factors, Aurora B overexpression predicted worse 5-year survival regardless of Aurora A expression status, p53 mutation, or β-catenin mutation (Fig. 3). Hence, we suggest that Aurora B overexpression, independent of Aurora A overexpression and p53/β-catenin mutations, is an important molecular factor associated with vascular invasion, leading to high-stage tumors, ETR, and poor prognosis for patients with surgically resected HCC.
Since the discovery of Aurora kinases, Aurora A has attracted much attention as an appealing therapeutic target because of its oncogenic potential [37] and the frequent overexpression of Aurora A in a variety of human cancers [38]. However, subsequent pharmacologic studies demonstrated that dual Aurora A and Aurora B kinase inhibitors produced biologic responses equivalent to Aurora B disruption alone [10], suggesting that Aurora B is a critical therapeutic target in cancer. We previously reported that a novel dual Aurora A and Aurora B kinase inhibitor, VE-465, had anticancer effects in human HCC [12]. Hence, determining whether Aurora A or Aurora B is the pertinent therapeutic target for HCC is imperative. In the present study, we first showed that Aurora B overexpression was associated with major clinical (high AFP, ETR) and histopathologic (large tumor, higher tumor grade, and higher tumor stage) features that are critical for tumor progression of HCC, and hence is an independent risk factor for poor prognosis of patients with surgically resected HCC. In addition, we showed that AZD1152-HQPA, an Aurora B selective inhibitor, has anticancer effects in HCC cells. AZD1152-HQPA treatment resulted in profound inhibition of Aurora B signaling, which in turn led to cell cycle disturbance, apoptosis, and growth suppression in HCC cells. Our results suggest that Aurora B selective inhibitors are potential drugs for HCC treatment, supporting the observation that AZD1152 is a promising novel therapeutic approach for HCC [39]. Nevertheless, whether targeting Aurora B kinase alone is a better therapeutic strategy than targeting both Aurora A and Aurora B kinases will require further exploration.
Conclusion

In this study, we showed frequent overexpression of Aurora B in HCC, which was closely associated with aggressive tumor phenotypes. Aurora B overexpression, independent of Aurora A overexpression and p53/β-catenin mutations, is an important molecular marker associated with early recurrence and poor prognosis. In addition, an Aurora B kinase selective inhibitor, AZD1152-HQPA, had anticancer effects in HCC cells. These findings indicate the importance of Aurora B kinase in HCC progression and its potential as a therapeutic target for HCC.

Abbreviations

HCC: hepatocellular carcinoma; AFP: α-fetoprotein; ETR: early tumor recurrence; RT-PCR: reverse transcription-polymerase chain reaction; OR: odds ratio.

Authors' contributions

ZZL made substantial contributions to the conception, experimental design, data analysis, and manuscript writing of this study. YMJ conceived and designed the study. FCH was responsible for the statistical analysis. HWP participated in the experimental design. HWT performed the RT-PCR. PLL performed the analysis of p53 and β-catenin mutations. PHL supplied tissue samples and collected clinical data. ALC and HCH participated in the conception and design of the study, guided the data analysis and manuscript preparation, and reviewed the manuscript. All authors read and approved the final manuscript.
Chitosan Oligosaccharides Attenuate Amyloid Formation of hIAPP and Protect Pancreatic β-Cells from Cytotoxicity

The deposition of aggregated human islet amyloid polypeptide (hIAPP) in the pancreas, which has been associated with β-cell dysfunction, is one of the common pathological features of patients with type 2 diabetes (T2D). Therefore, inhibitors of hIAPP aggregation hold promise as a therapeutic strategy for T2D. Chitosan oligosaccharides (COS) have been reported to exhibit a potential antidiabetic effect, but the effect of COS on hIAPP amyloid formation remains elusive. Here, we show by thioflavin T fluorescence assay, circular dichroism spectroscopy, and transmission electron microscopy that COS inhibited the aggregation of hIAPP and disassembled preformed hIAPP fibrils in a dose-dependent manner. Furthermore, COS protected mouse β-cells from the cytotoxicity of amyloidogenic hIAPP, as well as from apoptosis and cell cycle arrest. No direct binding of COS to hIAPP was detected by surface plasmon resonance analysis. In addition, neither chitin oligosaccharides, the acetylated counterpart of COS, nor the monosaccharides N-acetyl-glucosamine and glucosamine had an inhibitory effect on hIAPP amyloid formation. Mechanistically, it is presumed that COS modulate hIAPP amyloid formation in a manner related to their positive charge and degree of polymerization. These findings highlight the potential role of COS as inhibitors of hIAPP amyloid formation and provide new insight into the mechanism of COS against diabetes.

Introduction

Type 2 diabetes (T2D), also known as non-insulin-dependent diabetes, is a widespread chronic disease characterized by insulin resistance, progressive loss of pancreatic β-cell function and mass, impaired insulin release, and hyperglycemia [1,2]. T2D is also an age-related disease prevalent in adults over 40 years old, and accounts for 90-95% of the total number of diabetic patients [3].
A variety of factors, including glycolipid toxicity, inflammation, and cholesterol accumulation, have been reported to correlate with β-cell dysfunction and the occurrence of T2D [4]. It has also been suggested that the accumulation of aggregated islet amyloid polypeptide in the islets of Langerhans plays a critical role in pancreatic damage. Human islet amyloid polypeptide (hIAPP), also known as amylin, is co-secreted with insulin by β-cells in the pancreas. Mature hIAPP contains 37 amino acids and is one of the most aggregation-prone peptides. It has been reported that aggregated hIAPP forms amyloid deposits in 70-90% of patients with T2D [5]. The amyloidogenic process of hIAPP contributes to diabetes in two ways. First, small assemblies (usually called oligomers) of hIAPP exert direct cytotoxicity on β-cells [6]. Second, invasive amyloid deposits show a strong inverse correlation with β-cell area [7,8]. Thus, it is necessary to develop potential inhibitors to prevent early aggregation of hIAPP and/or depolymerize its amyloid deposits in order to avoid irreparable damage to β-cells. Chitosan is a product of partial deacetylation of chitin, which exists commonly in the exoskeletons of arthropods and insects and in the cell walls of fungi. Chitosan oligosaccharides (COS), characterized as linear polymers of β-(1→4)-linked D-glucosamine (GlcN) and N-acetyl-D-glucosamine (GlcNAc) residues with a degree of polymerization (DP) of less than 20, are derived from the hydrolysis of chitosan via physical, chemical, or enzymatic methods [9]. Compared with chitosan, COS have greater bioavailability owing to their lower molecular weight, higher solubility, and lower viscosity. To date, COS are the only positively charged oligosaccharides found in nature [10]. It has been demonstrated that COS possess diverse pharmacological activities and a broad range of applications [11].
Among them, the anti-diabetic bioactivity of COS has been extensively investigated using various types of diabetic models [12]. COS have been shown to ameliorate glucose metabolism by improving glucose uptake, increasing insulin secretion, reducing insulin resistance, accelerating β-cell proliferation or neogenesis, and defending β-cells against apoptosis [13,14]. Mechanistically, COS suppress gluconeogenesis and stimulate glycogen synthesis in the liver through inhibition of p38 MAPK and phosphoenolpyruvate carboxykinase expression and through AMPK activation with up-regulation of glucokinase expression [14]. In addition, COS may also improve glucose metabolism by reshaping the unbalanced gut microbiota of diabetic mice [15]. However, the direct biomolecules targeted by COS have not been revealed, and the correlation between COS and hIAPP amyloid formation remains unclear. Therefore, in this study, a battery of biophysical and cellular assays was performed to demonstrate the effect of COS on hIAPP amyloid formation, and we also preliminarily explored its underlying mechanism.

Chitosan Oligosaccharides Preparation

For COS preparation, we used recombinant chitosanase CsnA, an endo-type enzyme that has been shown to specifically hydrolyze chitosan and generate chitobiose, chitotriose, and chitotetraose as main hydrolysates [16]. The purified enzyme had a purity over 90%, with a molecular weight of 28 kDa on SDS-PAGE (Figure 1A). Enzyme hydrolysates were purified and enriched by ultrafiltration, nanofiltration, and rotary evaporation. Quality analysis showed that the final products contained 15.9 mg/mL of sugars with 25 µg/mL of residual proteins, and that endotoxin was below the detection limit.
Further analysis of the sugar components using thin layer chromatography (TLC) and mass spectrometry (MS) indicated that the products were mainly tri-saccharides and tetra-saccharides, along with small amounts of di-saccharides and penta-saccharides (Figure 1B,C).

Fluorescent Assay of hIAPP Aggregation Influenced by COS

ThT is widely used as a fluorescence probe to track amyloid deposits because it is fast, inexpensive, and reproducible in the emission spectrum [17-20]. The effect of COS supplementation on the kinetics of hIAPP fibrillization was monitored over a period of 48 h. Samples were taken at time intervals to record ThT fluorescence. As shown in Figure 2A, the time-dependent kinetics of hIAPP aggregation were characterized by a typical S-shaped curve consisting of three phases: a lag phase (formation of stable nuclei), an elongation phase (elongation of nuclei into fibrils), and an equilibrium phase (floccule formation), similar to previous reports [21-23]. The fluorescence of hIAPP alone exhibited a short lag phase and a rapid growth phase up to 24 h, followed by a plateau after a further 12 h. Interestingly, COS drastically reduced the fluorescence in a dose-dependent manner, confirming an inhibitory effect on hIAPP fibril formation. To rule out interference from any residual protein, COS samples were boiled for ten minutes before use, and similar results were observed (data not shown). The effect of COS on the fibrillation dynamics was plotted as a function of time and fitted by a sigmoidal growth model [24]. Both doses of COS nearly doubled the lag time of hIAPP aggregation, indicating a delaying effect of COS on hIAPP nucleation. Moreover, the fluorescence intensity values at the saturation phase decreased by nearly 46% for 2.5 mg/mL of COS and 60% for 5.0 mg/mL of COS, respectively (Table 1).
The apparent aggregation constant k slightly increased in the presence of COS, suggesting faster growth of hIAPP fibers after nucleation, which may reduce the formation of toxic intermediates. Disaggregation of pre-existing hIAPP fibrils is an alternative treatment strategy for amyloid clearance. As shown in Figure 2B, a burst reduction of fluorescence occurred within the first hour and then slowed. After 48 h of treatment with COS, the fluorescence intensities of hIAPP fibrils were reduced by 11% for 5.0 mg/mL and 35% for 10.0 mg/mL, respectively. These results clearly demonstrate the role of COS in preventing the development of hIAPP monomers into fibrillar amyloid and in disaggregating preformed fibrils. (Table 1 footnotes: the lag time is defined as the time at which the tangent at the point of maximum fibrillation rate intersects the abscissa; k is the kinetic constant, defined as the apparent first-order aggregation constant; maximum intensity is the maximum fluorescence intensity.)

Secondary Structure Analysis of hIAPP Influenced by COS

Far-UV circular dichroism (CD) spectroscopy was used to provide direct insight into the secondary structure transition of hIAPP during fibrillization [25]. Figure 3A shows that the CD spectra of hIAPP alone underwent a typical structural transition from random coil to β-sheet, as indicated by the appearance and intensity enhancement of the positive peak at 195 nm and the negative peak at 217 nm (Figure 3B). These two peaks correspond to the β-sheet structure, a characteristic feature of amyloid fibrils [26,27]. Three doses of COS (2.5, 5.0, or 10.0 mg/mL), chosen based on the ThT results, were used to evaluate the influence on the hIAPP conformational transition. The data recorded at 48 h indicated that COS significantly blocked the structural transition of hIAPP to a β-sheet-rich structure (Figure 3C). The characteristic peak intensity of the β-sheet at 195 nm decreased in a COS concentration-dependent manner.
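The lag time and apparent aggregation constant reported in Table 1 come from fitting the ThT traces to a sigmoidal growth model. A minimal sketch with synthetic data (a Boltzmann-type sigmoid is one common choice for this model; the parameter values here are illustrative, not the paper's fitted values):

```python
import numpy as np
from scipy.optimize import curve_fit


def sigmoid(t, f_max, k, t_half):
    """Boltzmann sigmoidal growth: f_max is the plateau intensity, k the
    apparent first-order aggregation constant, t_half the midpoint time."""
    return f_max / (1.0 + np.exp(-k * (t - t_half)))


t = np.linspace(0, 48, 25)          # hours
y = sigmoid(t, 100.0, 0.4, 14.0)    # synthetic ThT trace

(f_max, k, t_half), _ = curve_fit(sigmoid, t, y, p0=[90.0, 0.3, 12.0])

# Lag time: where the tangent at the point of maximum slope crosses the abscissa.
lag_time = t_half - 2.0 / k
```

For this sigmoid, the maximum slope occurs at t_half with value k·f_max/4, which gives the closed-form lag time t_half − 2/k used above.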
Of note, COS-co-incubated hIAPP showed a trend of structural alteration similar to that of hIAPP alone, indicating that COS prevent the structural change rather than inducing new structures. Additionally, we conducted CD experiments on the disaggregation of preformed hIAPP fibrils by COS. The monitored secondary structure change of preformed hIAPP fibrils suggested that COS could partially disassemble the mature fibrils (Figure 3D). Consistent with the aggregation process, no new structural features were observed in the disaggregation process.

Morphologies of hIAPP Aggregates Visualized by Transmission Electron Microscope

The effect of COS on the morphology changes of hIAPP during fibril formation was determined by transmission electron microscopy (TEM). As shown in Figure 4, hIAPP alone showed a typical amyloid morphology transition during incubation. Monomeric hIAPP did not show any visible fibrillar structure, whereas upon 48 h of incubation it formed long, thick, unbranched fibers crossed into a highly dense network (Figure 4A,B). In contrast, the final fibers derived from COS-treated hIAPP were slender and crossed into a sparse mesh (Figure 4C,D). The ability of COS to disassemble mature hIAPP fibrils is shown in Figure 4E,F. When treated with 5.0 mg/mL of COS, the long hIAPP fibers were disrupted and fragmented, resulting in obvious rupture of the mesh, whereas 10.0 mg/mL of COS fractured the fibers into small pieces. Collectively, these results show that COS hindered amyloid formation and disrupted preformed hIAPP amyloid.

Mechanism Study of hIAPP Aggregation Influenced by COS

To interpret the underlying mechanism of COS, we conducted surface plasmon resonance (SPR) analysis to directly evaluate the binding affinity between COS and hIAPP [28]. One of the main hydrolysis products, chitotetraose, was used for the SPR analysis.
The hIAPP peptide was coupled to the surface of the Biacore chip, and increasing concentrations of chitotetraose were injected in a stepwise manner. Insulin was used as the positive control [29,30]. As shown in Figure 5A, no binding signal of COS to hIAPP was detected, suggesting that the effect of COS on hIAPP aggregation is binding-independent. Under physiological conditions, both COS and hIAPP are positively charged, implying electrostatic repulsion between the two molecules. Therefore, we examined the effect of chitin oligosaccharides (CHS) at the same concentration and degree of polymerization as the experimental COS. CHS have similar constituents to COS but are neutrally charged. As assessed by ThT fluorescence, CHS failed to prevent the fibrillization of hIAPP, emphasizing the importance of the positive charges (Figure 5B). Three monosaccharides, N-acetyl-D-glucosamine (GlcNAc), glucosamine sulfate (GS), and glucosamine hydrochloride (GH), likewise had no effect on hIAPP aggregation. These findings indicate that the function of COS relates not only to the charge but also to the degree of polymerization.

Effect of COS on hIAPP Cytotoxicity

Considering that amyloid formation may lead to the failure of islet β-cells, we conducted cell viability experiments by lactate dehydrogenase (LDH) release and flow cytometry assays with β-TC-6 cells to examine the protective role of COS. LDH analysis was performed to assess the integrity of the cell membrane in the presence of hIAPP amyloid. The concentration of hIAPP used was 50 µM, at which obvious amyloid formation was observed during incubation. The results showed that cell viability decreased to 75% after exposure to hIAPP for 24 h. At 2.0 mg/mL, COS alone induced no obvious LDH release; however, when incubated together with the peptide, COS significantly prevented hIAPP-induced LDH release (Figure 6A).
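LDH-based viability numbers like the 75% above are typically computed by normalizing released LDH between spontaneous-release and maximum-release controls. A minimal sketch of the standard normalization (the absorbance readings are hypothetical; the paper does not give its raw values):

```python
def cytotoxicity_pct(sample: float, spontaneous: float, maximum: float) -> float:
    """Percent cytotoxicity from LDH absorbance readings:
    (sample - spontaneous) / (maximum - spontaneous) * 100."""
    return (sample - spontaneous) / (maximum - spontaneous) * 100.0


# Hypothetical absorbance readings: spontaneous release from untreated cells
# and maximum release from fully lysed cells.
spont, maxi = 0.10, 0.90
cytotox = cytotoxicity_pct(0.30, spont, maxi)
viability = 100.0 - cytotox
```

Here a sample reading of 0.30 gives 25% cytotoxicity, i.e. 75% viability, matching the magnitude of effect reported in the text.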
Flow cytometry was performed to further characterize hIAPP-induced apoptosis and necrosis in β-TC-6 cells. Annexin V and PI staining, markers for apoptosis and necrosis, respectively, are presented in Figure 6B. As displayed in Figure 6B,C, cells treated with COS showed almost the same results as the negative control. In contrast, a marked increase in apoptotic cells was observed (Q3-2 and Q3-4, 31.18%) after hIAPP treatment for 24 h. COS effectively rescued pancreatic cells from hIAPP-induced apoptosis, confirming that COS can alleviate hIAPP-induced apoptosis. Furthermore, hIAPP treatment caused remarkable cell cycle arrest at S phase, highlighting the β-cell proliferation inhibition effect of amyloidogenic hIAPP (Figure 7). This finding may explain, at least in part, the failure of correct expansion of β-cell mass. No remarkable change in cell cycle was observed after treatment with COS alone; in contrast, COS relieved the cycle arrest induced by hIAPP. Based on the above results, it can be concluded that COS ameliorate hIAPP-induced cytotoxicity, apoptosis, and cycle arrest in β-TC-6 cells.

Discussion

T2D is on the rise worldwide, and the number of T2D patients is predicted to reach as many as 438 million by 2030 [31]. One of the hallmark features of T2D is the misfolding and aggregation of functional hIAPP into inactive amyloid fibrils, followed by deposition in the pancreatic islets, leading to cellular damage and dysfunction. Therefore, preventing and/or reversing the process of hIAPP aggregation provides an important therapeutic strategy for T2D. A growing body of evidence indicates that COS can inhibit the aggregation of Aβ and reduce its neurotoxicity, exerting anti-Alzheimer's activity [25,32,33]. Considering that T2D has comparable pathophysiological features with Alzheimer's disease, we propose that the antidiabetic function of COS owes in part to inhibition of hIAPP amyloid formation.
The effect of COS against hIAPP amyloid formation was verified by ThT, CD, and TEM methods. COS not only inhibit hIAPP aggregation, but also disrupt existing hIAPP fibrils in a concentration-dependent manner. In the aggregation stage, COS retard the nucleation process of hIAPP and decrease the amount of amyloid fibrils. However, the secondary structure transition of hIAPP treated with COS, revealed by CD, was similar to that of hIAPP alone, suggesting that COS influenced hIAPP aggregation by inhibiting the binding between hIAPP molecules, rather than by changing the secondary structure of hIAPP. Consistently, although slender and sparse, COS-treated hIAPP still aggregated into fiber-like networks. This mechanism was also reinforced by the affinity analysis, as no binding signals were observed between COS and hIAPP. On the other hand, our study showed that the inhibitory effect of COS relates to its charge. The positive charge of COS has also been correlated with its anticancer, antibacterial, and anti-obesity bioactivities [34,35]. Therefore, we propose that COS enhance the intermolecular electrostatic repulsion of hIAPP, thereby reducing aggregation and destroying the formed fibrils. The cytotoxicity of amyloidogenic hIAPP has been widely reported [36][37][38]. Here we found that aggregated hIAPP induced β-cell apoptosis and inhibited cell proliferation. These results may partially explain β-cell loss and dysfunction in T2D. Previous reports showed that COS act as antidiabetic agents by protecting pancreatic β-cells through immunopotentiation and antioxidation [39]. In this study, we provide direct evidence that COS protect β-cells by alleviating the cytotoxicity of amyloidogenic hIAPP. The mechanism by which COS counteract hIAPP amyloid needs to be studied further. Collectively, these findings provide a reasonable mechanistic link between anti-amyloid formation and the antidiabetic effects of COS.
Given their availability, low toxicity, and high bio-tolerance, COS deserve further study as a potential therapeutic agent for the treatment of T2D. Purification of Recombinant Enzymes The expression and purification of recombinant enzymes were conducted as described in our previous report [40]. Briefly, recombinant E. coli BL21(DE3)/pET24a(+)-csnA was induced with 0.5 mM IPTG (Beyotime Biotechnology, Shanghai, China) at 25 °C, 160 rpm for 60 h. The supernatants were harvested by centrifugation at 10,000 × g for 20 min at 4 °C and further subjected to a Ni-Sepharose column. The purity and molecular weight of the protein were analyzed by SDS-PAGE. The protein concentration was determined using a BCA protein assay kit (Beyotime Biotechnology, Shanghai, China). Enzyme activity was determined by the 3,5-dinitrosalicylic acid (DNS) method. Preparation of Chitosan Oligosaccharides Chitosan oligosaccharides were prepared as described previously [40]. Initially, 10 g of water-soluble chitosan was dissolved in 1 L of water, and then 10 mL of purified CsnA (162 U/mL) was added. The mixture was incubated and stirred at 37 °C. The hydrolysis process was monitored at 30 min intervals until finished. A continuous hydrolysis process was performed by adding 1 g of substrate every 2 h to reach the final concentration of 10%. The final hydrolysis supernatant was harvested by centrifugation at 10,000 × g for 20 min at 4 °C and stored at 4 °C. An ultrafiltration membrane with a molecular weight cut-off of 8000 Da was used to remove macromolecular polysaccharides and proteins from the hydrolysis solution. A nanofiltration filter with a 200 Da molecular weight cut-off was used for desalination. The solution was concentrated by rotary evaporation and further freeze-dried. Finally, the products were stored at −20 °C before use.
The products were assayed by the sulfuric acid-phenol method for sugar content, the BCA method for protein content, and the hydrazine reagent gel method for endotoxin content. The constituents of the oligosaccharides were analyzed using thin layer chromatography (TLC) and further identified by mass spectrometry (MS). hIAPP Preparation and Aggregation Synthesized hIAPP was dissolved in HFIP at a final concentration of 1 mM to remove pre-existing aggregates, and the solvent was completely removed by freeze-drying in a vacuum freeze-dryer. The peptide was stored at −20 °C until use. Before each experiment, the lyophilized hIAPP was dissolved in 2 mM HCl, sonicated for 10 min, and centrifuged at 16,000 × g for 20 min at 4 °C. The solution was diluted with 20 mM Tris-HCl buffer (pH 7.4) and incubated at 37 °C for the aggregation assay. The effect of COS on hIAPP aggregation was evaluated by dissolving COS in freshly prepared hIAPP monomer solutions to final concentrations of 2.5 and 5.0 mg/mL. For the disassembly assay, hIAPP fibrils were prepared by incubation at 37 °C for 48 h and verified by ThT fluorescence assay to ensure mature fibril formation. COS were then added and incubated with hIAPP for another 48 h. Thioflavin T (ThT) Fluorescence Assay The ThT fluorescence assay was used to monitor the progress of hIAPP fibril formation and the disruption of preformed fibrils in the presence or absence of COS. Samples were diluted 19-fold with ThT solution (10 µM) (Sigma-Aldrich, Saint Louis, MO, USA). The hIAPP aggregation status at designated time points was determined by ThT fluorescence and recorded on a Multimode Plate Reader (PerkinElmer EnSpire, Waltham, MA, USA) at 482 nm with an excitation wavelength of 440 nm. The fluorescence intensity of the solution without hIAPP was subtracted from that of the solution containing hIAPP to remove background fluorescence. Data are representative of three independent experiments.
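The background-correction step described for the ThT assay is a simple element-wise subtraction; a minimal sketch (the fluorescence values below are hypothetical, and `blank` stands in for the hIAPP-free solution):

```python
import numpy as np

# Hypothetical ThT fluorescence readings (arbitrary units) at successive
# time points for the hIAPP-containing sample and the hIAPP-free blank.
sample = np.array([120.0, 180.0, 450.0, 900.0, 1100.0])
blank = np.array([100.0, 100.0, 105.0, 110.0, 110.0])

# Subtract the blank from the sample to remove background fluorescence,
# as described for the assay.
corrected = sample - blank
print(corrected)
```

The same subtraction is applied per time point across the three independent replicates before averaging.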
Circular Dichroism Spectroscopy Far-ultraviolet circular dichroism (CD) spectra of the hIAPP solutions were measured at a concentration of 0.2 mg/mL in 20 mM Tris-HCl buffer (pH 7.4), using a Jasco J-810 spectropolarimeter (Jasco Corp., Tokyo, Japan) with a 0.1 cm path-length quartz cuvette. Spectra were recorded in triplicate scans with a step size of 0.5 nm and a bandwidth of 1 nm; the ellipticity data were collected from 190 to 250 nm. A background value for each test was subtracted from the corresponding value of each sample, and the spectra were smoothed using the FFT filter function of the Jasco software. The curve of hIAPP coupled with COS was obtained by subtracting the background of the same concentration of COS. Transmission Electron Microscopy (TEM) TEM measurements were performed at different time intervals to characterize the morphological changes of hIAPP aggregates in the presence or absence of COS. Samples were negatively stained with 1.5% (wt./vol.) uranyl acetate solution on grids (400 mesh) covered by carbon-coated collodion film. The morphology of amyloid fibers was observed and photographed, after drying, with a JEM-1200 EX transmission electron microscope (JEOL, Tokyo, Japan) operated at 100 kV. Cytotoxicity Assay and Cell Cycle Analysis Lactate dehydrogenase (LDH) release and flow cytometry were employed to measure the toxicity of hIAPP. β-TC-6 cells were seeded in a 96-well culture plate at a density of 1 × 10^4 cells/well and incubated at 37 °C for 48 h. The cells were then exposed to hIAPP (50 µM), COS (2.0 mg/mL), or the mixture of COS and hIAPP for 24 h. Cell viability was measured by LDH assay following the manufacturer's instructions (Beyotime Biotechnology, Shanghai, China). The experiment was performed in triplicate. For apoptosis analysis, the treated cells were incubated with Annexin V-FITC/PI for 20 min in the dark at room temperature, following the manufacturer's instructions (Beyotime Biotechnology, Shanghai, China).
Then, cells were analyzed by flow cytometry using a NovoCyte D3080 and visualized with NovoExpress (ACEA, Los Angeles, CA, USA). The cell cycle study was also performed with flow cytometry. Briefly, after treatment, β-cells were harvested and fixed in 70% (v/v) chilled ethanol overnight at 4 °C. The cells were then washed with PBS, resuspended in PBS, and incubated with PI and RNase (Beyotime Biotechnology, Shanghai, China) for 30 min at room temperature. Finally, the samples were subjected to flow cytometry analysis. Surface Plasmon Resonance (SPR) Purified chitotetraose was used to test the binding between COS and hIAPP on a Biacore T200 instrument (GE Healthcare, USA) in PBS buffer at 25 °C. Monomeric hIAPP at a concentration of 100 µg/mL was immobilized on a CM5 sensor chip (GE Healthcare, USA) at a density of 400 response units. Chitotetraose (2.5 µM) was diluted in PBS buffer and passed over the CM5 sensor chip at a flow rate of 10 µL/min. Human insulin was used as the positive control [27,28]. The binding of analytes to the immobilized hIAPP resulted in a change of refractive index. The response was measured using SPR and compared with the control sample (an activated and blocked flow-cell without hIAPP) on the same chip. The experiments were repeated three times. Statistical Analysis All experiments were performed in triplicate, and the data are expressed as means ± SD of three independent experiments. Statistical evaluation was performed by one-way analysis of variance (ANOVA), followed by the post-hoc Student-Newman-Keuls method. A level of p < 0.05 was considered statistically significant. Conclusions In summary, hIAPP is a major component of the amyloid deposits found in pancreatic β-cells in T2D. COS not only significantly reduced the aggregation of hIAPP, but also disassembled preformed hIAPP fibrils in a dose-dependent manner.
Furthermore, COS protected mouse β-cells from the cytotoxicity, apoptosis, and cycle arrest induced by amyloidogenic hIAPP. The regulation of hIAPP amyloid formation by COS may relate to their positive charge and degree of polymerization. Thus, COS may be considered promising inhibitors of hIAPP aggregation for the treatment of T2D. In future studies, we will endeavor to identify the active component of the COS mixture and gain deeper insight into its antidiabetic mechanism. Conflicts of Interest: The authors declare no conflict of interest.
SAR model for accurate detection of multi-label arrhythmias from electrocardiograms Objective Arrhythmias are prevalent symptoms of cardiovascular disease, necessitating accurate and timely detection to mitigate associated risks. Detecting arrhythmias from ECGs quickly and accurately holds great significance for preventing heart disease and reducing mortality. This research endeavors to outperform previous studies by developing a neural network model capable of training on and predicting ECG signals for 11 categories of arrhythmias, accounting for up to 5 co-existing labels. Methods In this study, we initially address the issue of imbalanced datasets by employing Borderline SMOTE and Cluster Centroids techniques during preprocessing. Subsequently, we propose a novel SAR model that combines attention and ResNet mechanisms. The dataset is subjected to a 10-fold validation process to train and evaluate the model. Finally, several metrics, such as Hamming Loss, Ranking Loss, F1-score, AUC, and Coverage, are used to evaluate the model. Results On the test results, the average Hamming Loss is 1.12%, the average Ranking Loss is 1.17%, the average Micro F1-score is 98.46%, the average Micro AUC is 98.76%, and the average Coverage is 3.2762. The results show that the SAR model outperforms previous related studies on the task of classifying arrhythmia signals with multiple categories and labels. Conclusion The SAR model demonstrated excellent performance in accurately classifying multi-category and multi-label arrhythmia signals, affirming its scientific validity. Compared with previous studies, the model achieves a clear improvement in performance, which can help cardiologists achieve accurate diagnosis of arrhythmia diseases.
Introduction According to statistics from the World Health Organization (WHO), cardiovascular diseases cause the most deaths globally, resulting in over 10 million deaths annually. Arrhythmia, a common problem in cardiology, occurs when the heart's activity becomes too fast, too slow, or irregular, or when the sequence of cardiac activation becomes disordered. The swift and precise diagnosis of arrhythmia is tremendously valuable for the therapy of cardiac diseases and for saving lives [1,2]. Currently, the identification of arrhythmia primarily depends on the electrocardiogram (ECG) [3,4]. The surface electrocardiogram is the simplest, cheapest, and most accurate method to diagnose arrhythmia. In actual clinical diagnosis, doctors usually collect a long period of ECG signals to determine whether patients have related heart diseases, which requires cardiologists to spend a long time analyzing patients' ECGs and brings a huge workload [5,6]. At the same time, the requirements for the accuracy and speed of arrhythmia detection are very high: misdiagnosis, or failure to intervene within the optimal treatment window, will worsen the patient's condition and may even be life-threatening. Some patients may have more than one type of arrhythmia at the same time. Therefore, we need to establish an accurate automatic ECG signal analysis model to identify such multi-category and multi-label ECG signals. The application of analytical models can reduce the difficulty and workload for cardiologists, increase the probability of detecting arrhythmia events, and thus improve the survival rate of patients with arrhythmia [7,8].
In the last decade, rapid progress has been made in finding patterns in images and signals using neural networks [9]. Deep learning, an advanced machine learning approach, facilitates the automatic derivation of crucial characteristics via the integration of diverse neural networks encompassing multiple convolutional layers, nonlinear transformation layers, pooling layers, and fully connected layers. Deep learning has accomplished remarkable advancements in image recognition, speech recognition, and natural language processing. In ECG analysis, deep learning techniques have exhibited superior classification capabilities over conventional approaches when trained with ample data [10][11][12][13]. However, there are still some limitations and challenges in analyzing ECG signals using machine learning methods; for example, Ardeti et al. [14] argued that ECG signal analysis using machine learning methods generally suffers from low accuracy. Most existing research has concentrated on developing ECG classifiers for single-label arrhythmia detection, aiming to categorize individual heartbeats into one of several predefined abnormality types. However, single-label models are ill-equipped to handle patients exhibiting multiple concurrent arrhythmia modalities, which is a commonly encountered clinical scenario [15][16][17]. The multi-label classification task, though more representative of real-world complexity, poses added challenges. As the number of arrhythmia types rises, the number of plausible label permutations surges exponentially, posing challenges to model training and assessment. Consequently, multi-label ECG classification has received limited attention compared to single-label approaches [18,19].
In light of the aforementioned challenges, the present study aimed to develop a scientific and accurate solution for multi-label ECG arrhythmia detection, building on prior work [15]. We approached ECG abnormality diagnosis as a multi-label classification problem and proposed a self-attention residual network (SAR) model tailored for this task. The key contributions of this research are as follows. (1) The proposed SAR model can accurately classify fine-grained ECG morphological features, supporting recognition of 11 arrhythmia types. (2) Our approach can handle multi-label ECG signals with up to 5 concurrent abnormality labels, better resembling real-world complexity. (3) Innovative use of the SAR model to classify ECG signals with 11 categories and 5 labels, achieving better classification results than similar previous studies. Related work ECG-based arrhythmia detection serves as a vital instrument extensively utilized in medicine for monitoring cardiovascular illnesses. Effective and precise identification of anomalous ECG signals has been a crucial subject of medical research.
At present, ResNet is a powerful neural network structure that is widely used in various tasks [20]. It is also a relatively advanced deep network, which can be leveraged across diverse data modalities spanning images to time series [21]. It has the advantages of improving accuracy simply by increasing depth, being easy to optimize, and outperforming other networks. It benefits from a shortcut module that enables the network to go deeper while preserving relatively low complexity, thereby making learning easier. Some researchers have added attention mechanisms. An attention mechanism strives to ascertain weights for the constituent elements, with pivotal elements expected to receive greater weight and non-critical elements lower weight. The weight of a component mirrors its contribution towards the target objective [18]. Some researchers have deployed residual networks for ECG prediction, and arrhythmia detection algorithms based on one-dimensional CNNs (1D-CNN) with residual blocks have achieved excellent performance [11,20,22]. Jihye et al. [11], Zhu et al. [23], and Wang et al. [24] used the ResNet model as the basic classification model and added one or two squeeze-and-excitation (SE) blocks to the ordinary ResNet to improve performance; the results showed that the models performed well. Wong et al. proposed a staged neural network architecture with automatic coding to extract heartbeats, embed sequences, and train a multi-layer perceptron for classification [25]. Pardasani et al.
developed a method based on a 1D-CNN with class-dependent thresholds for identifying arrhythmias from 12-lead ECGs [26]. However, all of these studies suffer from low metrics. Other researchers have attempted to combine ResNet with other algorithmic mechanisms. For example, Rong et al. combined ResNet34 with a GRU and simultaneously extracted local and sequence features from ECGs for model training [12]. An improved FL function was proposed as the loss function to address the problem of an unbalanced dataset, but its prediction performance remains limited. Borra et al. designed a decoding workflow for time series classification based on Inception Time, ResNet, and X-ResNet, three of the most advanced architectures [27]. However, these algorithms offer limited interpretability of the learned features. In the study of attention mechanisms, some scholars have also carried out related research. For example, Nan et al. designed a fine-grained multi-label ECG (FM-ECG) framework to detect abnormalities in ECGs via attention mechanisms through weakly supervised fine-grained classification, which can find potential identification sites adaptively using only image-level annotations [18]. Secondly, a recurrent neural network (RNN) is used to deduce ECG label correlations, which is meaningful for data symmetry [28]. Liu et al. developed a deep 1D-CNN with residual blocks and a squeeze-and-excitation (SE) attention mechanism [29]. Wang et al. proposed a multi-label classification method based on a weighted graph attention network [30]. Li et al. proposed a ResNet-based 12-lead ECG multi-label classification algorithm, including data denoising, frame segmentation, data balancing, and other pre-processing steps, combined with attention-based bidirectional long short-term memory (BiLSTM) [31]. Yang et al.
adopted an improved stage-based residual network and a split-attention-block residual network, but the recognition performance for each ECG abnormality needs further improvement [20]. In reviewing previous studies, ECG arrhythmia detection is usually categorized as a multi-label classification problem. After analyzing the existing literature on the multi-label classification of arrhythmias, we found the following two main problems. (1) Classification labels are not detailed enough and rarely cover multiple categories of arrhythmias. For example, in the arrhythmia recognition systems developed by Liu, Li et al., seven arrhythmias are classified and detected [32,33]. In summary, we propose a novel SAR model suited to the actual situation in this research, which can better cope with the problem of multi-label ECG analysis and achieve better performance. Methodology As illustrated in Fig. 1, our experiment encompasses three principal components, namely pre-processing, classification, and evaluation. Pre-processing For the dataset, we employed the MIT-BIH dataset, which is internationally accepted as a standard ECG dataset and is the most widely used ECG database in recent years [35,36]. This database is widely employed as experimental data in numerous related studies and has gained broad recognition in the academic sphere. Supplied with trustworthy information, meticulous annotations, abundant cases, and diverse ECG signal categories, the database can be used to train and test the algorithms under investigation, thereby establishing a robust foundation for detection research. The MIT-BIH dataset was also used to reflect the generalizability of the method in this paper and to facilitate comparison with other researchers' methods.
The database contains 43 available samples, each containing labeled two-lead ECG data, with each segment lasting 30 min. The labeled data were meticulously annotated per ECG cycle, with matching annotation files provided. This dataset includes 11 main representative heartbeat categories, namely Normal beat (N), Atrial premature beat (A), Aberrated atrial premature beat (a), Ventricular escape beat (E), Atrial escape beat (e), Fusion of ventricular and normal beat (F), Nodal (junctional) premature beat (J), Nodal (junctional) escape beat (j), Left bundle branch block beat (L), Right bundle branch block beat (R), and Premature ventricular contraction (V). These same 11 representative heartbeat categories were selected for this study. The types and number of each are shown in Table 1. It is clear from the characteristics of the dataset that arrhythmias are often comorbid; more than one arrhythmia may be present in a given arrhythmia patient. Using the MIT-BIH dataset as an example, we performed a statistical analysis using marginal distribution histograms. As shown in Fig. 2, the marginal distribution histogram visualizes the types of different heartbeats contained in all 43 available samples of the MIT-BIH dataset. The horizontal axis is the sample number and the vertical axis is the label, while the number of beat types contained in each sample and the total number of samples corresponding to each beat type are summed below and to the right of the scatter plot. For example, Sample 1 (S1) contains three types of beat labels, 0, 1 and 10, which correspond to the three categories Normal beat (N), Atrial premature beat (A), and Premature ventricular contraction (V) in Table 1. In Samples 12 and 18, there is only one type of heartbeat; all heartbeats are labeled 0. In Samples 6, 11, 20, 22, 23, 26, 29, 31 and 37, as many as five different types of labeled heartbeats are included. Also, we can see from Fig.
2 that the different categories are extremely unevenly distributed. For example, 36 samples contain heartbeats with label 0, but only Sample 37 contains heartbeats with label 4. The numbers of different heartbeats are also extremely unbalanced, and the exact number of heartbeats in each category is shown in Table 1. In summary, the target dataset in this study has the following characteristics. (1) The data have multiple categories: there are 11 different categories of heartbeats. (2) The data have multiple labels: there may be at most 5 labels at the same time and at least 1 label. (3) The data of the various categories exhibit an extremely skewed distribution. (4) The quantities across the diverse data categories are severely imbalanced. To achieve excellent training and testing results, the following work was carried out in this research to address the above characteristics of the dataset. Data segmentation We utilize the WaveForm DataBase (WFDB) module to process the database; WFDB is used to read annotation files and find the R-peak locations [2,37], including waveforms, amplitudes, periods, and associated labels, to facilitate the subsequent training. The data pre-processing comprises two primary components. First, extraction is performed: we extract the complete heart waves of an individual as the original dataset. Then, segmentation is performed. In the processing of heartbeats, the features of a segment of the heartbeat wave are mainly the PQRST waves. Accordingly, for heartbeat segmentation, the R-peak constitutes the reference anchor, and a standard interval of 0.3 s forward and 0.4 s backward is used to separate the heartbeats into discrete instances. Such a beat of length 0.7 s effectively contains the cardinal traits of the heartbeat wave. Ultimately, the discrete heartbeats were mapped to the respective tags per the annotations within the original database. This resulted in a set of standard heartbeats with labels.
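The fixed-window segmentation around each R-peak can be sketched as follows. A synthetic signal and hypothetical R-peak indices stand in for the WFDB record; MIT-BIH's 360 Hz sampling rate is assumed, as is the direction of the 0.3 s / 0.4 s interval (taken here as before / after the R-peak):

```python
import numpy as np

FS = 360                  # assumed MIT-BIH sampling rate (Hz)
BEFORE = int(0.3 * FS)    # samples kept before the R-peak (0.3 s -> 108)
AFTER = int(0.4 * FS)     # samples kept after the R-peak (0.4 s -> 144)

def segment_beats(signal, r_peaks):
    """Cut fixed 0.7 s windows around each R-peak, skipping beats whose
    window would run past either end of the recording."""
    beats = []
    for r in r_peaks:
        if r - BEFORE >= 0 and r + AFTER <= len(signal):
            beats.append(signal[r - BEFORE:r + AFTER])
    return np.array(beats)

# Synthetic example: a flat 2000-sample signal with three candidate peaks;
# the last peak is too close to the edge and is skipped.
signal = np.zeros(2000)
beats = segment_beats(signal, r_peaks=[400, 1200, 1995])
print(beats.shape)  # (2, 252)
```

In the real pipeline, `signal` and `r_peaks` would come from the WFDB record and annotation files, and each 252-sample beat would be paired with its annotated label.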
Data balance In this research, we use the Borderline SMOTE and Cluster Centroids methods to deal with the uneven distribution and unbalanced numbers in the dataset. Borderline-SMOTE (Borderline Synthetic Minority Oversampling Technique) is an improved oversampling algorithm based on SMOTE. The algorithm exclusively leverages the borderline minority instances for fabricating novel examples, thereby enhancing the sample distribution [38]. SMOTE is an approach that interpolates between minority class samples to generate supplementary examples. For a minority class sample A, a nearest neighbor sample B is randomly selected, and a point C on the line segment connecting A and B is randomly selected as a new minority class sample. Specifically, for a minority class sample x_i, the k-nearest-neighbor method (the value of k needs to be specified in advance) is used to find the k minority class samples closest to x_i, where the distance is defined as the Euclidean distance between the samples in the n-dimensional feature space. Subsequently, one of the k nearest neighbors is randomly chosen to generate a novel sample, as depicted in formula ①: x_new = x_i + δ · (x̂ − x_i) ①, where x̂ is the selected nearest neighbor and δ is a random quantity, δ ∈ [0,1]. Borderline-SMOTE is an improved version of the SMOTE algorithm, which divides the samples into three categories: "Noise", where all k nearest neighbors belong to the majority class; "Danger", where more than half of the k nearest neighbors belong to the majority class; and "Safe", where more than half of the k nearest neighbors belong to the minority class. Borderline-SMOTE randomly chooses a sample in the "Danger" condition and applies the SMOTE algorithm to produce a novel example. The "Danger" state denotes borderline minority instances more susceptible to misclassification. Therefore, Borderline-SMOTE synthesizes examples exclusively from minority samples adjoining the border, whereas SMOTE treats all minority samples uniformly.
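The interpolation step of formula ① can be sketched as follows. A toy 2-D minority sample and one hypothetical nearest neighbor stand in for the real 252-sample beats, and neighbor selection and the "Danger" restriction of Borderline-SMOTE are omitted:

```python
import numpy as np

rng = np.random.default_rng(0)

def smote_sample(x_i, x_hat):
    """Formula ①: x_new = x_i + delta * (x_hat - x_i), with delta in [0, 1],
    placing the synthetic sample on the segment between x_i and x_hat."""
    delta = rng.uniform(0.0, 1.0)
    return x_i + delta * (x_hat - x_i)

x_i = np.array([1.0, 1.0])     # a minority class sample
x_hat = np.array([3.0, 5.0])   # one of its k nearest minority neighbors
x_new = smote_sample(x_i, x_hat)

# The synthetic point always lies between x_i and x_hat componentwise.
assert np.all(x_new >= x_i) and np.all(x_new <= x_hat)
```

Borderline-SMOTE applies exactly this step, but only to minority samples classified as "Danger"; library implementations such as imbalanced-learn's `BorderlineSMOTE` follow the same scheme.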
Cluster Centroids is a method that reduces the number of target samples by k-means clustering, as shown in Fig. 3. First, the target samples are clustered into different classes by the k-means method and the centroids of the target data are calculated; then, the data farthest from the centroids are removed. Finally, the downsampled data are obtained. Therefore, each category is represented by the centroids produced by the k-means method rather than by the original examples. The Cluster Centroids method provides an efficient way to reduce the number of samples in the data clusters; however, the approach requires the data to be grouped into clusters. Moreover, the number of centroids must be defined so that the downsampled clusters represent the original groups. Data recombination In the original samples, the same heartbeat is often repeated many times in a row, i.e., there is a large amount of single-label data, which affects model training. To improve the adaptability of the model, we completely shuffle the heartbeats of all samples and randomly recombine them. Since at most 5 labels appear in the same sample in the original data, we combine the individual heartbeats in groups of 5 to form the new features. The new features have at least 1 label and at most 5 labels in different combinations. Since we want to identify 11 categories, there are in principle C(11,1) + C(11,2) + C(11,3) + C(11,4) + C(11,5) = 1023 combinations in total. In the meantime, we also transformed the labels of the recombined data by converting multi-category labels to multi-hot labels. In Fig.
4, we show some of the restructured multi-label data features and their multi-hot labels. After converting the data labels to multi-hot form, the presence or absence of the 11 types of heartbeats in a feature is represented by an 11-bit binary number arranged from left to right. For example, a feature labeled only in the first position contains only normal heartbeats. Model structure In the SAR model construction part, the network model is designed to enable accurate identification of multi-label ECG data and to keep all evaluation metrics as high as possible while remaining scientifically sound and effective; the model structure is shown in Fig. 5. In the model, the multi-label data are first fed into a GlobalPooling layer to connect the data and the model and reduce the data dimensionality, and then divided into four paths. Path 1 is directly connected to a SelfAttention Block, which contains one self-attention network layer; path 2 is directly connected to a ResNet Block after ZeroPadding, Convolution, and GlobalPooling layers; path 3 is connected to a SelfAttention Block and a ResNet Block, with two residual connections added; path 4 concatenates directly with the results of paths 1, 2, and 3 without connecting any modules. In path 1, given that the multi-label input data constitute one-dimensional temporal sequences, an effective self-attention architecture is employed, chiefly enabling the model to concentrate on and derive cardinal traits while discounting less relevant characteristics.
In path 2, as the ResNet Block constitutes a deep neural network, the data may acquire an elongated feature vector after the ResNet Block layer. To regulate the final vector length, a ZeroPadding layer is set at the beginning of path 2, padding with zeros to govern the post-convolution vector length. In the ResNet Block, we set 30 Convolution1D layers to extract the data features, in which several short-circuit connections are added to reduce the model complexity, mitigate overfitting, and prevent gradient vanishing, as shown in Fig. 5. When paths 1, 2, 3, and 4 are concatenated, after several Dense, Dropout, and GlobalPooling layers, a Sigmoid activation function is used in the final Dense layer to predict the probability of each label from 0 to 10; the prediction yields a probability value between 0 and 1 for each label. Experiment This work operates on the Keras deep learning framework with TensorFlow as the backend. The workstation used consists of a 6144 MB GPU (NVIDIA GeForce RTX-3070), an Intel i7-11700K processor (3.60 GHz), and 32 GB RAM. In the training process, we divided the dataset into a train set, a validation set, and a test set. The number of samples in each part is shown in Table 2. During training, the loss is binary cross-entropy, the optimizer is Adam, the learning rate is 0.0001, and the batch size is 512. Due to the limited computing power of the computer, we train a total of 100 epochs. The training history is shown in Fig. 6. To ensure complete and balanced training on all training data and to evaluate the training effect in real time, we used 10-fold cross-validation. After training, we tested the trained model with untrained data from the raw data and selected some representative test results for simple visualization in Fig. 7.
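Because the final Sigmoid layer yields an independent probability for each of the 11 labels, turning the output into a multi-hot prediction is a simple thresholding step. A minimal sketch (the probabilities are hypothetical, and 0.5 is an assumed default threshold, since the paper also compares metrics under different thresholds):

```python
import numpy as np

THRESHOLD = 0.5  # assumed default decision threshold

def to_multi_hot(probs, threshold=THRESHOLD):
    """Convert per-label Sigmoid outputs to an 11-bit multi-hot prediction."""
    return (np.asarray(probs) >= threshold).astype(int)

# Hypothetical output for one sample: labels 0 and 10 predicted present,
# i.e. Normal beat (N) and Premature ventricular contraction (V).
probs = [0.97, 0.10, 0.03, 0.02, 0.01, 0.20, 0.05, 0.04, 0.30, 0.12, 0.81]
print(to_multi_hot(probs))  # [1 0 0 0 0 0 0 0 0 0 1]
```

Varying `threshold` trades precision against recall, which is what the robustness comparison under different thresholds examines.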
Evaluation metrics

To evaluate the SAR model's performance, we use several evaluation metrics, including Hamming Loss, Ranking Loss, AUC, Coverage, Precision, Recall, and F1-score.

(2) Ranking Loss

$$\mathrm{RankingLoss} = \frac{1}{n}\sum_{i=1}^{n} \frac{1}{|Y_i|\,|\overline{Y_i}|}\left|\left\{(y',y'') \mid f(x_i,y') \le f(x_i,y''),\ (y',y'') \in Y_i \times \overline{Y_i}\right\}\right|$$

where $\overline{Y_i}$ is the complementary set of $Y_i$ in the label space, and $(y',y'') \in Y_i \times \overline{Y_i}$. This metric examines sorting mistakes in the category label sequences, whereby irrelevant labels precede relevant ones. Lower values indicate better performance, with the optimum being 0.

(3) AUC

AUC (Area Under Curve) is the area enclosed by the ROC Curve and the X coordinate axis. ROC stands for Receiver Operating Characteristic; the curve is obtained with TPR (true positive rate) as the ordinate and FPR (false positive rate) as the abscissa. $P \times N$ represents all the positive and negative sample pairs. $P_p \times N_{p,p+\Delta p}$ denotes the sample pairs composed of the positive samples with predicted probability in $[p, 1]$ and the negative samples with predicted probability in $[p, p+\Delta p]$. When $\Delta p$ is small enough, these can be understood as the pairs in which the positive sample's predicted value is higher than the negative sample's predicted value. When the negative samples are divided into several segments according to probability, the pairs formed by $P_p \times N_{p,p+\Delta p}$ are integrated over all segments, yielding all sample pairs whose positive-sample predicted value is higher than the negative-sample predicted value.

(4) Coverage

$$\mathrm{Coverage} = \frac{1}{n}\sum_{i=1}^{n} \max_{y \in Y_i} \mathrm{rank}_f(x_i, y)$$

where $\mathrm{rank}_f(\cdot,\cdot)$ is the sorting function corresponding to the real-valued function $f(\cdot,\cdot)$. This index probes the search depth needed to cover all relevant labels in the sample's category label sequence. Lower values indicate better performance.
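Two of the metrics above, Hamming Loss and Coverage, can be sketched in pure Python using their standard multi-label definitions. This is an illustration, not the authors' evaluation code; note that some formulations subtract 1 from the coverage depth, while the sketch below reports the raw depth.

```python
# Illustrative pure-Python versions of two of the metrics above (standard
# multi-label definitions; not the authors' evaluation code).
def hamming_loss(y_true, y_pred):
    """Fraction of label slots predicted incorrectly."""
    errors = sum(t != p
                 for yt, yp in zip(y_true, y_pred)
                 for t, p in zip(yt, yp))
    return errors / (len(y_true) * len(y_true[0]))

def coverage(y_true, y_score):
    """Average depth in the score-ranked label list needed to cover
    all relevant labels of each sample."""
    depths = []
    for yt, ys in zip(y_true, y_score):
        order = sorted(range(len(ys)), key=lambda i: -ys[i])
        rank = {label: r + 1 for r, label in enumerate(order)}
        depths.append(max(rank[i] for i, t in enumerate(yt) if t))
    return sum(depths) / len(depths)

print(round(hamming_loss([[1, 0, 1]], [[1, 1, 1]]), 4))  # 0.3333
print(coverage([[1, 0, 1]], [[0.9, 0.8, 0.1]]))          # 3.0
```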
(5) Precision

Precision evaluates the accuracy of the detector on the detections it makes: among all cases judged positive by the model (TP + FP), the proportion that are true positives (TP).

(6) Recall

Recall evaluates the detection coverage of the detector over all targets to be detected: among all positive cases in the dataset (TP + FN), the proportion correctly judged positive by the model (TP).

(7) F1-Score

The F1-score is the harmonic mean of Precision and Recall: $F_1 = 2 \cdot \mathrm{Precision} \cdot \mathrm{Recall} / (\mathrm{Precision} + \mathrm{Recall})$. True positive (TP) indicates normal events classified as normal, true negative (TN) indicates abnormal events deemed abnormal, false positive (FP) represents abnormal events misidentified as normal, and false negative (FN) represents normal events wrongly marked as abnormal.

To better demonstrate the test results, we report the results of each fold and their average over the 10-fold cross-validation, as shown in Table 3. At the same time, to show the performance on each label, and to understand which labels are classified well and which only moderately, we analyzed each category of labels separately; the ROC curve of each category is shown in Fig. 8, and the confusion matrix of each category in Fig. 9. As can be seen from Figs. 8 and 9, when the classification of each label category is evaluated separately, the two categories with labels "1" and "5" are relatively weak among the 11 categories, while the other categories perform better. In addition, we also conducted comparative experiments to compare the variation trends of each metric under different thresholds, reflecting the robustness of the model, as shown in Table 4 and Fig. 10.
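The threshold comparison can be made concrete with a small sketch: the per-label sigmoid outputs are binarised at each threshold before the metrics are recomputed. The probability values below are made up for illustration, not taken from the paper.

```python
# Binarise per-label sigmoid probabilities at several thresholds, as done in
# the robustness comparison (the probability values are made up).
probs = [0.97, 0.62, 0.40, 0.85]

def binarise(probs, threshold):
    return [1 if p >= threshold else 0 for p in probs]

for t in (0.5, 0.8, 0.95):
    print(t, binarise(probs, t))
# 0.5  -> [1, 1, 0, 1]
# 0.8  -> [1, 0, 0, 1]
# 0.95 -> [1, 0, 0, 0]
```

Raising the threshold turns borderline predictions off, which is why the metrics degrade once the threshold approaches 1.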
Evaluating all metrics only at a threshold of 0.5 is not enough to fully demonstrate the performance of the model; therefore, we calculated the metrics at different thresholds. When the threshold increases from 0.5 to 0.8, all metrics remain basically stable, with no obvious performance decline. When the threshold increases from 0.8 to 0.95, all indicators show a relatively significant downward trend, and when it increases from 0.95 to 0.99, the downward trend becomes pronounced. This comparison shows that the probability values the model assigns to each label are accurate, which also reflects the soundness and robustness of the model.

Finally, we compared our research content and results with those of other researchers, as shown in Table 5. In general, more categories and labels mean a more difficult task. Taken together, with the number of categories and labels greater than or equal to those of previous studies, our method still achieves superior results. Our model outperforms these previous models, suggesting that our approach is more effective for arrhythmia classification.

Discussion

Our findings reveal certain limitations in the performance of the SAR model, particularly in identifying data with labels 1 and 5. The ROC curve and confusion matrix analysis indicate suboptimal results for these specific labels, prompting further investigation into the underlying causes, as shown in Figs. 8 and 9. To gain insight, we conducted a detailed analysis comparing the features and labels of the data specifically associated with labels 1 and 5.
Given these circumstances, we turned our attention to the possibility of incorrect labeling in the dataset. To address this concern, we plan to collaborate with ECG specialists by visiting hospitals and seeking their expert opinion. Their insights and expertise will help determine whether this labeling issue is indeed present and, if confirmed, allow us to rectify it accordingly.

The composition of the dataset, specifically the limitations imposed by the MIT-BIH database on the number of labels assigned to each sample, was also a factor in our next study. We included data samples with a maximum of 5 labels for training, validation, and testing. However, to validate the SAR model in a broader context, we plan to conduct experiments on a larger cohort and with real-world data.

By validating the SAR model in a clinical setting and refining its performance based on real data, we aim to make a valuable contribution to the field, ultimately improving the accuracy and efficiency of arrhythmia diagnosis. Our future work will involve testing the model using actual hospital data and optimizing the model based on the test results. This will help us further validate the effectiveness of our model and make practical contributions to clinical practice.

Conclusions

Automatic diagnosis of arrhythmia has important research value: it can effectively improve detection efficiency and reduce the workload of clinicians.
In this research, we innovatively built SAR, a model that can learn and predict arrhythmic ECG signals for 11 categories and up to 5 co-existing labels in the dataset. In testing, the average HammingLoss is 1.12 %, the average RankingLoss is 1.17 %, the average Micro F1-score is 98.46 %, the average Micro AUC is 98.76 %, and the average Coverage is 3.2762. These results are better than previous research results, which shows the scientific validity and robustness of the model. In the next step, we will continue to advance the methodological research, test the model using actual hospital data, and optimize the model based on the test results, in an effort to make practical contributions to clinical practice.

Fig. 1. Flow chart of data pre-processing, model training and result evaluation.

L. Yang et al.

Fig. 10. Line chart of each evaluation metric under different thresholds. A: Hamming and ranking loss; B: Micro F1 and AUC, Macro F1 and AUC; C: Average precision and recall; D: Coverage.

Sangha et al. trained convolutional neural networks to recognize six clinical labels defined by physicians covering rhythm and conduction disorders and a hidden gender [34]. Sun et al. proposed a new integrated multi-label classification model that combines seven multi-label classification methods to generate a new classifier [7]. (2) Multi-label classification network performance can be further improved. For example, Ge et al. designed an ECG abnormal event detection model based on feature fusion guided by multi-label correlation; its accuracy is 81.6 % [3]. Cai et al. created and trained a new deep learning architecture based on graph convolutional networks (GCN) to model and capture label correlations for multi-label classification of 12-lead ECG; the final F1-score is 60.3 % [10]. Osnabrugge et al. used a convolutional recurrent neural network (CRNN) to identify cardiac abnormalities in 2-, 3-, 4-, 6-, and 12-lead electrocardiograms; the accuracy is 40 % [22].
Table 1. Types of arrhythmia and corresponding labels.
Table 2. Sample parameters of each part.
Table 3. Results of each metric under 10-fold cross-validation.
Table 4. Evaluation metrics under different thresholds.
Table 5. Comparison with other researches.
Letters to the Editor

Rapid MRI of the lungs in children with pulmonary infections

RM pulmonar rápida em crianças com infecções pulmonares

Dear Editor,

We read with interest the review article entitled "Chest magnetic resonance imaging: a protocol suggestion" by Hochhegger et al. (1). The authors have reviewed the technical aspects and suggested a protocol for performing chest MRI. They have also described three major clinical indications for MRI of the lungs: staging of lung tumors; evaluation of pulmonary vascular diseases; and investigation of pulmonary abnormalities in patients who should not be exposed to radiation. Radiation exposure is of particular concern in children, as they are at a greater risk of experiencing harmful effects from radiation compared to adults (2). In our recent prospective study in 26 children with leukemia presenting with febrile neutropenia (3), we evaluated the role of rapid lung MRI in the detection of nodules, consolidation and ground glass opacity (GGO) in this population. The duration of all four sequences combined in our study was less than 2 minutes. The findings of HRCT and MRI were compared, with HRCT as the standard of reference. No significant difference was observed between the two modalities by the McNemar test (p > 0.05). For the detection of nodules and consolidation by MRI, per-patient sensitivity, specificity, positive predictive value (PPV) and negative predictive value (NPV) were all 100%. For the detection of GGO by MRI, per-patient sensitivity, specificity, PPV and NPV were 66.67%, 100%, 100% and 90.91%, respectively. The kappa test showed perfect agreement between MRI and CT scan for the detection of nodules and consolidation (κ = 1), and substantial agreement in the detection of GGO (κ = 0.755). The results of our study indicated that pulmonary MRI has great potential as a diagnostic modality for the detection of lung parenchymal findings in patients with febrile neutropenia.
Similarly, we determined the diagnostic utility of rapid lung MRI for the detection of various pulmonary and mediastinal abnormalities in 75 children with suspected pulmonary infections (4). MRI demonstrated sensitivity, specificity, PPV, and NPV of 100% for detecting pulmonary consolidation, nodules (> 3 mm), cyst/cavity, hyperinflation, pleural effusion, and lymph nodes. The kappa test showed almost perfect agreement between MRI and MDCT in detecting thoracic abnormalities (κ = 0.965). No statistically significant difference was observed between MRI and MDCT for detecting thoracic abnormalities by the McNemar test (p = 0.125).
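As an aside, the kappa agreement statistics quoted above can be computed from a 2×2 agreement table between the two modalities. The sketch below uses the standard Cohen's kappa formula; the counts are invented for illustration and are not the study data.

```python
# Cohen's kappa from a 2x2 agreement table between two modalities.
# a: both positive, d: both negative, b and c: disagreements.
# The counts below are invented for illustration, not taken from the study.
def cohens_kappa(a, b, c, d):
    n = a + b + c + d
    po = (a + d) / n                                      # observed agreement
    pe = ((a + b) * (a + c) + (c + d) * (b + d)) / n**2   # chance agreement
    return (po - pe) / (1 - pe)

print(round(cohens_kappa(40, 2, 1, 32), 3))  # 0.919
```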
As MRI does not carry any radiation risk, it can be repeated to assess disease progression or regression without exposing patients to radiation (as opposed to performing a CT scan). We propose that rapid lung MRI may also be used as an initial radiological investigation in patients with suspected pulmonary infections, especially where repeated follow-up imaging is required.

Incidental diagnosis of struma ovarii through radioiodine whole-body scanning: incremental role of SPECT/CT

Contribuição da SPECT/CT no diagnóstico incidental de struma ovarii em pesquisa de corpo inteiro com iodo-131

Dear Editor,

A 76-year-old woman with papillary thyroid cancer (staging: pT3pN1pMx) was referred for radioiodine (I-131) therapy after total thyroidectomy. The thyroglobulin titer was elevated (190 ng/mL) and thyroid stimulating hormone (TSH) levels remained suppressed despite thyroxine withdrawal. A radioiodine whole-body scan (WBS) revealed an area of intense pelvic uptake (Figure 1A), which corresponded to a heterogeneous pelvic mass posterior to the uterus on fused single-photon emission computed tomography/computed tomography (SPECT/CT) images (Figure 1B). The SPECT/CT findings suggested a diagnosis of struma ovarii. Complementary pelvic magnetic resonance imaging depicted a lobulated multicystic pelvic mass with a solid component, probably originating from the left ovary (Figures 1C and 1D). Total hysterectomy was performed, revealing a mature teratoma with thyroid tissue (struma ovarii). Five months after surgical resection, the patient was treated with 7400 MBq (200 mCi) of I-131. A post-treatment radioiodine WBS revealed a cervical thyroidal remnant and a focal area of increased radioiodine uptake in the proximal diaphysis of the left femur, with no matching alteration on CT images (not shown). The TSH-stimulated thyroglobulin titer was 2.4 ng/mL (normal value, < 35.0 ng/mL). At month 3 of clinical follow-up, the patient had not presented any symptoms concerning the left lower limb and
the thyroglobulin titer remained within the normal range.

Images obtained in a radioiodine WBS are noisy and have low spatial resolution. It is therefore often difficult to achieve proper anatomical localization, essentially due to the high-energy characteristics of radioiodine (1). At our institution, we often resort to SPECT/CT to further evaluate cases with inconclusive findings on standard planar scintigraphic images. The role of SPECT/CT in the evaluation of patients with well-differentiated thyroid carcinoma has not yet been established, although a few studies have demonstrated its superiority in the localization and identification of metastatic lesions (2,3). In the case presented here, we believe that SPECT/CT played an important role in the correct anatomical localization of the focal pelvic uptake, as well as improving the
Designing text representations for existing data using the TextFormats Specification Language

TextFormats is a software system for the efficient and user-friendly creation of text format specifications, accessible from multiple programming languages (C/C++, Python, Nim) and the Unix command line. To work with a format, a specification written in the TextFormats Specification Language (TFSL) must be created. The specification defines datatypes for each part of the format. The syntax for datatype definitions in TextFormats specifications is based on the text representation; thus the system is well suited for describing existing formats. However, when creating a new text format for representing existing data, the user may choose among different possible definitions, based on the type of value and the representation choices. This study explores the definition syntax in the TextFormats Specification Language to be used for creating text representations of scalar values (e.g. string, numeric value, boolean) and compound data structures (e.g. array, mapping). The results of the analysis are presented systematically, together with examples for each type of value that can be represented, and usage advice.

TextFormats | Datatype definition | Parser | Text representation | Data format | Domain specific language | Software library | Format specification | File format | Data type | Text format | Tutorial

Correspondence: giorgio.gonnella@uni-goettingen.de

TextFormats (Gonnella, 2022) is a software library and a set of tools for defining text formats.
Although it was initially developed for the representation of bioinformatics formats, it is a generic software system which can be applied in a variety of fields. Once a format has been defined using the domain-specific language TFSL (TextFormats Specification Language), the TextFormats specification can be used for parsing the format, as well as for writing data in the format, using the APIs for several programming languages (Nim, Python, C/C++) or, in scripts, using the provided Unix command-line tools. In TFSL, various kinds of definitions are used to describe different types of textual data representations, as shown in Table 1. These definitions depend not only on the type of value but also on other characteristics of the data and its representation. For example, they depend on the set of possible values: for a string, one uses constant for a single value, values for a small set of values, and regex for all strings matching a regular expression. Also, different kinds of definitions can sometimes be used for the same set of strings (e.g. values, regex and regexes). For some compound data values, the kind of definition to use depends on whether and how the semantics and datatype of the composing elements are coded in the text representation (e.g. composed_of vs. labeled_list or tagged_list). Thus it is interesting to investigate the use cases of these different definition kinds, depending on the type of represented value. It is important to note that there is not always a one-to-one correspondence between the type of represented value and its text representation. For instance, the text representation "I" could represent the string "I" (e.g. as the first person singular pronoun in English) or the integer value 1 expressed as a Roman numeral, which is more commonly represented using Arabic numerals ("1").
This paper presents a systematic exploration of the different types of values that can be represented in portions of a text format and reports the corresponding TextFormats definitions required to accurately describe the data in a specification. For each type of value, it includes practical examples of TFSL definitions to illustrate the concepts presented.

Types of Data Values

In the context of text formats, various types of values can be represented. One important distinction to consider is whether a data point logically represents an indivisible, atomic unit or can be broken down into multiple components. In the former case it is considered a scalar value; in the latter, a compound value. Compound values consist of multiple components, each of which can be a scalar or itself also a compound value. Compound data types provide a way to organize data into a complex structure, allowing for the representation of structured, multi-layered information. For instance, a compound data type could consist of several numerical values, each representing a different data point, or of multiple components, each representing a different aspect of the data. The important feature of compound data types is that they allow multi-faceted information to be represented in a way that can be parsed and analyzed meaningfully. Thus the semantics and data type of each component must be known to the parser, either as an external convention or through metadata included in the text representation itself (e.g. tags).

Definitions for Scalar Values

Numeric values. Several kinds of datatype definitions are dedicated to numerical variables. In order to define numerical values, the following criteria are to be considered. First, whether the value is an integer or not. Second, what the range of the possible values is. Third, because they are internally represented differently in many contexts (e.g.
C), if a value is an integer, whether it is signed or not shall also be considered.

Any value. If every valid value is acceptable, as long as it fits in the range of the data type in the programming language or environment in which the data is used, then the predefined integer, unsigned_integer and float are used. Since they are predefined, they do not require a definition; the key (e.g. integer) is simply inserted in the specification in the position where the definition would be required. This is usually done in a compound datatype or an alias. The default is to include the limits in the accepted values, but these can be excluded using (min|max)_excluded. This is mostly useful for floating point values, since for integers it suffices to increase or decrease the limit by 1.

- "I": 1
- "II": 2
- "III": 3

In other cases (such as if any value in a range, or every numeric value, must be representable), the parsing or formatting of the values cannot be directly defined in TextFormats. The data type must then be defined as a string (e.g. by a regex) and handled by the calling code, as in the following example:

datatypes:
  num9: {regex: "^[MDCLXVI]+$"}

Boolean variables. Booleans are variables that can only contain one of two values: true or false. Their representation is usually a pair of strings such as "T" and "F", or "0" and "1".

Single representations. The following shows how to define a type for a boolean variable, with one string representation for each of the two decoded values (true or false). These representations can be provided using a values definition and a mapping. In some cases, it is easier to split a regex into multiple pieces, which are defined separately.
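Independent of the TextFormats API, the behaviour of a small values mapping, such as the Roman-numeral example above, can be sketched in plain Python. The function names here are illustrative only, not part of TextFormats.

```python
# Plain-Python sketch of the decoding/encoding behaviour of a `values`
# definition mapping a small fixed set of representations to decoded values
# (the Roman-numeral example above). Not the TextFormats API.
VALUES = {"I": 1, "II": 2, "III": 3}

def decode(text):
    try:
        return VALUES[text]
    except KeyError:
        raise ValueError(f"invalid representation: {text!r}")

def encode(value):
    # invert the mapping; assumes decoded values are unique
    return {v: k for k, v in VALUES.items()}[value]

print(decode("II"), encode(3))  # 2 III
```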
While the universe of matching strings remains the same whether a single regex or multiple regexes are used, multiple regexes have the advantage that each single regex can be more readable, and that each piece can be treated differently when the strings are mapped to given values:

datatypes:
  str5:
    regexes:

Empty strings. If the empty string shall also be handled, its value can be provided using empty (this option is available for any kind of definition). The option has the highest priority; thus, even if e.g. a regular expression matching the empty string is provided, the empty-string case is handled as defined in the empty option. E.g. in the following case, the empty string results in the decoded value 0:

datatypes:
  str6: {regex: "\d*", empty: "0"}

Structured strings. In some cases, although a value shall be decoded as a string and not further parsed into smaller elements, it has an internal structure. In this case it can be useful to create a definition (e.g. for a compound datatype, as explained below) and then let TextFormats know that the definition shall only be used for validation, but not for parsing, using the as_string: true option. For example, a definition of a string containing unsigned integers separated by '.' requires a relatively complex regular expression. The following definition is equivalent, but more readable:

datatypes:
  str8:
    list_of: {regex: unsigned_integer}
    splitted_by: "."
    min_length: 1
    as_string: true

Acronyms. By default, strings are just encoded as the string itself, i.e. parsing involves validation, but no modification of the value. In some cases a different string should be present in the text representation than in the decoded value: e.g. one could want to expand an acronym.
For this, a decoded mapping is used:

datatypes:
  str9:
    values:
      - "USA": "United States"
      - "UK": "United Kingdom"

If multiple encoded values are decoded to the same decoded value (always the case for regular expressions), then canonical encoded forms must be specified, so that the encoder knows which one shall be used. E.g.:

datatypes:
  str10:
    regexes:
      - "U(SA?|sa)": "United States"
      - "U[Kk]": "United Kingdom"
    canonical:
      - "USA": "United States"
      - "UK": "United Kingdom"

Definitions for Compound Values

There are several different types of compound data. Hereby we distinguish three cases, which are handled separately. It is worth noticing that the names of the data types for these kinds of compound values depend on the context, such as the programming language, and on the underlying data structure, i.e. how the data is stored in memory.

Compound Data Compatibility in TextFormats. Compound values can include other compound values as components. This hierarchical structure can be represented using a tree, with the depth potentially being indefinite and marked in the text representation using indentation or nested pairs of parentheses. However, for a format to be compatible with the current implementation of TextFormats, it must represent a regular language (with the possible exception of parts of the format handled by an external library, currently JSON). As a result, the definition tree in compound data types must be known at definition time, and circular definitions are not allowed in TFSL.

Lists, Arrays, Sequences, Sets. We consider here compound data values whose elements are ordered (the order may, but need not, be meaningful), but are all considered semantically equivalent; the element data type and the set of representable values do not depend on the position in the list. If the order matters, the compound values are often stored in an array or list data structure (e.g. a linked list) depending on the underlying data structure and the context (e.g.
programming language), and are thus called lists (e.g. Python), arrays (e.g. C, Python) or sequences (e.g. Nim, YAML). Values of this kind are described in TextFormats using list_of datatype definitions. The definition of compound values in which the semantics does not differ among the elements is done using the list_of key, as in the following example:

datatypes:
  list3: {list_of: {regex: "[A-Z]"}}

Element separators. Often the parsing of the single elements is made possible by a separator string between the elements, which does not occur in the elements themselves. This is specified using the option splitted_by:

datatypes:
  list4:
    list_of: unsigned_integer
    splitted_by: ","

Separator escaping. In some cases, however, the separating string can also be present in the elements themselves, e.g. escaped. In this case the separator option is used in the definition, and e.g. a regular expression is used for defining the elements:

datatypes:
  list5:
    list_of:
      regex: "(\\\:|[A-Za-z0-9_])*"  # allows : escaped by \
    separator: ":"

Given the definition above, for example, the string elem1:elem2:elem_3:elem\:\:4 would be parsed into the four elements elem1, elem2, elem_3 and elem\:\:4.

Fixed length elements. In case the elements of a list all have the same length, a separator is generally not necessary. However, if one is present, it may also occur in the element text itself, since there is no risk of confusion. In this case the separator option is used, e.g.: According to the previous definition, each element has size 3, thus it does not matter that the separator is possibly included in the elements. For example, the string 001:0:::002:2:1:112:::: would be parsed into the six elements 001, 0::, 002, 2:1, 112 and :::.

Enclosing strings.
In some cases, constant strings are present before the first and/or after the last element of a list, for example an opening and a closing bracket:

datatypes:
  list7:
    list_of: unsigned_integer
    splitted_by: ","
    prefix: "("
    suffix: ")"

This definition allows parsing a string representation like (1,2,3,4). Using a values definition, it is possible to decode given strings to predefined lists. In case multiple representations of the same value are given, the canonical option must define which one shall be used for encoding, as in the following example:

datatypes:
  list10:
    values:
      "a": ["a"]
      "1a": ["a"]
      "2a": ["a", "a"]
      "3a": ["a", "a", "a"]
    empty: []
    canonical: {"1a": ["a"]}

Sets and Multisets. Sometimes a different kind of collection is used when the order of the elements is not important, and different kinds of data structures are used for representing them in memory, such as hash tables. Such collections are available e.g. as sets in Python and hash sets in Nim. There is no special handling for sets in TextFormats; the elements of sets are regarded as a list. Thus, equivalence operations which disregard the order of the elements must be implemented externally. Similarly, the uniqueness of the set elements (vs. multisets) must be validated externally.

Heterogeneous lists. A one_of definition can be combined with a list_of definition to implement lists of elements which have different types, where the type does not depend on the positional order of the element and is not explicitly annotated by a key or typecode. Instead, the formatting of the element itself reveals the type. For example, the following defines a list containing either integers or single upper-case characters:

datatypes:
  list11:
    list_of:
      one_of:
        - integer
        - regex: "[A-Z]"
    splitted_by: ","

An example of a string which can be handled by the previous definition is: 1,-3,A,5,B,-2.

Mappings, Dictionaries, Associative arrays.
In another type of compound value, the semantics and data type of the elements can differ; different instances may or may not contain some of the elements, or may sometimes contain multiple elements of the same type. Such compound values are usually stored in open associative data structures, with different names and underlying data structures, such as dictionaries (Python), tables (Nim), hash tables (C), objects (JSON) or maps (YAML). In a text format, this kind of data can be represented in different ways. For example, the semantics and data type can be specified explicitly in the text representation, or be implicit in the format of the text itself. Depending on this, the kind of definition to be used in TextFormats differs (e.g. list_of with one_of elements, labeled_list or tagged_list). Semantics by format. If the semantics of the elements of a list is determined by the format, a list_of definition can be given, in which the element is defined using a one_of definition. This is the same case illustrated above under the paragraph "heterogeneous lists". In this case, which does not occur very often in practice, the result of parsing, and the data to be passed to the encoding function, must be a list. Thus some external preprocessing or postprocessing will be necessary to transform the data to or from a mapping. Key/value pairs. In many cases, a collection contains elements of different types, and the semantics of each element is given explicitly, alongside the value of the element. Thus, each element is present as a key/value tuple. Although a list_of could be used for this case as well, TextFormats offers a specialized kind of list definition for it, for when the set of possible keys and their associated data types are known in advance.
In such cases a labeled_list definition is used, as in the following example:

    datatypes:
      map1:
        labeled_list:
          rank: unsigned_integer
          name: string
        splitted_by: ";"

The set of possible keys and the datatypes of the values for each of the keys are given under the labeled_list key, as a mapping. An internal_separator string can be specified, separating the key from the value (the default is :). The internal separator cannot be empty and cannot occur in the keys, but it can occur in the values. This condition is generally met in formats which implement key/value lists. Single-instance keys. By default, names are allowed to be present multiple times in the list. For this reason, the element values are always given as lists in the decoded value. In some cases, all or some of the names can only be present once. This can be enforced by listing them under the single key:

    datatypes:
      map2:
        labeled_list:
          rank: unsigned_integer
          name: string
        splitted_by: ";"
        single: [rank, name]

Required keys. Also, by default, names may be absent from the set of elements. If some of the names must be present, they are listed under the required key:

    datatypes:
      map3:
        labeled_list:
          rank: unsigned_integer
          name: string
        splitted_by: ";"
        internal_separator: "="
        required: [name]

SAM-style tags. In some cases the values of a collection are each accompanied by a name and a typecode, i.e. they form value/name/typecode triples. A prominent example of this are SAM-style tags, which are often included in recent bioinformatics formats, e.g. GFA (GFA Format Specification Working Group, 2016, 2018) and VCF (Danecek et al., 2011), after their original definition for use in the SAM format (Li et al., 2009). The difference from the key/value case is that the name defines the semantics of the value, but not all names (differently from labeled value lists) must be defined in advance. Since the name is not necessarily predefined, the type must be given explicitly; thus a typecode is present in the text representation.
Each typecode is associated with a datatype definition. For this case, the tagged_list definition key is used, under which all available type codes and the associated datatype definitions are given as a mapping. An example of a tagged list definition is given here:

    datatypes:
      map4:
        tagged_list:
          i: integer
          f: float
        tagname: "[A-Z]"
        internal_separator: "."
        splitted_by: ";"

The definition given above can e.g. handle the string representation A.i.12;B.f.1.3, which is parsed into the two elements A with the value 12 and B with the value 1.3. Generalized tags. The valid names and their formatting are specified using a regular expression. The internal separator key has a default value (colon, :) and must be a non-empty string. It cannot be present in tag names and type codes (but can be present in values). This restriction is reasonable and is e.g. met in SAM tags. Predefined representations. Using decoding mappings and/or a default decoded value (see below) it is possible to decode given strings to predefined mappings/dictionaries. In case a data value has multiple string representations, the canonical one must be specified, which is then used for encoding. Objects, Structs. In this section we handle collections of values where each instance contains the same set of elements (with possible exceptions), representing different aspects of the data. Each of the elements has its own data type and semantics. This kind of compound value is represented by structs (C, Python) or instances of classes (Python, Nim). Other possible representations are those mentioned in the previous paragraph (mappings, hash tables), eventually adding validations to make sure that all and only the correct elements are present. In TextFormats, the description of this kind of data is performed using composed_of definitions. Under the definition key, a list of tuples is given, each one consisting of a name and a definition.
Note that this is a list (thus using the YAML list syntax) and not a mapping, since the order of the elements is important, as it defines which element is which:

    datatypes:
      obj1:
        composed_of:
          - first: unsigned_integer
          - second: float
          - third: {regex: "[A-Za-z0-9]"}
        splitted_by: " "

Enclosing strings. As for lists, composed_of definitions can include a prefix and/or a suffix option, which define enclosing strings (e.g. parentheses) before the first element and/or after the last element. Element separators. The splitted_by and separator options are used for describing how to separate the single elements of the compound value. These have the same usage already illustrated in the Lists section, i.e. splitted_by is used for separators which cannot occur in the element text, while separator is used otherwise, e.g. when the escaped separator can occur in the elements, or when the element size is recognized by their format, e.g. fixed-length elements. Multiple separators. In some cases, different separators are used between different pairs of elements. In this case, they can be specified as additional constant elements and hidden in the decoded dictionary using the hide_constants option. Optional separated elements. Some of the elements can be optional, i.e. sometimes absent from the sequence of elements. In case the sequence is split by a non-empty separator string, an empty element can be recognized by the presence of this separator. In this case the empty option is used in the element definitions. Optional internal elements. In some cases, an internal element can be missing (together with its associated separator, or when no separator is used) without ambiguity, e.g. because the following element of the sequence has a type that allows it to be distinguished from the optional element, or because the total number of elements changes depending on the presence or absence of the optional element.
In this case the user must provide multiple alternative definitions of the structure (i.e. with and without the optional element) using a one_of definition. Furthermore, in order to provide the same set of values for all instances of the object or struct, the implicit option can be used. Unions. In some cases an element of a format can be expressed in multiple different ways. In dynamically typed languages such as Python, any variable can store this kind of value. In C, such values could e.g. be stored as unions, and in Nim as variant objects. In TextFormats the type of such values can be defined using definitions of kind one_of, e.g. representing a value as an unsigned integer if it is >= 1, and otherwise as a floating point number. Note that the content of the one_of key is a YAML list; the order of the elements in the list defines the order of precedence of the definitions (the first which applies is used).

Conclusions

In this paper, the representation of different kinds of data in text formats, as specified using the library TextFormats, has been analysed. Thereby it was demonstrated that most types of values that can be used in programming languages such as C and Python can also be represented in TextFormats. In TextFormats specifications, the same type of value is sometimes represented using different kinds of datatype definitions. This is true for both scalar values and compound values. For example, in the examples above, boolean values are sometimes represented using definitions of kind values, sometimes regexes or even constant, depending on their representation in the format. Among the examples for compound values, e.g., collections of tagged elements are sometimes represented using list_of, but in other cases using the specialized list definition keys tagged_list and labeled_list.
The reason for this is that the TextFormats syntax for datatype definitions is oriented to the text representation and not to the type of the represented value. The systematic review of the definition types based on the type of value is particularly useful when defining a new format, and it complements the TFSL syntax manual included in the library documentation, which is more useful when a specification is written for a format which already exists.
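The two list-separator behaviors discussed above (splitted_by, for separators that cannot occur in the elements, and separator with fixed-length elements, where the separator may also appear inside the elements) can be illustrated with a small Python sketch. This is not the TextFormats implementation and the function names are invented for this illustration; it only mimics the decoding semantics described in the text.

```python
def decode_splitted(text, sep, cast=str):
    # splitted_by semantics: the separator never occurs inside an
    # element, so a plain split is sufficient
    return [cast(e) for e in text.split(sep)]

def decode_fixed_length(text, sep, elem_len):
    # separator semantics with fixed-length elements: consume elem_len
    # characters, then expect and skip the separator; the separator
    # character may therefore also appear inside the elements
    elems, i = [], 0
    while i < len(text):
        elem = text[i:i + elem_len]
        if len(elem) != elem_len:
            raise ValueError("trailing characters do not form a full element")
        elems.append(elem)
        i += elem_len
        if i < len(text):
            if text[i:i + len(sep)] != sep:
                raise ValueError("expected separator at position %d" % i)
            i += len(sep)
    return elems

# splitted_by example (as in definition list4):
print(decode_splitted("1,2,3,4", ",", int))       # -> [1, 2, 3, 4]
# fixed-length example from the text: six elements of length 3,
# although ":" also occurs inside some elements
print(decode_fixed_length("001:0:::002:2:1:112::::", ":", 3))
# -> ['001', '0::', '002', '2:1', '112', ':::']
```

This makes explicit why the two options must be distinct: a plain split on ":" would wrongly break the fixed-length elements 0::, 2:1 and ::: apart.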
Viral infections and their relationship to neurological disorders The chronic dysfunction of neuronal cells, both central and peripheral, a characteristic of neurological disorders, may be caused by irreversible damage and cell death. In 2016, more than 276 million cases of neurological disorders were reported worldwide. Moreover, neurological disorders are the second leading cause of death. Generally, the etiology of neurological diseases is not fully understood. Recent studies have related the onset of neurological disorders to viral infections, which may cause neurological symptoms or lead to immune responses that trigger these pathological signs. Currently, this relationship is mostly based on epidemiological data on infections and seroprevalence of patients who present with neurological disorders. The number of studies aiming to elucidate the mechanism of action by which viral infections may directly or indirectly contribute to the development of neurological disorders has been increasing over the years but these studies are still scarce. Comprehending the pathogenesis of these diseases and exploring novel theories may favor the development of new strategies for diagnosis and therapy in the future. Therefore, the objective of the present study was to review the main pieces of evidence for the relationship between viral infection and neurological disorders such as Alzheimer’s disease, Parkinson’s disease, Guillain-Barré syndrome, multiple sclerosis, and epilepsy. Viruses belonging to the families Herpesviridae, Orthomyxoviridae, Flaviviridae, and Retroviridae have been reported to be involved in one or more of these conditions. Also, neurological symptoms and the future impact of infection with SARS-CoV-2, a member of the family Coronaviridae that is responsible for the COVID-19 pandemic that started in late 2019, are reported and discussed. 
Introduction

Neurological disorders (NDs) are among the most significant public health challenges in today's society, and they are mainly associated with the aging of the population [1]. NDs are the leading cause of disability-adjusted life years (DALYs), with approximately 276 million cases [2]. The continuous dysfunction provoked by NDs triggers degeneration and consequent cell death in the nervous system [3]. Although neurological disorders have a multifactorial etiology, most of them have a strong genetic and environmental association [4]. Recently, some studies have also associated NDs such as multiple sclerosis (MS), amyotrophic lateral sclerosis (ALS), Parkinson's disease (PD), Alzheimer's disease (AD), Guillain-Barré syndrome (GBS), and epilepsy with viral infections [5][6][7][8][9]. Viral diseases are widely distributed, easily transmitted, and challenging to control. Thus, the hypothesis of viral agents as possible triggers of NDs makes them more impactful [10]. The viral etiology of some neuroinfections is well described in the literature, especially those related to neurotropic viruses such as poliovirus, coxsackievirus, and enterovirus 71 (EV71). However, our understanding of the relationship between other viral infections and the development of neurological diseases is still limited. In addition, current ND challenges include the lack of reliable biomarkers for early diagnosis and of effective preventive strategies and treatments [11]. In this context, this work relates NDs, such as AD, PD, GBS, MS, and epilepsy, to viral infections. Moreover, we discuss the possible neurological impact of SARS-CoV-2 infection, which is caused by the new coronavirus responsible for the current COVID-19 pandemic.

The relationship between neurological disorders and infectious etiology

The first report of a central nervous system (CNS) infection with a consequent ND was by Bowery et al. in 1992 [12].
Subsequently, enterovirus (EV) and human herpesvirus (HHV) infections were found to be associated with ALS [13], Japanese encephalitis (JE) virus and Influenza virus with PD [14], herpes simplex virus type 1 (HSV-1) and Chlamydia pneumoniae with AD [15], and Epstein-Barr virus (EBV), varicella-zoster virus (VZV), cytomegalovirus (CMV), HHV-6, and HHV-7 with MS [16]. Although studies have shown that NDs begin in the CNS, the brain-periphery relationship may influence the development and progression of these disorders [17].

Fig. 2: (a) A direct crossing may be possible when cells of the monocyte-macrophage/microglia lineage are infected by pathogens and carry them through the BBB, reaching the CNS. This mechanism is also called the "Trojan horse" mechanism because the microorganism eludes the immune system defense by using these cells to move from the bloodstream to the brain. The transport of pathogens to the CNS is favored by inflammation, which is typically observed in neurological disorders. During the inflammation process, inflammatory molecules are released, triggering the activation of infected leukocytes. The postcapillary venule is attacked by the infected leukocytes, which encircle the endothelial and parenchymal basement membranes. Next, these cells enter the CNS by crossing the BBB. Another mechanism used by pathological agents is to impair the BBB and reach the CNS directly, using the porous capillaries of the choroid plexus. In various neurological diseases, the BBB is damaged, which favors the entry of pathogens into the brain through the bloodstream. (b) Neurotropic viruses may enter the CNS through retrograde axonal transport. These pathogens infect the peripheral nerves that create a link from the skin and the mucosa to the sensory, motor, and olfactory neurons. In neuronal cells, viruses can replicate and infect adjacent cells. Source: adapted from De Chiara et al. (2012) [16].
These disorders may be caused directly, by infection of the CNS by specific pathogens, or indirectly, through the host response to the infection. In the case of direct damage, some pathogens can cross the intact blood-brain barrier (BBB), causing severe encephalitis or acute infections that can be fatal or progress to chronic diseases [18]. Also, aging can make the CNS more vulnerable to infectious agents due to changes in the BBB, increased oxidative stress, and lower energy production [19]. In the indirect-damage mechanism, various factors may be involved, such as the accumulation of protein aggregates, high levels of oxidative stress, alterations in autophagic mechanisms, synaptopathy, and neuronal destruction [16]. Fig. 1 illustrates the cascade of immune responses produced by the body against infections of the CNS and their deleterious effects. In addition to direct infection of the CNS via the blood and the BBB, other possible pathways involve monocyte-macrophage/microglia cells that can cross the BBB, or inter- and trans-neuronal transfer in peripheral neurons (Fig. 2A). Human immunodeficiency virus (HIV), for example, crosses the BBB by infecting blood leukocytes and, subsequently, microglia [20]. Although HIV is not capable of attacking neurons directly, the infection provokes an increase in inflammatory cytokines and viral proteins that indirectly harm these cells [16]. The neurotropism of some viruses is established through the nerves rooted in the mucosa and skin. Infection depends on the recognition of receptors on sensory, motor, and olfactory neurons and on retrograde axonal transport through axonal microtubules [21]. Certain viruses reach the brain through the olfactory system, which connects peripheral areas with the CNS. The trigeminal and vagus nerves may also be an entry point for some viruses in intranasal infections [22].
After entering the CNS, viruses spread from cell to cell by being released into the synaptic cleft or by merging with neighboring neurons. The reinfection of peripheral tissue is possible for some viruses, which travel through anterograde transport and are released into the synaptic cleft [23]. The spread of neurotropic viruses in the CNS is illustrated in Fig. 2B. Several viruses have been associated with NDs, as described below. The effects of viral infections on the pathophysiology of NDs are summarized in Table 1.

Neurological disorders and viral infections

Alzheimer's disease (AD)

AD, described for the first time in 1906, is the most common type of dementia, and in approximately 95% of cases, it occurs after 60 years of age. In young individuals, 13% of cases show an autosomal dominant pattern of inheritance. An increased amount of beta-amyloid (Aβ) is found in the brains of AD patients. This overproduction may be related to mutations of the genes encoding presenilins I and II (PSEN1 and PSEN2) and amyloid precursor protein (APP). In addition to these mutations, early-onset AD may also be related to mutations in the apolipoprotein E (apoE) and tau protein genes [24]. The etiology of late-onset AD is often associated with a complex synergy of factors, such as the susceptibility conferred by multiple genes (the E4 allele of the apoE gene, for example) and environmental factors [25]. ApoE is essential for repairing damage to neurons by redistributing lipids to axons and regenerating Schwann cells, restoring synaptic-dendritic connections [26]. One of the main characteristics of AD is the accumulation of Aβ peptide in the brain. Multiple forms of this peptide are derived from cleavage of APP, the expression of which increases during cell stress [27]. The homeostasis of the CNS depends on the levels of Aβ in the brain, which assist in vital processes such as synapsis, calcium homeostasis, neurogenesis, the antioxidant system, and metal ion capture [28].
An altered level of Aβ peptide leads to the formation of amyloid fibrils. In a cascade of events, amyloid fibrils trigger amyloid plaques and formation of neurofibrillary tangles (NFTs), causing a loss of synapses and neuronal death [29]. Another factor associated with AD is the tau protein, which contributes to the assembly and stabilization of microtubules [30] and is important in the regulation of plasticity and synaptic function [31]. Under physiological conditions, phosphorylation of tau proteins for binding to microtubules occurs in a balanced way. However, when they are hyperphosphorylated, tau proteins undergo conformational changes leading to the formation of NFTs, destabilization of associated microtubules, synaptic damage, and neurodegeneration [29]. In addition to amyloid plaques and NFTs, the presence of extensive oxidative stress and dysregulation of calcium homeostasis are also characteristic of an AD patient's brain [32]. Aβ can promote cellular calcium overload, inducing associated oxidative stress and formation of pores in the cell membrane [33]. Oxidative stress can increase Aβ production and tau hyperphosphorylation, promoting the onset and progression of AD [34]. Other events may influence the pathogenesis of AD, such as defective autophagy [35], mitochondrial dysfunction [36], synaptic dysfunction [37], and neuroinflammation [38]. AD causes several types of tissue damage, including brain atrophy, loss of neurons, and amyloid angiopathy [39]. Tests for the presence of microorganisms in the nervous system of patients with AD manifestations have yielded positive results for fungi [40,41], bacteria [15], and viruses such as CMV, HSV-1, HHV-6, and hepatitis C virus (HCV). Unlike the brains of young patients, postmortem examination of the brains of elderly people with AD has shown them to be positive for HSV-1 DNA [42]. Seropositivity for HSV has already been associated with the development of AD in other studies [43][44][45]. 
Reactivation of HSV-1 in the CNS has been suggested to be the main connection between HSV-1 infection and AD development. This reactivation triggers an inflammatory process, causing damage to the cells, along with formation of amyloid plaques and NFTs [7]. The HSV-1 glycoprotein B (gB) is 67% identical to the Aβ peptide. In an in vitro study, gB promoted the development of Aβ fibrils in primary cortical neurons, causing cytotoxicity [46]. Decreased Aβ clearance and the accumulation of amyloid plaques in AD can impair cell autophagy [47]. Neuroblastoma cells infected with HSV-1 also produce hyperphosphorylated tau protein [48]. ApoE seems to have a strong correlation with HSV-1 lip infection in the peripheral nervous system, with the E4 allele being present in 60% of those infected [42]. In mice infected with HSV-1, viral DNA concentrations in the brain were 13.7 times higher in apoE +/+ wild-type mice than in apoE -/- knockout mice. Also, HSV-1 infection induces the expression of cytokines and proinflammatory molecules that can cause oxidative damage [49].

[Table 1, only partially recoverable here, summarizes the effects of viral infections on the pathophysiology of NDs. Recoverable entries: epilepsy, tropism for glial cells [110]; GBS, antigen-antibody reactions, polyclonal B cell activation, and reactivation of a latent infection [155, 156, 162, 172]; MS, latency established by HHV-6A in oligodendrocytes, which may contribute to or even trigger an autoimmune reaction leading to myelin impairment, and impairment of myelin repair through infection of oligodendrocyte progenitor cells (OPCs); and SARS-CoV-2 (family Coronaviridae), which infects respiratory tract cells (most likely type II pneumocytes in the lungs, goblet secretory cells in the nasal passages, and absorptive enterocytes in the intestines), with suggested neurotropism due to high ACE2 expression in the brain, and reported encephalitis, seizures (or focal status epilepticus), meningitis, acute cerebrovascular disease, impaired consciousness, skeletal muscle symptoms, agitation, confusion, corticospinal tract dysfunction, GBS-like immune-mediated processes, MS-like demyelination, and cytokine-storm-associated acute necrotizing hemorrhagic encephalopathy and BBB disruption [195-203, 205, 211-213]. All neurological effects of SARS-CoV-2 infection described in the table are based on isolated cases or small groups of patients; further investigation and long-term monitoring are necessary to clarify its impact on neuronal function and on the development of neurological diseases.]

In human neuroblastoma cells infected with HSV-1, experimentally induced oxidative stress has been shown
to significantly increase the accumulation of intracellular Aβ, inhibit Aβ secretion, and potentiate the accumulation of autophagic compartments within the cell [50]. Another HHV related to AD is CMV. Studies have demonstrated that individuals with higher levels of CMV IgG have more significant cognitive decline and a higher risk of developing AD [51][52][53]. Also, 93% of brains with vascular dementia that were examined postmortem were positive for CMV [54]. CMV-specific CD8 + T cells produce increased amounts of proinflammatory IFN-γ and decreased levels of the anti-inflammatory cytokines IL-2 and IL-4, with a potential shift to a proinflammatory cytokine profile in elderly people [55]. Serum levels of CMV-specific IgG have been shown to be significantly associated with NFTs. An increase in IFN-γ was also detected in the cerebrospinal fluid (CSF) of more than 80% of subjects who were positive for CMV [56]. HHV-6 rarely causes serious CNS complications; however, its ability to establish latency in the brain, with possible reactivation under conditions of immunosuppression, may relate this virus to AD development [57]. HHV-6 has been found in the brain [58] and leukocytes of AD patients, being significantly associated with the development of the disease and the cognitive decline of these individuals [52]. It has been suggested that HHV-6 deregulates autophagy and activates a stress response in the endoplasmic reticulum in various types of cells, particularly in astrocytes. The reduction of autophagy increases the production of Aβ and activates the stress response in the endoplasmic reticulum, promoting hyperphosphorylation of the tau protein [59]. Studies have also suggested a relationship between HCV and AD. It has been proposed that this occurs through direct viral infection in the brain or through cerebral or systemic inflammation [60].
In the first hypothesis, HCV can infect monocytes/macrophages, cross the BBB, and provoke the secretion of large amounts of cytokines (TNF-α, IL-6), causing cytotoxicity in the brain tissue. In the second hypothesis, HCV activates the immune system, triggering excessive systemic or local inflammation [61].

Parkinson's disease (PD)

PD is a neurodegenerative motor disease, initially described by James Parkinson as "paralysis agitans" [62]. Approximately 10 million people are living with PD worldwide. Four percent of PD patients are under the age of 50, and the incidence of this disease increases with age [63]. An epidemiological study based on a North American population has suggested that, by 2030, over 1.2 million people will be living with PD [64]. This disease is characterized by motor changes, cognitive impairment, and autonomic dysfunction [65]. The damage caused by PD is related to the dopaminergic neuron degeneration that occurs in the nigrostriatal pathway. The reduction of striatal dopamine modulation is also responsible for disease signs [66]. These alterations happen not only in the substantia nigra (SN) but also in the dorsal motor nucleus of the vagus and in peripheral neurons [67]. Lewy bodies are formed mainly by α-synuclein, neurofilament proteins, and ubiquitin. It is suspected that presynaptic α-synuclein is the main protein involved in the formation of these bodies. Genetic or epigenetic factors may be responsible for their appearance in neurons. PD development is related to the protein aggregates formed inside nerve cells and to their location in the brain. That is, if Lewy bodies are located in the nigrostriatal pathway, they would be related to extrapyramidal manifestations; in autonomic ganglia, postural hypotension; in the limbic cortex, psychosis; and in the neocortex, cognitive decline [68,69].
A D620N mutation in vacuolar sorting protein 35 (VPS35) causes subcellular retromer complex dysfunction; therefore, it is believed that it may affect the pathogenesis of PD. An alteration in retromer cargo molecule trafficking, a reduction of cell survival, and alteration of α-synuclein processing were observed when the D620N mutation was present. The retromer complex is also used by viral and bacterial pathogens to aid in their assembly, replication, and movement within the cell and as a mechanism to avoid the destruction that may be triggered by the cell defense machinery [70]. After the H1N1 pandemic in 1918, the number of cases of post-encephalitic parkinsonism and lethargic encephalitis increased. Based on this fact, a "dual-hit hypothesis" was suggested regarding PD pathogenesis. It was proposed that microorganisms may enter the host via the intestinal mucosa and attack the nervous system. These neurotropic agents may infect the substantia nigra pars compacta (SNpc) and trigger neurodegenerative events [71]. Although rare, experimental evidence has shown that some influenza A viruses are neurotropic, moving into the nervous system following systemic infection [72][73][74]. H5N1 influenza virus was used to infect the CNS of mice. A continuing inflammatory response in the animals' brains was demonstrated after the viral infection, which may have induced degeneration of dopaminergic neurons [8]. The H1N1 influenza virus does not appear to be neurotropic in mice, suggesting that the peripheral immune response activated after an infection is probably responsible for the secondary inflammation observed in the CNS [75]. HHV infection is also associated with PD development. Elevated serological test values and the presence of serum inflammatory cytokines and α-synuclein support this theory. Moreover, studies suggest a relationship between viral load and the severity of PD symptoms [76]. 
Scientists have proposed a role for molecular mimicry between HSV-1 (UL42, residues 22-36) and α-synuclein (residues 100-114). In the membrane of SNpc dopaminergic neurons, this phenomenon may trigger aggregation of α-synuclein and subsequent neuronal degeneration. A similar mechanism is observed for EBV, with molecular mimicry between a repeat region in latent membrane protein 1 (LMP1) encoded by EBV and the C-terminal region of α-synuclein, inducing its oligomerization [77]. HSV-1 infection may be associated with secretion of TNF-α, which is known to be involved in PD pathogenesis. It has been reported that dopaminergic neurons are very susceptible to TNF-α, which may affect the cells' plasticity, and that neuronal death can occur in response to TNF-α binding to its receptors [78]. Parkinsonism symptoms were described in a patient who showed reactivation of an HHV-6 infection after transplantation. The brain injuries may indicate parainfectious cytotoxic changes, direct CNS invasion, or immunologically mediated mechanisms [79]. Immunological reactivation may also be related to the development of PD in CMV infection. Dendritic cells, which preferentially secrete proinflammatory cytokines, are present in higher numbers in patients with CMV and PD than in patients with CMV without PD [80]. In addition, the possible immunogens presented by these cells may be derived from dopaminergic neurons, triggering an autoimmune response to neuromelanin [81]. Studies suggest that patients who have previously been infected with HCV are more likely to develop PD. HCV may replicate in the CNS, and there is a higher prevalence of mental illness in chronic HCV patients than in the general population [82]. Dopaminergic neurotoxicity after HCV infection has been observed in co-cultured neuron-glial cells from rats.
Also, it has been suggested that HCV infection induces positive regulation of ICAM-1 (intercellular adhesion molecule 1) and the chemokine RANTES (regulated on activation, normal T cell expressed and secreted) [83]. Tissue inhibitor of metalloproteinases 1 (TIMP-1), which supports neuronal survival, is downregulated by HCV [71]. The entry and replication of this virus in the CNS may be facilitated by the high level of expression of HCV receptors in the brain microvascular endothelium [84]. Despite this, no correlation was found between HCV infection and PD development when more than one million patients were studied in the USA [85]. The discrepancy in the results has been attributed to the substantial difference in the geographic areas of the studies, considering the prevalence of the infection, the pathogenic profile of the genotype, the variability of extrahepatic manifestations, and the association with comorbidities. Patients infected with HIV may develop parkinsonian features. This movement disorder may be triggered by a cascade of events caused by the infection, such as basal ganglia dysfunction, BBB alteration, chronic neuroinflammation, and neurodegeneration [86]. Autopsies demonstrated signs of HIV in the brain, mostly in inflammatory infiltrates and glial cells, and a higher prevalence of α-synuclein in the SNpc [87]. DJ-1 regulates the production of reactive oxygen species (ROS) and dopamine transmission in neurons, whereas leucine-rich repeat kinase 2 (LRRK2) mediates neuroinflammation and neuronal damage. Studies suggest that HIV infection may influence DJ-1 and LRRK2 levels [88].
Epilepsy
Epilepsy is an ND characterized by the rapid occurrence of epileptic seizures due to abnormal or excessive brain/neuronal activity [89]. An individual can also be diagnosed with epilepsy if he or she experiences an unprovoked or reflexive seizure and has at least a 60% chance of developing another seizure in the next 10 years [90].
Approximately 20% of all epilepsy cases are caused by acute CNS insults, 11% by cerebrovascular accident, 6% by traumatic brain injury, and 4% by infections [91]. Changes associated with post-traumatic epilepsy (PTE) include hemosiderin deposition with an incompletely formed wall of gliosis [92] and persistent BBB disruption. A correlation is also found between late post-stroke seizures and BBB disruption [93]. Post-injury epilepsy develops most commonly in the temporal and frontal lobes [94]. PTE and some infections cause an initial lesion, and if it is outside of the temporal lobes, it may result in seizures due to mesial temporal sclerosis (MTS) [94,95]. After an injury, the latency time for the development of seizures may vary, suggesting possible variability in the mechanisms of epileptogenesis [91]. Surgical specimens from patients with epilepsy show common pathological features that may be relevant to the epileptogenic process, such as astrocytes activated at the BBB, inflammatory cellular infiltrates, extravasation of blood, severe injury, disruption of the BBB with encephalitis, and involvement of the frontal or temporal lobes. The majority of these features are associated with inflammatory responses [91,94]. Inflammation in the CNS may participate in the progression of epileptogenesis as well as in the induction of seizures [96]. The progression of seizures depends on several factors. This cascade of events includes exacerbated generation of inflammatory factors, such as prostaglandin E2 (PGE2), IL-1β, IL-6, and TNF-α, and activation of inflammatory mediators such as cyclooxygenase-2 (COX-2) and nuclear factor kappa B (NF-κB) [97]. Depending on its receptor, TNF-α may act as a pro-convulsant via TNF receptor 1 (TNFR1) or as an anti-convulsant through TNF receptor 2 (TNFR2) [98]. Similar to other proinflammatory cytokines, such as IL-1β and TNF-α, IL-6 signaling may activate NF-κB transcriptional signaling and induce the synthesis of PGE2 by COX-2.
These physiological changes assist in regulating immune and inflammatory responses [99]. The overexpression of TNF-α or IL-6 in mice leads to chronic inflammation in the brain, predisposing to seizures [100]. A great diversity of viruses has been associated with epilepsy [13,101,102]. One hypothesis suggests that common childhood viral infections can generate acute and chronic inflammatory processes in the CNS, which increase BBB permeability and neuronal excitability [103]. Acute seizures and epilepsy have been linked to HHV-6 infections, especially in children [104]. The proportion of HHV-6 infections associated with mesial temporal or temporal lobe epilepsy ranges from 9.1% to 55.6% [5,105]. However, many HHV-6 infections are not associated with epilepsy, and the association of this virus with ND is controversial [106][107][108][109]. HHV-6 has a tropism for glial cells, and as the nasal cavity contains oligodendrocyte progenitor cells (OPCs), the virus can replicate there and cause a significant increase in the production of IL-6, chemokine ligand 1 (CCL-1), and CCL-5 [110]. HSV-1 is the leading agent of viral encephalitis, with an incidence of 2-3 cases per million people per year [111]. Studies suggest that encephalitis caused by HSV-1 replication increases the likelihood of spontaneous seizures and epilepsy by approximately 20%. This is due to the involvement of the frontotemporal cortex, including the hippocampus, elevated CSF opening pressure, and signs of cerebral herniation [112,113]. When viral replication is activated during latency, HSV-1 can ascend through the trigeminal and olfactory nerves to the frontal and temporal lobes, spreading to other regions of the brain [114]. HSV-1 infections trigger an inflammatory response, recruiting activated leukocytes, which, when repeated continuously, may cause brain tissue damage and neurological sequelae [111].
Less commonly, CMV and EBV may also cause nonparaneoplastic autoimmune encephalitis, which has been related to late-onset epilepsy. Antineuronal autoantibodies were detected in 48 of 113 patients with epilepsy and suspected autoimmune encephalitis [115]. In a study of the relationship between epilepsy and congenital CMV infection, 37% of the patients developed epilepsy at approximately 20 months of age [116]. CMV infection may affect the fetus directly via virally encoded gene products that may impair vital cellular processes, such as the cell cycle, cellular proliferation, and apoptosis, or may induce inflammatory responses, trigger vascular injury, and promote evasion of host immune responses [117]. Increased expression of late CMV genes has been reported in individuals with intractable epilepsy, in addition to higher levels of CMV-IgG and CMV-IgM, highly sensitive C-reactive protein (Hs-CRP), and IL-6, suggesting increased viral replication and inflammatory responses in these patients [118]. It is known that several arboviruses can cause meningitis, encephalitis, and encephalomyelitis [119]. Verma and Varathanaj [120] reported a case of epilepsia partialis continua associated with dengue virus (DENV) encephalitis. Although seizures occur in approximately 47% of encephalitis cases caused by DENV, it is not possible to establish a causal relationship between encephalitis and the epileptic condition. According to Guabiraba et al. [121], there is currently no specific in vivo model that can demonstrate the relationship between epileptic manifestations and the pathogenesis of DENV infection. Trials were conducted in AG129 mice, which are deficient in interferon (IFN) types I and II and are highly susceptible to DENV infection [122]. When these animals were infected intraperitoneally with a neurotropic strain of DENV-2, 100% paralysis and lethality were demonstrated [123].
The prevalence and incidence of epilepsy and seizures among HIV patients are higher than in the general population. About 5 to 10% of HIV-positive patients in developed countries present with seizures or epilepsy [124]. It is known that HIV can invade neural tissue; however, there is still no proof of a relationship between the damage caused by the virus and seizures. Factors that may be associated with seizures and epilepsy in HIV-positive patients include the course of the disease and the establishment of acquired immunodeficiency syndrome (AIDS), opportunistic infections, and metabolic disorders [125]. Also, HIV infection can induce the formation of autoantibodies, causing neuronal death, with increased glutamate exocytosis and decreased reuptake. The resulting glutamate accumulation is associated with the activation of calcium channels stimulated by phosphorylation of N-methyl-D-aspartate receptors by kinases arising from the activation of IL-1 receptors, causing neuronal hyperexcitability, with a consequent decrease in the seizure threshold [126][127][128].
Multiple sclerosis (MS)
MS is an immune-mediated disease in which the myelin sheath of CNS neurons is injured and the communication between the muscles and the brain is progressively interrupted [129]. The International Advisory Committee on Clinical Trials of MS classifies four basic courses for this disease: clinically isolated syndrome, relapsing-remitting, secondary progressive, and primary progressive. The most frequent kind of MS is relapsing-remitting MS (RRMS) [130]. Patients may exhibit cognitive deficits, such as difficulties in processing information and impairment of working memory and attention, as well as impairment of balance, locomotion, and fine motor control [131,132]. Spasticity is a common symptom in MS patients. This condition is characterized by hyperreflexia, spasms, poor muscle tone, and pain, causing severe functional disability and compromising the quality of life of these patients [133].
The prevalence of MS varies worldwide, reaching 12.8 out of every 100,000 inhabitants in Asia [134], 290 in Canada, 203 in the United Kingdom, 189 in Sweden, and 3.2 in Ecuador [135]. Although the etiology of MS is still uncertain, immunological, genetic, and environmental risk factors have been proposed. Immunological factors that trigger MS are related to T cells and antibodies, which are autoreactive in these patients. Inhibitory molecules, which generally regulate adaptive immune system activation, are impaired and are not able to suppress uncontrolled immune responses in MS patients. As a consequence, the chronic inflammation process leads to further damage [136]. Family clustering and specific genetic characteristics are being examined as risk factors for MS. While the general population shows a 0.1% risk of recurrence, first-degree relatives of MS index cases have up to a 50-fold greater risk of developing the disease. Moreover, smoking, body mass index, and vitamin D are among the environmental factors that may influence the onset of this disease [137,138]. Although the etiology of MS is multifactorial, viral infections are cited as one of the disease's environmental risk factors. When the relationship between HHV-6 and EBV infections and MS was analyzed, higher titers of antibodies against these viruses and a higher seroprevalence were found in patients with the disease when compared to healthy paired-control subjects [139]. The first evidence correlating EBV infection and MS was the fact that MS patients' B lymphocytes carry and transport EBV antigens [140]. Subsequently, further evidence linked genetic susceptibility and EBV infection, as a higher risk of MS was found in individuals with infectious mononucleosis (IM) [141,142]. The indirect effect of EBV on MS onset may be related to the activation of silent human endogenous retrovirus W (HERV-W) [143].
In vitro and in vivo studies demonstrated that the envelope protein (Env) of HERV-W may trigger inflammatory responses and cause cytotoxicity, as well as cell death [144,145]. Another hypothesis suggests a cascade of events involving EBV, B cells, T cells, and inflammatory processes. In healthy seropositive individuals, the immune system manages to regulate the memory B cells that harbor the latent virus, so there are no further complications. However, in individuals who are genetically predisposed to MS, memory B cells can cross the BBB and trigger an inflammatory response in the CNS and, consequently, germinal-center-like structures. T cells may be activated at this point, and the infected cells, although latent and with limited viral gene expression, may act as antigen-presenting cells. After differentiation, some infected memory B cells can trigger the EBV replicative cycle and the production of virions. In diseases such as MS, microglia and astrocytes are chronically activated, causing neurotoxicity [146]. Myelin oligodendrocyte glycoprotein (MOG) is an essential glycoprotein involved in the myelination process in CNS nerves. MOG is also responsible for ensuring the structural integrity of the myelin sheath [147]. Changes in MOG have been experimentally associated with B cells infected with EBV, which convert the destructive processing of MOG into productive processing. This conversion facilitates the cross-presentation of the pathogenic MOG epitope (residues 40-48) to autoaggressive cytotoxic T cells [148]. Additionally, studies have shown that during primary EBV infection, this virus may induce an increase in BBB permeability, which allows pre-existing polyclonal antibody-producing B cells to penetrate the CNS. This event could explain the lower levels of EBV-specific IgG antibodies in the CNS compared to IgG produced against other viruses [149,150].
In comparisons of HHV-6A and HHV-6B in MS, HHV-6A was more prevalent than HHV-6B in serum and urine samples of patients with MS [151,152]. The first murine model of HHV-6-induced brain infection was developed by Reynaud et al. [153]. First, these researchers studied different transgenic mouse lines and their ability to express the receptor for HHV-6, CD46. Further results showed that HHV-6A, but not HHV-6B, triggered the expression of viral transcripts in primary brain glial cultures from CD46-expressing mice. HHV-6B DNA did not persist in the brain, decreasing rapidly after the infection, while HHV-6A DNA levels remained high for up to 9 months. Immunohistological analysis showed the infiltration of lymphocytes in the periventricular region of mice infected with HHV-6A. Moreover, this virus triggered production of proinflammatory chemokines such as CC-chemokine ligand 2 (CCL2), CC-chemokine ligand 5 (CCL5), and C-X-C motif chemokine ligand 10 (CXCL10). A recent study measured IgG reactivity against HHV-6A and HHV-6B immediate-early protein 1 (IE1A and IE1B) and showed a positive association between the IgG response against IE1A and an increased risk of developing MS in the future. In contrast, a negative association between the IgG response against IE1B and MS was demonstrated. Therefore, this study supports a role for HHV-6A in the etiology of MS by showing an increase in the serological response against the immediate-early protein of this virus [154]. Oligodendrocytes are myelin-producing cells that are targeted by the immune system of patients with MS. The latency established by HHV-6A in oligodendrocytes may contribute to, or even trigger, this unwanted autoimmune reaction, which leads to myelin impairment [155]. In patients with MS, in addition to the ongoing destruction of myelin, impairment in myelin repair by differentiating OPCs is observed [156].
Guillain-Barré syndrome (GBS)
GBS is characterized by dysfunction of the peripheral nerves, which suggests that immune and inflammatory mechanisms are involved [157]. Its main clinical manifestations are the absence of reflexes, paresthesia with sensory loss, and motor weakness [158]. Classification of GBS into subtypes depends on the underlying pathology, clinical presentation, and neurophysiological features. The most common subtypes include the following: acute inflammatory demyelinating polyradiculoneuropathy (AIDP), acute motor axonal neuropathy (AMAN), acute motor-sensory axonal neuropathy (AMSAN), and Miller Fisher syndrome (MFS) [159]. In GBS, antibodies and inflammatory cells produced in response to infections cross-react with epitopes on peripheral nerves and roots, leading to demyelination or axonal damage [160]. Macrophages initiate damage to the peripheral nervous system (PNS) by producing and secreting matrix metalloproteinases and nitric oxide. As a consequence, activated T cells release proinflammatory cytokines such as TNF-α [161]. The humoral response is initiated through activation of B cells, and antigen-antibody interactions can activate the complement system, resulting in membrane attack complex (MAC) formation and leading to nerve cell membrane damage and destruction [162]. Multiple antecedent and potentially triggering events have been reported. An association with infections has been established for Campylobacter jejuni, Mycoplasma pneumoniae, Haemophilus influenzae, and the viruses CMV, EBV, influenza A virus, and Zika virus [163]. Some genetic and environmental factors that affect the susceptibility of individuals to the disease have also been described [164]. Cases of patients with a combination of HSV-1 infection and GBS suggest the occurrence of molecular mimicry, with high serum anti-GQ1b IgG antibody titers causing inflammatory nerve damage [165][166][167].
Infection with HSV-1 could cause a change in the ganglioside composition of neuronal and glial cell surfaces, followed by the activation of autoantibodies in patients with antiganglioside antibodies [168]. Molecular mimicry is also proposed between CMV and GBS; CMV is the most frequent infectious etiology of GBS, described for the first time by Klemola et al. [169]. Patients with CMV who develop GBS have high levels of anti-GM2 antibodies in their CSF and serum. In these people, carbohydrate structures similar to the GM2 ganglioside may induce antiganglioside antibodies [170]. Also, autoantibodies against moesin, which is crucial for myelination, have been demonstrated in 83% of patients with CMV-GBS. This may be due to six consecutive amino acids that are identical in moesin and CMV phosphoprotein 85 [171]. Although studies have associated HHV-6 infection with GBS development, this theory is generally based on limited observations, such as significantly higher antibody titers to HHV-6 in GBS patients compared to control groups [172]. This persistence of HHV-6 antibodies in the serum can be due to a stronger antigen-antibody reaction or to polyclonal B cell activation. Reactivation of latent HHV-6 infection has also been considered, but the influence of HHV-6 in GBS etiology is still inconclusive due to the lack of experimental studies [162]. The neurological involvement of EBV is also unusual, but it should be treated as a post-infection disease due to the abnormal immunological response observed [173]. Grose and Feorino [174] described five cases of GBS with high levels of antibodies to EBV, even in the absence of IM. Multivariate analysis showed that of 154 GBS patients, 10% had serologic evidence of recent EBV infection [6]. It has been suggested that the virus has a predilection for B lymphocytes and that it activates polyclonal B cells with increased production of immunoglobulins [175].
Other studies have suggested that EBV may infect endothelial cells and trigger vascular damage or cause vessel inflammation mediated by the immune complex, which could trigger the development of GBS [162]. The envelope of the influenza A virus consists of a lipid bilayer containing several glycoproteins, such as neuraminidase (NA) and haemagglutinin (HA). Therefore, anti-glycolipid antibodies may be produced during influenza virus infection because of possible molecular mimicry between glycoproteins of influenza viruses and glycolipids localized in human peripheral nerves [176]. Infectious hepatitis has been associated with GBS etiology. The case of a patient with manifested distal paresis of both legs and arms with areflexia and paresthesia was studied by De Klippel and collaborators [177]. Scientists believe the disease onset occurred during the pre-convalescent phase of an acute HCV infection, when the level of liver enzymes was consistently and rapidly normalized and signs of fibrosis were found in this organ. GBS may also occur in patients with chronic HCV infection, albeit rarely [178]. The reactivation of the virus or its intense replication may trigger the development of GBS [179]. A case of severe GBS was related to chronic active HCV infection and mixed cryoglobulinemia (MC) [180]. Other findings, such as immune complex accumulation in the vascular endothelium and vasculitis over the nerve, may explain the relationship between the infection and GBS onset [162]. A few cases of peripheral neuropathy secondary to chronic HCV infection have been described. These cases were often associated with cryoglobulinemia or with anti-myelin-associated-glycoprotein (MAG) antibodies [181]. Scientific studies were performed on the relationship between DENV infection and GBS. The infection may directly influence the disease or trigger postinfectious autoimmune responses that might lead to GBS [171]. 
Several studies on DENV infection have demonstrated abnormal immune responses, including cytokine and chemokine production, complement activation, and immune cell activation. Shah [182] suggested that proinflammatory cytokines that participate in the immune response in dengue fever might play a causal role in the etiopathogenesis of GBS. This infection may generate a complex immune response, with high levels of TNF-α, IL-2, and IFN-γ, as well as an inversion of the CD4:CD8 ratio [183]. Also, autoimmune responses may be involved, mainly in the pathogenesis of the severe phase of dengue. Patients with dengue can produce antibodies that cross-react with platelets and endothelial cells. After DENV infection, antibodies against nonstructural protein 1 (anti-NS1) are generated. Studies suggest these antibodies may cross-react with endothelial cells, which play a crucial role in the development of neurological disease [184]. One of the proposed mechanisms for GBS in HIV-1-infected patients involves a direct action of an HIV-1 neurotropic strain on the nerves. Another theory is based on an autoimmune response, in which abnormal immunoregulation is followed by the formation of antibodies against myelin [185]. HIV can cause direct and indirect neurotoxic effects on the CNS and PNS. The relationship between GBS and the stage of HIV infection is also unclear. Some authors have characterized GBS as an indication of early HIV infection or seroconversion [186]. However, GBS has been reported in chronic HIV-1 infection cases and as a complication of immune reconstitution inflammatory syndrome in severely immunocompromised patients [187]. HIV-1 infection may alter the integrity of the BBB through the action of the viral proteins Tat, gp120, and Nef [188]. In a study using a murine model, exposure to the HIV-1 envelope protein gp120 caused swelling and increased TNF-α levels in the sciatic nerve trunk.
These findings suggest that HIV infection may cause nerve damage [189].
SARS-CoV-2, the virus responsible for the COVID-19 pandemic, and its neurological impact
SARS-CoV-2 (severe acute respiratory syndrome coronavirus 2) is the virus responsible for the disease COVID-19 [190], a global pandemic that started in late 2019 and within a few months affected 250 countries around the world [191]. Coronaviruses (CoVs) typically cause respiratory disease in humans; however, some studies suggest an association with neurological symptoms. Two different coronaviruses that caused epidemic infections in the past, SARS-CoV and Middle East respiratory syndrome-associated coronavirus (MERS-CoV), triggered neurological harm in isolated cases [192]. Patients infected by these viruses developed neurologic symptoms, such as neuropathy, myopathy, Bickerstaff brainstem encephalitis (BBE), and GBS, two to three weeks after the appearance of typical symptoms [193,194]. Causality cannot be proven, since these findings were reported in isolated cases with a small number of patients. On the other hand, some scientists have suggested that the neurological manifestations of MERS might have been neglected and underdiagnosed [193]. In Wuhan, China, a study was performed during the COVID-19 outbreak with 214 patients. The results showed that 36.4% of the patients with severe disease exhibited neurologic manifestations, such as acute cerebrovascular disease, impairment of consciousness, and skeletal muscle symptoms [195]. A study performed in Strasbourg, France, demonstrated that 84% of COVID-19 patients in intensive care units (ICUs) who had respiratory difficulties also showed neurological symptoms such as agitation, confusion, and signs of corticospinal tract dysfunction [196].
It has been suggested that these signs of neurological damage might be caused by severe hypoxemia and hypoxia, by an inflammatory process triggered by SARS-CoV-2 infection, or by virus infiltration and spread in the brain [197]. SARS-CoV-2 infection begins with the spike protein S1 binding to the host receptor ACE2 (angiotensin-converting enzyme 2). The human brain expresses ACE2 at a high level, which may allow the virus to invade the CNS [198]. Xiang et al. [199] reported the first confirmed case of encephalitis caused by SARS-CoV-2. The presence of this virus in the CSF of this patient was confirmed by genome sequencing. The first case of meningitis associated with COVID-19 was reported by Moriguchi et al. [200]. Although nasopharyngeal swabs obtained from this patient were negative, the infection was confirmed by viral RNA detection in the spinal fluid. COVID-19 triggers increased production of inflammatory cells and, along with them, high levels of inflammatory cytokines, which induce immune-mediated processes [201]; this is one of the proposed explanations for GBS symptoms [202]. Sedaghat and Karimi [203] reported the development of GBS in a 65-year-old male COVID-19 patient two weeks after he developed cough and fever. In another case study, the first GBS symptoms overlapped with the period of SARS-CoV-2 infection, making the investigators unsure about the causal connection between the two [204]. A patient with well-controlled post-encephalitic epilepsy was infected with SARS-CoV-2 and presented with focal status epilepticus in the early stage of the disease. The 78-year-old patient had been seizure-free for more than two years, and based on the temporal correlation of symptoms, it was suggested that SARS-CoV-2 might have triggered the seizures [205]. The relationship between COVID-19 and epilepsy is still unknown, and conclusions will depend on new reports and updates from clinicians [206].
A study of 90 brain autopsy samples from patients with NDs (mostly MS) and healthy controls showed that 48% of the samples contained human coronavirus RNA [207]. However, further studies should be performed to determine whether the presence of the virus in human brains is opportunistic or disease-associated. It has been suggested that SARS-CoV-2 infection may trigger demyelination similar to that of MS. Based on this, periodic neurological assessments, such as auditory brainstem responses and neuroimaging, should be carried out on recovered COVID-19 patients to follow up any signs of dysfunction [199]. When comparing COVID-19 with past viral pandemics, a concern about neuropsychiatric sequelae emerges. Previous outbreaks of virus infection have triggered long-term neurodegenerative effects such as encephalopathy, psychosis, demyelinating processes, and neuromuscular dysfunction weeks or even months after the patient's recovery [208]. Therefore, Troyer et al. [209] have also emphasized the need for long-term monitoring of patients who were once infected with SARS-CoV-2. Several neurodegenerative diseases, such as AD, PD, and MS, are related to high levels of cytokines/chemokines and other effects of chronic neuroinflammation [210]. In this way, the cytokine storm triggered by COVID-19 and BBB disruption could affect the CNS and cause the onset of these diseases [211,212]. However, further studies should be conducted to investigate the involvement of SARS-CoV-2 infection in neurodegenerative diseases [211].
Conclusion
In this review, we have discussed a number of studies that relate viral infections to the development of neurological disorders. It is important to consider that viruses are responsible for various epidemics and even pandemics, and some of them can cause irreversible damage to the nervous system. This work demonstrates that considerable attention should be given to the relationship between viral infections and NDs.
The inclusion of viruses in the etiology and diagnosis of diseases of the nervous system would have a positive impact on the management and treatment of disabling and potentially lethal complications. New studies that investigate the mechanism of action of the viruses in these pathologies should be encouraged, aiming mainly at the development of novel control and intervention therapies.
Cortical Specification of a Fast Fourier Transform Supports a Convolution Model of Visual Perception
Currently, the full extent of the role Fourier analysis plays in biological vision is unclear. Although we have examples of sensory organs that perform Fourier transforms, e.g. the lens of the eye and the cochlea, to date there is no direct empirical evidence for its implementation in cortical architecture. However, there does exist intriguing theoretical evidence that suggests a role for the Fourier transform in a primate's primary visual cortex (area V1) which emerges from recent developments in our knowledge of contextual modulation. This paper proposes a new Fourier transform and a specification of how this transform has a natural implementation in cortical architecture. The significance of this new Fourier transform and its specification in neural circuitry is that it provides a plausible explanation for previously unexplained observable properties of the primate vision system.
Introduction
Currently, the full extent of the role Fourier analysis plays in biological vision is unclear. Although we have examples of sensory organs that perform Fourier transforms, e.g.
the lens of the eye and the cochlea, to date there is no direct empirical evidence for its implementation in cortical architecture. However, there does exist intriguing theoretical evidence that suggests a role for the Fourier transform in a primate's primary visual cortex (area V1) which emerges from recent developments in our knowledge of contextual modulation. This paper proposes a new Fourier transform and a specification of how this transform has a natural implementation in cortical architecture. The significance of this new Fourier transform and its specification in neural circuitry is that it provides a plausible explanation for previously unexplained observable properties of the primate vision system.
1.0.0.1 The spatial response properties, such as orientation tuning and spatial frequency tuning, of neurons in area V1 have been known for some time (Schiller et al., 1976). For a while, it was generally accepted that these tuning functions of receptive fields are largely context-independent (De Valois et al., 1979). However, later research has demonstrated contextual influences from the region close to the receptive field (Sceniak et al., 2001); (Cavanaugh et al., 2002); (Bair & Movshon, 2004). Moreover, it has been found that this near-surround region of a receptive field can modify receptive field responses through suppression (Blakemore & Tobin, 1972) and by cross-orientation facilitation effects (Sillito & Jones, 1996); (Cavanaugh et al., 2002); (Kimura & Ohzawa, 2009). It has also been demonstrated that long-range contextual modulation is as robust a feature of neural function in area V1 as the extensively studied receptive field properties of this area (Lamme, 1995). Since that time, the evidence for long-range contextual modulation continues to grow, e.g. (Zipser et al., 1996); (Lamme et al., 1998); (Lee et al., 1998).
1.0.0.2 Concurrent with this research establishing the empirical evidence for contextual modulation has been research aimed at developing functional models of V1 that are consistent with the empirical evidence. In the early 1980s the concept of convolution was employed by David Marr (Marr & Hildreth, 1980) as a model that accounted for considerable observable properties of the human vision system. Since that time, further theoretical and empirical evidence has been mounting that supports such a model. In particular, it has been shown that response properties of neurons in area V1 are modeled by convolution of the input image with a family of Gabor functions (Sanger, 1988). Further research has demonstrated that the upper layers of area V1 are modeled well by a bank of Gabor filters (Grigorescu et al., 2003); (Huang et al., 2008); (Lee & Choe, 2003); (Ursino et al., 2004); (Tang et al., 2007). A related, but alternative, approach to the Gabor response functions to model simple and complex cells of V1 is the use of Gaussian derivatives (Huang et al., 2009). The common denominator of these contextual modulation models is long-range convolution. However, the issue of accepting these state-of-the-art computational models of contextual modulation as plausible functional models of Layer 2/3 of V1 thus becomes one of addressing the cortical convolution conundrum, more specifically: how are the large-scale convolutions required by such models accounted for in cortical architecture?

1.0.0.3 This paper's goal is to address the cortical convolution conundrum. In the process, we will propose a new fast Fourier transform, named Generalised Overarching SHIA Fast Fourier Transform (GOSH-FFT), and argue:

• GOSH-FFT has a natural implementation in the cortical architecture of visual area V1, and
• its implementation provides a plausible cortical mechanism to account for the convolutions implied by long-range contextual modulation.
The rest of this paper is organised as follows: Section 2 provides a description of key neurophysiological and mathematical concepts underpinning the main thrust of this paper. Section 3 describes the Generalised Overarching SHIA Fast Fourier Transform (GOSH-FFT). Section 4 proposes a new interpretation of the physiology of long-range intrinsic connections and reinterprets previously introduced physiological concepts to propose a plausible cortical implementation of GOSH-FFT. Section 5 discusses various implications of the novel material of this paper. Section 6 summarises and concludes the paper. Section 7 is an appendix that contains a MatLab-like pseudo-code description of GOSH-FFT and a mathematical proof of GOSH-FFT.

Neurophysiological background

The receptive field properties of neurons in area V1 can be measured with single-cell recordings (Hubel & Wiesel, 1974). The spatial and temporal frequency tuning preferences of neurons in V1 can also be measured. The neuron's response properties measured via the receptive fields resemble spatially localized filters with a preferred orientation and spatial frequency (Schiller et al., 1976); (Foster et al., 1985); (Mikami et al., 1986); (Edwards et al., 1995) or spatio-temporal energy (Basole et al., 2003); (Basole et al., 2006).

2.1.0.4 The orientation preference of neurons can be mapped using optical imaging techniques and neurological studies, which show good agreement with single cell measurements (Blasdel, 1992). Groups of neurons can act as a single unit: it has been experimentally shown that this single-unit activity of large groups of single cells is composed of 10^4 (first order approximation) interconnected cells even in one local V1 column (Siegel, 1990).
The advantages of modeling large-scale neuron activity which exhibits cohort macroscopic organisation were shown by (Sirovich et al., 1996). No model was presented, but organising principles for analyzing and viewing data were presented. These techniques have revealed an intricate structure to the orientation preference map in layers 2/3. A critical feature of these structures is the orientation pinwheel (local map), in which the orientation preference of the neuronal population changes through the entire range of 180 degrees of orientations over the 360 degrees of polar range of the circular pinwheel. At the centre of the pinwheel is the singularity, which is the point at which lines of iso-orientation preference meet (Obermayer & Blasdel, 1993).

2.1.0.5 The cortex is often called the iso-cortex because of the repeated structures of which it is comprised (Douglas & Martin, 1991). The smallest scale of structure is the minicolumn which, in the monkey, consists of 30 adjacent pyramidal cell shafts in layers 2/3 packed within a diameter of 23 µm (Peters & Sethares, 1996). There are approximately 20 cell bodies within a minicolumn in layers 2/3. The next largest physical scale in V1 at which repeated structures occur is the cortical column (Lund et al., 2003). The cortical column is 200 µm in diameter and is the scale at which long-range patchy connections terminate. A number of anatomical and functional markers repeat at a larger scale of 400 µm. These include the distance between CO blobs, the approximate periodicity of the orientation preference map, and the spatial scale of a single ocular dominance band (Lund et al., 2003). Orientation pinwheels are also of approximately this spatial scale. Each of these functional markers has been shown to be closely related to the system of patchy connections, in which like response preference connects to like, and the inter-patch distance in V1 has this same periodicity of 400 µm (Bartfeld & Grinvald, 1992); (Malach et al., 1993); (Bosking
et al., 1997). The largest spatial scale is V1 itself, which is some 4 cm wide in the monkey. There are of the order of 10,000 CO blobs in layers 2/3 of V1 (Murphy et al., 1998), and some 120 ocular dominance bands (Horton & Hocking, 1998), suggesting that the multiple response property maps with periodicity of 400 µm repeat around 10,000 times over layers 2/3 of V1. The input connections from the LGN arborize at a range of scales within layer 4C of V1. These inputs are arranged in block-like structures at the approximate scale of an ocular dominance band in layer 4C, but at a finer scale of approximately one column in layer 4C (Fitzpatrick et al., 1985). Further fine-scale arborizations occur at approximately the scale of one minicolumn in layer 4A. At the global scale of the cortex, inputs from the LGN are organized into a retinotopic mapping of the visual field (Rolls & Cowey, 1970); (Tootell et al., 1988). Connectivity into layers 2/3 of V1 occurs via a number of anatomical routes, apart from the well described feedforward connections from layer 4 (Fitzpatrick et al., 1985). Other routes of information transfer include extra-striate feedback (Rockland et al., 1994); (Rockland & Vanhoesen, 1994); (Angelucci et al., 2002), long-range intrinsic fibres within V1 (Blasdel et al., 1985), as well as feedback from V1 to the lateral geniculate nucleus (Marrocco et al., 1982); (Briggs & Usrey, 2007), and diffusion of visual signal in the retina (Kruger et al., 1975); (Berry et al., 1999).
2.1.0.6 The finest scale of axonal projections within V1 are the short-range intrinsic connections that provide connectivity between neurons up to the range approximated by an ocular dominance column width, or 400 µm. Within V1, long-range patchy connections extend for 3 mm within the supra-granular layers (Stettler et al., 2002) and long-range connections within the infra-granular layers extend for up to 6 mm (Rockland & Knutson, 2001). V1 also receives feedback from at least nine extra-striate areas (Rockland & Vanhoesen, 1994). Extra-striate feedback is considered by most researchers to be the primary source of long-range horizontal interactions measured in V1 (Alexander & Wright, 2006). These feedback connections are fast-conducting myelinated cortico-cortical fibres, and while they traverse distances of up to 10 cm in the monkey, the transmission delays are of the same order as intrinsic short and long-range axons within V1 (Bringuier et al., 1999); (Girard et al., 2001). These feedback connections are often in register with the intrinsic patchy system within V1, depending on the area of origin (Angelucci et al., 2002); (Lund et al., 2003). The middle temporal (MT) visual area will serve here as a brief illustration of the role of extra-striate feedback in V1. Receptive field sizes in MT are about 10 times larger than in V1 at all eccentricities (Albright & Desimone, 1987). Small focal injections of tracer into V1 indicate that the sizes of the feedback fields from MT to V1 are 21-fold larger than the aggregate receptive field size of the V1 injection sites (Angelucci et al., 2002). These feedback connections are an obvious substrate for the integration of global signals into V1 (Bullier, 2001). The local-global map hypothesis (Alexander et al., 2004) of V1 posits a non-local influence on the structure of local maps in V1. This hypothesis states that the global visual map in V1 is remapped to the local map scale in V1 in the form of a map of response properties, e.g. orientation,
and in the case of the monkey, spatial frequency preference and colour selectivity. These local maps tile the surface of V1 and each receives inputs from a large extent of the visual field. So rather than the local map being simply a map of primitive visual features that apply to a point in visual space, the local map is a map of primitive visual features as they arise in the organisation of the visual field and become relevant to a location in visual space. As the maximum range of contextual modulation in V1 approaches the size of the visual field (Alexander & Wright, 2006), the local organisation of response properties can be influenced by the functional properties of the global visual field.

Mathematical background

Fundamental to the Fourier transform proposed in this paper is the Spiral Honeycomb Image Algebra (SHIA). This is a data structure that embodies important properties of the natural visual constraints imposed by the primate eye (Sheridan et al., 2000). In particular, SHIA has a discrete, finite and bounded domain which mimics the distribution of photoreceptors on the retinal field. The underlying geometry of the SHIA is a hexagonal or rectangular lattice. In the former case, each hexagon has a designated positive integer address expressed in base seven. The numbered hexagons form clusters of super-hexagons of size 7^n. These self-similar super-hexagons tile the plane in a recursively modular manner. As an example, a super-hexagon of size 7^2 = 49 and its concomitant addressing scheme is displayed in Fig.
1(a). The importance of the SHIA addressing scheme is that it facilitates primitive image transformations of translation, rotation and scaling. One of these transformations that has proven to be of particular relevance to the Fourier transform is one that provides rotation and scaling. It is referred to as mapping M10 in the notation of SHIA. The critical observation to make in regard to the effect of M10 is that it produces multiple 'near' copies at reduced resolution of the input image. This transform will play a critical role in the proposed FFT.

2.2.0.8 The origin of what is now called a Fourier transform dates back to 1807, when Jean Baptiste Joseph Fourier defined the notion of representing a function as a trigonometric series. The discrete version of a Fourier transform (DFT) for a one-dimensional signal is defined as:

F(u) = \sum_{x=0}^{N-1} f(x)\, e^{-j 2\pi u x / N}    (1)

for u = 0, ..., N − 1, where f(x) is a real-valued function, N represents the number of elements in the signal and j^2 = −1. The effect of this transform is to capture the spatial relationships inherent in the signal f(x) and express these relationships as a sum of sinusoidal functions (frequency components). Similarly, the discrete version of an inverse Fourier transform (IDFT) for a one-dimensional signal is defined as:

f(x) = \frac{1}{N} \sum_{u=0}^{N-1} F(u)\, e^{j 2\pi u x / N}    (2)

for x = 0, ..., N − 1, where F(u) is the Fourier transform of the real-valued function f(x), N represents the number of elements in the signal and j^2 = −1. The effect of this inverse Fourier transform is to take a signal in the frequency domain back to the spatial domain.
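As a concrete check of these definitions, the following Python sketch (an independent illustration, not part of the original text) implements Equations (1) and (2) directly and confirms that the inverse transform recovers the signal and that the result agrees with a library FFT.

```python
import numpy as np

def dft(f):
    """Discrete Fourier transform, Equation (1): F(u) = sum_x f(x) e^{-j 2 pi u x / N}."""
    N = len(f)
    x = np.arange(N)
    u = x.reshape(-1, 1)
    return (f * np.exp(-2j * np.pi * u * x / N)).sum(axis=1)

def idft(F):
    """Inverse transform, Equation (2): f(x) = (1/N) sum_u F(u) e^{+j 2 pi u x / N}."""
    N = len(F)
    u = np.arange(N)
    x = u.reshape(-1, 1)
    return (F * np.exp(2j * np.pi * u * x / N)).sum(axis=1) / N

f = np.array([1.0, 2.0, 0.0, -1.0])
F = dft(f)
assert np.allclose(idft(F), f)          # the IDFT undoes the DFT
assert np.allclose(F, np.fft.fft(f))    # agrees with a library FFT
```

Note that the direct implementation above costs O(N^2) operations; the fast transforms discussed next reduce this to O(N log N).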
2.2.0.9 Prior to the invention of the digital computer, the Fourier series was employed as a purely analytic tool. However, since that time, the development of a class of computationally efficient algorithms, known as fast Fourier transforms (FFT), has meant the notion has become a useful computational tool (VanLoan, 1992). One of the most attractive computational properties of the FFT is its ability to process signals at higher resolution with a minimal increase in cost in complexity. Today, most of us benefit from fast Fourier transforms every day without even knowing it, as these algorithms power a vast range of electronic technology such as digital cameras and cell phones.

2.2.0.10 The relevance of a fast Fourier transform to this paper is its relationship to the notion of convolution. The convolution of two functions f(x) and g(x) is denoted by f(x) * g(x) and its discrete definition is

f(x) * g(x) = \sum_{m=0}^{N-1} f(m)\, g(x - m)    (3)

A well-known result to researchers in the field of signal processing is the Convolution Theorem, which relates convolution in the spatial domain to multiplication in the frequency domain. For two functions, f(x) and g(x), let F(u) and G(u) represent the Fourier transform of f(x) and g(x) respectively. The Convolution Theorem states that

f(x) * g(x) \Longleftrightarrow F(u)\, G(u)    (4)

In other words, the convolution of two functions in the spatial domain can be achieved by the multiplication of the functions in the frequency domain.
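The Convolution Theorem can be demonstrated numerically. The sketch below (an illustration under the paper's discrete definitions; note that the discrete theorem yields circular convolution, with indices taken modulo N) computes Equation (3) directly and via frequency-domain multiplication, Equation (4).

```python
import numpy as np

def circ_conv(f, g):
    """Discrete (circular) convolution, Equation (3): (f*g)(x) = sum_m f(m) g(x-m)."""
    N = len(f)
    return np.array([sum(f[m] * g[(x - m) % N] for m in range(N))
                     for x in range(N)])

f = np.array([1.0, 2.0, 3.0, 4.0])
g = np.array([0.5, 0.0, -0.5, 1.0])

direct = circ_conv(f, g)
# Convolution Theorem, Equation (4): multiply the transforms, then invert
via_fft = np.fft.ifft(np.fft.fft(f) * np.fft.fft(g)).real
assert np.allclose(direct, via_fft)
```

The frequency-domain route replaces the O(N^2) sum of Equation (3) with two fast transforms and a pointwise product, which is exactly the saving the paper later exploits in its cortical argument.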
Generalised Overarching SHIA fast Fourier transform (GOSH-FFT)

In this section we propose a new fast Fourier transform that, as we will see later, possesses the potential to be implemented in cortical architecture and thereby address the cortical convolution conundrum. Associated with SHIA, as described in Section 2.2, is a Cooley–Tukey type fast Fourier transform, named Generalised Overarching SHIA Fast Fourier Transform (GOSH-FFT). This novel fast Fourier transform employs the transform M10, as described in Section 2.2, as the critical mechanism that turns a Fourier transform into a fast Fourier transform.

3.0.0.11 Suppose an image is represented on a SHIA of size 7^n, where n = km for positive integers k and m. Then GOSH-FFT is given by:

For (i = 0 : k)
1. Apply M to the input;
2. Perform a discrete Fourier transform over a sequence of sub-images of size 7^m;
3. Apply the inverse of M^i locally.

A special case of GOSH-FFT was initially described in (Sheridan, 2007), with m = 1. The significance of the initial work was that it demonstrated the intrinsic connection between the Fourier transform and primitive image transformations of translation, rotation and scaling. It also turns out that another special case of GOSH-FFT, when n = 2m, will play a critical role in the core hypothesis of this paper. This special case, named Particular SHIA FFT (PaSH-FFT), is illustrated in Fig. 3. A complete statement of Algorithm 3.0.0.11 is written in MatLab-like pseudo-code and can be found in Section 7 along with a mathematical proof that GOSH-FFT delivers a Fourier transform.
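The structure of the PaSH-FFT special case (n = 2m: two rounds of small local DFTs separated by a redistribution) can be illustrated with the standard Cooley–Tukey decimation it belongs to. The sketch below is an analogue only: it uses ordinary array reshaping and twiddle factors rather than SHIA's spiral addressing and M10, but it shows how a length-7^2 transform factors into two rounds of length-7 DFTs.

```python
import numpy as np

def two_stage_dft(x, B=7):
    """DFT of a length-B**2 signal via two rounds of length-B DFTs
    (standard Cooley-Tukey decimation; an analogue of the PaSH-FFT
    special case n = 2m, not the SHIA/M10 transform itself)."""
    N = B * B
    x = np.asarray(x, dtype=complex).reshape(B, B)   # x[n1, n2] = input[B*n1 + n2]
    inner = np.fft.fft(x, axis=0)                    # round 1: B local DFTs over n1
    k1 = np.arange(B)[:, None]
    n2 = np.arange(B)[None, :]
    inner *= np.exp(-2j * np.pi * k1 * n2 / N)       # twiddle factors between rounds
    outer = np.fft.fft(inner, axis=1)                # round 2: B local DFTs over n2
    return outer.T.reshape(N)                        # X[B*k2 + k1] = outer[k1, k2]

x = np.random.default_rng(0).standard_normal(49)
assert np.allclose(two_stage_dft(x), np.fft.fft(x))
```

The reshape before round 1 and the transpose at the end play the role that the redistribution transform plays in GOSH-FFT: they rearrange which elements each small local DFT sees.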
Cortical implementation of contextual modulation

In Section 1, we reviewed state-of-the-art models of contextual modulation and concluded that these models implied the cortical convolution conundrum. We further motivate this conundrum by observing that, as a consequence of Equation 3, a convolution of the entire visual field requires every minicolumn in Layer 2/3 of area V1 to receive an input from every other minicolumn of that layer. As there are simply not enough connections to convolve the visual field in one cortical step, the signal must pass through a sequence of cortical steps before being output as a convolved value. With the cortical convolution conundrum thus fully formulated, in this section we will establish a specification of a sufficient sequence of steps to address the issue. This specification will unfold in three steps. First, we will discuss how the SHIA transform M10 manifests in cortical architecture. We will then employ this manifestation to demonstrate how neural circuitry accommodates PaSH-FFT. Lastly, we will show how the cortical manifestation of PaSH-FFT supports long-range convolution.

Cortical manifestation of M10

A critical component of the fast Fourier transform, PaSH-FFT, is the transform M10. Consequently, it is an imperative of our argument that the redistribution properties of M10 be accounted for in the neural circuitry of the visual system. To this end we now argue that the required effects of M10 are accounted for by the long-range properties of patchy connections. It has been argued that the orientation pinwheel comprises a unitary organisational structure or local map in layer 2/3 of area V1 (Hubel & Wiesel, 1974); (Bartfeld & Grinvald, 1992); (Blasdel, 1992). When four pinwheels are reflected about their common borders, a saddle point arises at the centre of the four pinwheels. See Fig. 4.
In the macaque, the preferred response properties of V1 neurons can be influenced by activity from a wide extent of the visual field. A review of contextual modulation in the monkey demonstrated contextual modulation in V1 from long ranges in the visual field (Alexander & Wright, 2006). The review was compiled from a number of experimental paradigms, including visual stimulation with long lines while the neuron's receptive field is occluded (Fiorani et al., 1992), surround-only textures (Rossi et al., 2001) and colour patches placed distally to the neuron's receptive field (Wachtler et al., 2003). It was shown that the maximum range of contextual modulation measurable in V1 approaches a large extent of the visual field relative to a neuron's receptive field size or the local cortical magnification factor. Some experimental paradigms, such as the curve tracing effect (Roelfsema & Lamme, 1998); (Khayat et al., 2004), relative luminance (Kinoshita & Komatsu, 2001), and texture-defined boundaries (Lee et al., 1998), show excitatory contextual modulation with 'tuning curves' that are flat out to the maximum distance tested. The functional connectivity that underlies this long-range contextual modulation in the monkey is likely to involve cortico-cortical feedback from higher visual areas working in concert with long-range intrinsic patchy connectivity. In the monkey, the feedback connections to V1 from higher visual areas incorporate inputs from a very large extent of the visual field (Angelucci et al., 2002); (Lund et al., 2003).

4.1.0.14 In the analysis that follows, the combination of patchy intrinsic connections and patchy feedback connections is therefore assumed to enable transfer of visual information at ranges approaching the global scale of the visual field. Moreover, we assume that the quantity and distribution of these connections are adequate to deliver the effects of transform M10 at the scale of the visual field.
Cortical manifestation of PaSH-FFT

The next step in accounting for global convolution in cortical circuitry is to explore how PaSH-FFT manifests itself in cortical architecture. The raw data, at the lowest level of PaSH-FFT, are complex numbers that must be multiplied and added. The first issue to address is to justify our assumption that the operations being performed by a neuron could be represented as arithmetical operations on complex numbers. Specifically, PaSH-FFT requires that a neuron can be regarded as a mechanism capable of representing and manipulating complex numbers in accordance with the arithmetical operations of addition and multiplication. There are many ways in which to interpret neuronal function in terms of complex addition and multiplication. The model presented by (MacLennan, 1999) is adequate for the purposes of this paper, where it is shown how the representation of complex numbers can be encoded as the rate and relative phase of axonal impulses. From this encoding, complex multiplication is associated with the strength of a synaptic connection as the signal passes through it, and complex addition is associated with the summing of the neuronal inputs. Thus at the lowest level of computation in our model, we assume that the operation being performed by a neuron can be represented as complex addition and multiplication.
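The encoding idea can be sketched in a few lines. The following is a toy illustration of the assumption just stated (a rate/phase pair carries a complex value; each synapse performs a complex multiplication by its weight; the soma performs the complex sum); it is not a simulation of the MacLennan (1999) model itself.

```python
import numpy as np

def encode(rate, phase):
    """Sketch of rate/phase encoding: a complex value carried as an
    impulse rate (magnitude) and a relative phase (angle)."""
    return rate * np.exp(1j * phase)

def neuron(inputs, weights):
    """Each synapse multiplies its input by a complex weight; the soma
    sums the weighted inputs -- a complex multiply-accumulate."""
    return sum(w * z for w, z in zip(weights, inputs))

z1 = encode(rate=2.0, phase=np.pi / 4)
z2 = encode(rate=1.0, phase=-np.pi / 2)
out = neuron([z1, z2], weights=[0.5 + 0.5j, 1.0j])

# The output is again a (rate, phase) pair: the magnitude and angle of the sum
rate_out, phase_out = abs(out), np.angle(out)
assert np.isclose(out, (0.5 + 0.5j) * z1 + 1.0j * z2)
```

Under this reading, a layer of such units computing weighted sums of a common set of inputs is exactly a complex matrix-vector product, which is the primitive the local computations of the next paragraphs require.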
4.2.0.15 In area V1, each neuron makes use of information available to it in real time. There is evidence that contextual information is projected to widespread regions in V1 in an anticipatory manner. Since the spatial changes in the visual field tend to be predictable from previous visual inputs, anticipatory contextual inputs can arrive in time to be integrated in an adaptive manner with ongoing feedforward input. In order to express the properties of widespread contextual integration in a more formal manner, however, we will use the mathematical convenience of assuming that each of the distinct mathematical processes to be described occurs in a step-wise fashion. This more constrained approach allows not only each distinct part of the process to be formulated, but also formulates the inter-relationships between the various sub-processes. Although it is claimed that this approach is appropriate for the purposes of this paper, it must be acknowledged that the question of how such "contextual integration" actually occurs in the neuronal system remains open.

4.2.0.16 At the finest scale of connectivity via short-range intrinsic connections, each neuron of a local map is treated as if it were connected to every other minicolumn of that local map. While this is not literally true, consideration of poly-synaptic interactions at this local scale, and the real-time, anticipatory nature of visual processing, means that it is a reasonable approximation of the functional connectivity. Consequently, we can assume that each neuron in a local map can sum the outputs of all other neurons in that local map which have been multiplied by unique complex numbers. We call such a collection of parallel computations a local computation. See Fig. 5, which is a schematic diagram of a local computation.
4.2.0.17 Although it is commonly accepted that the cortex has a massively parallel architecture, currently there exists no comprehensive model to describe these dynamics. The absence of such a model means that in any particular cortical process, we cannot be sure which aspects of the process are parallel and which are intrinsically sequential. We will employ the following notation to show how the inherently sequential steps of PaSH-FFT can be mapped into neural circuitry. Let the symbol ⊙ denote the composition of two local computations as follows: given arbitrary local computations A and B to operate on a signal in sequence, let

(s) A ⊙ B = ((s) A) B

Note that the operator is to the right of the input signal it operates on, which is enclosed in left and right parentheses ().

4.2.0.21 With these concepts in hand, we can now identify the sequential steps of PaSH-FFT. In this special case, the size of the input signal is the square of the size of the local computation and represents two iterations of GOSH-FFT, as described in Section 3.
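The left-to-right composition notation can be made concrete. The sketch below (an analogue, not the cortical model itself) builds a global 49-point DFT as a composition of stages of only two kinds: a permutation stage T, which stands in for the long-range redistribution that M10 is argued to provide, and a local stage F of independent 7-point DFTs on contiguous blocks, which stands in for a local computation. A pointwise twiddle stage W sits between the two local rounds; it could equally be folded into a neighbouring local stage.

```python
import numpy as np

B, N = 7, 49

def T(s):
    """Permutation stage: a stride (transpose) redistribution of the signal."""
    return s.reshape(B, B).T.reshape(N)

def F(s):
    """Local computation stage: independent length-B DFTs on contiguous blocks."""
    return np.fft.fft(s.reshape(B, B), axis=1).reshape(N)

def W(s):
    """Pointwise twiddle stage: fixed complex weights between the two DFT rounds."""
    n2 = np.arange(B)[:, None]
    k1 = np.arange(B)[None, :]
    return (s.reshape(B, B) * np.exp(-2j * np.pi * n2 * k1 / N)).reshape(N)

def compose(s, *stages):
    """(s) A ⊙ B ⊙ ... : apply the stages left to right, as in the text's notation."""
    for stage in stages:
        s = stage(s)
    return s

x = np.random.default_rng(1).standard_normal(N) + 0j
X = compose(x, T, F, W, T, F, T)      # alternating permute / local-DFT stages
assert np.allclose(X, np.fft.fft(x))  # equals the full global DFT
```

The point of the sketch is structural: a global transform can be written as a short left-to-right chain of redistribution stages and purely local stages, which is the shape the cortical argument of this section requires.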
The identification of the sequential steps also suggests the sequence of connections that the input signal must traverse. We now illustrate this with PaSH-FFT: given an input signal s, the application of PaSH-FFT would be expressed as follows:

(s) PaSH−FFT = (s) P ⊙ F ⊙ P ⊙ F ⊙ P    (5)

where P denotes the long-range redistribution (M10) computation and F a local discrete Fourier computation. Given the assumed neural parallelism, a count of the number of components on the right hand side of the equals sign in Equation 5 reveals that a Fourier transform of the entire visual field can be completed by the signal traversing a sequential path connecting five neurons. Likewise, an inverse Fourier transform can be delivered in cortical circuitry as follows:

(s) inversePaSH−FFT = (s) P ⊙ I ⊙ P ⊙ I ⊙ P    (6)

Cortical manifestation of convolution

We now progress to the issue of how convolution could be implemented in cortical architecture. To this end, we describe the various computational constraints imposed by the computational requirements of convolution and argue that the known cortical architecture satisfies these constraints.

4.3.0.22 The key to the solution of the convolution problem in the neurological domain is provided by the Convolution Theorem, the same one employed by numerous digital signal processing applications. This theorem was discussed in Section 2.2. The importance of the theorem is that the convolution of two functions in the spatial domain can be achieved by the multiplication of the functions in the frequency domain. The implications of this theorem for the cortical convolution conundrum are significant. In our model, the components of the Fourier transform of the function the input signal is to be convolved with are represented by connection weights. Then, once the input signal has been transformed to the frequency domain, the required convolutions can be performed by mere multiplications. In cortical terms, each component of the signal, in the frequency domain, must traverse a connection to one more neuron to achieve the desired multiplication. However, the resulting convolution, in the frequency domain,
must be transformed back to the spatial domain to complete the convolution. This is achieved with an inverse Fourier transform. Accordingly, the sequence of connections along the path that terminates in the output of a convolved value in the spatial domain is thus given by:

(s) convolution = (s) PaSH−FFT ⊙ W ⊙ inversePaSH−FFT    (7)

where W denotes the pointwise multiplication by the stored Fourier weights. It is assumed that each component of the input signal traverses parallel paths along the network. Thus, the net time cost to complete a convolution is equivalent to the time required for a component of the input signal to traverse a path connecting 10 neurons. This path is composed of five short-range intrinsic connections and five long-range connections.

Analysis

The plausibility of the cortical model of convolution proposed in this paper is fundamentally predicated on the assumptions made in its formulation. Consequently, we summarise these assumptions, along with the arguments offered to justify them, before we provide an analysis of the model's parameterisation:

1. The number of long-range patchy connections is adequate to achieve a redistribution of the global signal via transform M10. This was argued in Section 4.1 and relied heavily on a conclusion based on a review article reported in (Alexander & Wright, 2006).
2. A first order approximation of the number of minicolumns in a local map is 10,000. This was discussed in Section 2.1 and relied on the work reported in (Siegel, 1990).
3. The number of short-range intrinsic connections is adequate to consider each local map as being fully connected. This was discussed in Section 4.2 and relied on the work reported in (Siegel, 1990).
4. A first order approximation of the number of minicolumns in the global map is 10,000^2 = 100 million. This was discussed in Section 2.1 and was based on the work reported in (Murphy et al., 1998) and Assumption 2.
4.4.0.24 The first assumption is possibly the most critical, as it establishes the fundamental architectural relationship between the local and global maps and is essential to PaSH-FFT. The second and third assumptions implied that a Fourier transform of the portion of the signal represented in a local map would be completed by each component of the signal traversing one cortical connection, and that on completion of the first iteration of PaSH-FFT, the global signal consists of 10,000 local discrete Fourier transforms, each of which is at the scale of a local map. Then, on completion of the second iteration of PaSH-FFT, the 10,000 local Fourier transforms would be transformed into a global Fourier transform of size 10,000^2 = 100 million, which by the fourth assumption represents the size of the global signal. From this we are able to assert that the input spatial signal would be transformed to frequency space at a cost of the signal traversing a path connecting four neurons. With the signal in frequency space, we employed the Convolution Theorem to assert that, with each component of the global signal traversing one additional connection, the state of the signal would represent a convolved signal in frequency space. This assertion was predicated on the assumption that the weight of each of these last connections represented the Fourier weight of the appropriate Gaussian. The final step was to transform the convolved signal back from frequency space to the spatial domain. This was achieved with the inverse PaSH-FFT, which would be completed at the additional cost of the signal traversing a path connecting a further five neurons. Putting these three steps together, we arrived at a total path length of 10 connections for the global input signal to be transformed into a representation of global convolution of the visual field. We also note that this analysis accounted for a single global convolution of the input signal. However,
there will be many global convolutions required, possibly up to one for every orientation preference and spatial frequency preference represented in a local map. Although the input spatial signal needs only to be transformed into the frequency domain once, each distinct convolution would require a distinct set of parallel paths to transform the signal back into the spatial domain. Consequently, the multiple convolutions would not necessarily result in a longer path. Accordingly, we assert that the transform PaSH-FFT, with appropriate parameterisation, would deliver a global convolution of the visual field. Moreover, this output signal is generated within the required time constraints imposed by observed contextual modulation. Given our assumptions, the lowest number of iterations required to complete a Fourier transform is two. Consequently, 10 represents the length of the shortest path (see Equation 7) possible to deliver a global convolution via PaSH-FFT.

Discussion

The signal processing literature describes many different types of fast Fourier transforms (FFT). Although any one of them represents an alternative candidate to PaSH-FFT, the problem to address is accounting for how they might be implemented within the known constraints of cortical architecture. All fast Fourier transforms need to rearrange components between their intermediate steps of multiply and add. PaSH-FFT derives its rearrangements of components with the transform M10 which, as argued, is compatible with the distribution and quantity of long-range cortical connections. If any other FFT were to be substituted for PaSH-FFT in the model, one would need to account for the rearrangement phase of that FFT within the known connectivity of area V1.
5.0.0.25 Another issue worthy of some discussion pertains to the Fourier transform and the absence of empirical evidence that would irrefutably demonstrate its cortical implementation. Part of the explanation for this lack of evidence may be provided by the role the Fourier transform plays in the vision process as suggested by this paper. That is, PaSH-FFT was shown to be a means to an end (convolution), not the end itself. Consequently, the question of finding, through empirical experimentation, neurons whose measured response properties closely model the profile of a Fourier transform may remain unanswered for some time to come.

5.0.0.26 The sequential steps of the proposed model would also need computation-like synchronisation or state update. Synchronisation can be provided by considering "Small World" relationships. (Gao et al., 2001) have shown that a "Small World" network needs only a small fraction of long-range couplings to obtain a great improvement in both stochastic resonance and synchronisation in network connectivity of bistable oscillators. We suggest that the known topology of the visual cortex (Zeki, 1993), if considered as a "Small World" network, can provide the foregoing benefits. They would be consistent with the long-range and short-range connectivities of V1 to retinal neurons, which have the required bistable oscillator condition provided by on-centre or off-centre neurons' responses to light and dark, including those with colour opponency properties. The long and short-range selectivity for connections can be dynamic, based on the neuron threshold levels and spatial frequency channels (Dudkin, 1992). The system updates a neuronal state only when new information indicates a change in the input signal.

5.0.0.27 The cortical implementation of PaSH-FFT was discussed in Section 4.2, where it was argued that the known connectivity of area V1 was sufficient to support its cortical implementation.
It was then argued that this implementation could deliver the required convolution in a 'small' number of sequential steps. However, the argument did not rule out the possibility of an alternative mechanism that would deliver the required convolution in fewer steps than PaSH-FFT. It would appear that, without a sufficiently developed model of the brain's parallelism, it is unlikely that a mathematical proof of a lower bound for the minimum number of sequential steps could be produced. Currently, the only bound that we can be sure of is that the required convolution could not be completed in one step. The question of determining the minimum lower bound remains open.

5.0.0.28 The role of the frequency domain was at the heart of the solution to the cortical convolution conundrum proposed in this paper. However, the possibility of performing the convolution in the spatial domain, without resorting to the frequency domain, cannot be ruled out by any argument presented in this paper. It is unclear, though, how this could be accomplished without resorting to a highly asymmetric model of the distribution of the connectivity of long-range connections. In any case, the search for an explanation of how the dynamic reconfiguration implied by the analysis of this paper is actually accomplished is likely to provide many different conjectures along the way. One possible avenue in this endeavour might be provided by further tracer experiments such as those reported by Angelucci et al. (2002).
Summary and conclusion

This paper reviewed the evidence for long-range contextual modulation and concluded that it implied cortical convolution at the scale of the visual field. This resulted in the need to address the problem of how such long-range convolution could be accounted for with known cortical connectivity and within known time constraints. The paper proposed a solution to the problem that emerged from a mathematical analysis of cortical connectivity to account for the implied constraints of long-range convolution. In particular, it was argued that the known distribution of the long-range patchy connections and extrastriate connections is adequate to provide the means by which the global visual signal can be transformed into frequency space, where the convolution can be performed. The main thrust of the argument was that PaSH-FFT:
• represents a plausible cortical mechanism to account for long-range contextual modulation;
• suggests a theoretical explanation of how the brain might be wired to achieve large-scale Fourier analysis;
• opens up the possibility of explaining other cortical processes via frequency-space computations.
It is the conclusion of this paper that the processing of the visual signal in the frequency domain via a fast Fourier transform plays a fundamental role in primate vision.

Appendix

In this appendix, we present the pseudo-code for GOSH-FFT and a mathematical proof of GOSH-FFT.

Appendix A

This section presents a formal statement of GOSH-FFT in MatLab-like pseudo-code. Notation:
x = complex array specifying the input signal
base = 7^α, where α is an integer greater than zero
n = 7^β, where β/α is an integer greater than zero

Appendix B

This section presents the mathematical proof of GOSH-FFT. Notation: the symbol % will be employed to mean modular arithmetic. Let B = 7^m, where m is a positive integer, and N = B^p, where p is a positive integer. Let M denote the compound transform M10^m from SHIA.
Fig. 1. Displays the two-level addressing scheme of SHIA: (a) hexagonal and (b) rectangular. In the latter case, each rectangle has a designated positive integer address expressed in base five. An example of this addressing scheme is displayed in Fig. 1(b).

Fig. 2(a) displays an image represented in a four-level SHIA, of size 7^4 = 2401. Fig. 2(b) represents the effect of applying M10^2 = M100 to this image.

Fig. 2. Displays (a) an image of a duck represented on a four-level SHIA; (b) the result of applying the SHIA transform M10 twice to the image displayed in (a). There are four observable effects: 1) multiple near copies of the input image (a); 2) each copy is rotated by the same angle; 3) each copy is scaled by the same amount; 4) applying M10 twice to the image displayed in (b) results in the image displayed in (a).

Cortical Specification of a Fast Fourier Transform Supports a Convolution Model of Visual Perception (www.intechopen.com)

Fig. 3. Displays the results of applying the special case of GOSH-FFT, that is PaSH-FFT, to the image of Fig. 2(a), with n = 4 and m = 2. The four sub-figures display intermediate results of PaSH-FFT: (a) on completion of the first iteration of PaSH-FFT applied to Fig. 2; (b) the Fourier transform on completion of the second iteration; (c) on completion of the first iteration of the inverse PaSH-FFT; (d) the inverse PaSH-FFT on completion of the second iteration.

[…] between columns of Layer 2/3 and similarly patchy extra-striate feedback connections to area V1.

Fig. 4. Displays a schematic diagram of the pinwheel-like structures of visual area V1, extracted from Figure 10, page 43, of Bruce et al. (2003).
Fig. 5. Displays a schematic diagram of a computational unit. The circles represent neurons and the straight lines connecting the circles represent cortical connections. Each neuron depicted at the top of the figure outputs a value x_i. The neuron depicted at the bottom of the figure inputs the sum of each x_i multiplied by weight w_i.

If the input signal is in the frequency domain and the weights are associated with a set of inverse primitive roots of unity, then the resulting local computation is an inverse Fourier transform, denoted I (see Equation 2 for a definition of an inverse Fourier transform). If the input signal is in the frequency domain and the weights represent Fourier components, then the resulting local computation is a convolution in the frequency domain, denoted C (see Equations 3 and 4). Table 1 provides a summary of this notation.

The implementation of PaSH-FFT in cortical architecture described here is a highly simplistic model of the parallelism inherent in the cortex. The model employed did not take into account at least two well-accepted features of this parallel architecture. First, the system itself somehow synchronises the flow of the signal. Second, the cortex does not need computation-like synchronisation or state update.

[…] these long-range connections facilitated the transformation of the signal into and out of the frequency domain via a new fast Fourier transform named PaSH-FFT. A mathematical proof of the most general form of this FFT, GOSH-FFT, was provided in the appendix along with MatLab-like pseudo-code to facilitate the implementation of GOSH-FFT in computer software.

6.0.0.29 It was shown that, to a first-order approximation, a cortical implementation of PaSH-FFT could account for the large-scale convolution implied by known models of contextual modulation.

Let f^0_0, ..., f^0_{n−1}, ..., f^{B−1}_0, ..., f^{B−1}_{n−1} denote a sequence of N points in the input signal. The proof is by induction
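The computational unit of Fig. 5, a plain weighted sum of inputs x_i, is all that is needed for each local operation once the weights are chosen as roots of unity. A numpy sketch (toy size N = 8 and random input are assumptions for illustration):

```python
import numpy as np

N = 8
x = np.random.default_rng(2).standard_normal(N)  # outputs x_i of the upper neurons

# One "computational unit" per frequency q: its weights w_i are powers of a
# primitive root of unity, and the unit simply outputs sum_i w_i * x_i.
w = np.exp(-2j * np.pi * np.outer(np.arange(N), np.arange(N)) / N)
F = w @ x            # the N weighted sums together form the DFT of x

assert np.allclose(F, np.fft.fft(x))

# With inverse roots of unity (scaled by 1/N) the same unit computes the
# inverse transform, denoted I in the text.
w_inv = np.exp(2j * np.pi * np.outer(np.arange(N), np.arange(N)) / N) / N
assert np.allclose(w_inv @ F, x)
```

Nothing beyond the sum-of-weighted-inputs primitive is used; the choice of weight set alone selects between the operations F, I, and (with Fourier-component weights) C.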
on p. When p = 1, GOSH-FFT is simply a DFT. Assume that GOSH-FFT computes a Fourier transform for all levels less than p. M^{−1}(f^{x%B}_{x/B}) = f^0_0, ..., f^0_{n−1}, ..., f^{B−1}_0, ..., f^{B−1}_{n−1}. We now have B sub-signals, each of which is composed of n points. Then, by the induction hypothesis, we can apply GOSH-FFT to obtain B individual transforms of the B sub-signals. The rearrangement maps index u to (u/B) + (u%B)n, giving the sequence 1, e^n, ..., e^{(B−1)n}, e^1, e^{n+1}, ..., e^{(B−1)n+1}, ..., e^{n−1}, e^{2n−1}, ..., e^{Bn−1}. Perform a local DFT on each of the n groups of B points. The general term is f^{(u/B)%N} e^{(q(u/B)+((u%B)n))%N}, which, summing over r, becomes f^{(u/B)%N} e^{((u%B)rBn)%N} e^{(q((u/B)+(u%B)n))%N}.

Table 1 provides a summary of this notation. At the next higher scale of connectivity, each local map is assumed to have access to the ongoing activity of every other local map via long-range patchy connections and striate-extrastriate interactions. These provide the means by which the results of local computation can be transported to another local map as input for a further local computation. We denote such a projection as P to represent the class of transformations (as described in Section 4.1).
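The inductive structure of the proof, splitting the N = B·n points into B sub-signals, transforming each, and recombining with a local B-point DFT and twiddle factors, is the generic radix-B decimation-in-time recursion. The sketch below is not GOSH-FFT itself (it lacks the SHIA transform M that supplies the rearrangement step) but follows the same induction, checked against a direct DFT for the paper's base B = 7:

```python
import cmath

def radix_b_fft(x, B):
    # Recursive decimation-in-time FFT with radix B; len(x) must be a power of B.
    N = len(x)
    if N == 1:
        return list(x)
    # Split into B sub-signals of n = N // B points each (x[r::B]) and
    # transform each one — the induction hypothesis of the proof.
    subs = [radix_b_fft(x[r::B], B) for r in range(B)]
    n = N // B
    out = [0j] * N
    for q in range(N):
        # Recombine: a local B-point DFT with twiddle factors e^{-2*pi*i*q*r/N}.
        out[q] = sum(subs[r][q % n] * cmath.exp(-2j * cmath.pi * q * r / N)
                     for r in range(B))
    return out

# Check against a direct DFT for B = 7, N = 7**2 = 49.
N, B = 49, 7
x = [complex(i % 5, 0) for i in range(N)]
dft = [sum(x[k] * cmath.exp(-2j * cmath.pi * q * k / N) for k in range(N))
       for q in range(N)]
fft = radix_b_fft(x, B)
assert all(abs(a - b) < 1e-8 for a, b in zip(dft, fft))
```

With N = 7^p this recursion bottoms out after p levels, matching the claim that two iterations suffice for the two-level case.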
2019-02-13T14:05:16.677Z
2012-04-25T00:00:00.000
{ "year": 2012, "sha1": "a75386b2667d998cb6cbaa3512606fa49e7a1b4f", "oa_license": "CCBY", "oa_url": "https://www.intechopen.com/citation-pdf-url/36429", "oa_status": "HYBRID", "pdf_src": "MergedPDFExtraction", "pdf_hash": "057dfef3854125eeae15fc655b3d6a11797efae9", "s2fieldsofstudy": [ "Biology" ], "extfieldsofstudy": [ "Computer Science" ] }
1873125
pes2o/s2orc
v3-fos-license
[Figure: The tidy microtubule lattice of a normal muscle (left) turns into a tangle when dystrophin is missing (right).]

Bak and Bax are killers that can dispatch a cell in no time once they switch on. Mérino et al. have discovered that two different explanations for the activation of these apoptosis-promoting proteins are both partly right. Once Bak and Bax flip on, the cell is done for. Mitochondria begin to leak, spilling apoptosis-stimulating molecules that eventually cause cell death. Proteins that carry the BH3 domain, like Bim, trigger apoptosis, but researchers have clashed over how these proteins work. Some scientists argue that certain BH3 proteins turn on Bak and Bax by direct binding. Other researchers support an indirect activation model, in which BH3 proteins neutralize pro-life molecules such as Bcl-2 and Bcl-xL, which normally suppress Bax and Bak. So far, in vitro studies have been inconclusive. Mérino et al. performed the first in vivo study on Bim, engineering mice to produce the protein with various modifications to its BH3 domain. Bim normally controls blood cell homeostasis, so the researchers used white cell counts and spleen weight as gauges of cell suicide. If the indirect hypothesis is correct, you'd expect that mice carrying Bim versions unable to grab and switch off all the pro-survival proteins would show less apoptosis than normal. But if only indirect activation is important, you'd expect that Bim variants that can neutralize all of the pro-survival molecules but can't activate Bax would have normal levels of apoptosis. However, cell death decreases in both animals. The researchers conclude that both routes are necessary to explain how Bim engages Bax and Bak.
The findings might help refine cancer drugs that emulate BH3-carrying proteins.

Dystrophin makes a new connection

Dystrophin, the protein absent in Duchenne muscular dystrophy, is better at networking than researchers realized. The protein's links to two kinds of cytoskeletal components are well known, and now Prins et al. demonstrate that dystrophin also fastens to microtubules. With its fragile plasma membrane, a muscle cell lacking dystrophin can die from mechanical stress. Although dystrophin is a middleweight in comparison, it resembles the heavyweight cytolinker proteins that keep cells in shape by hitching membrane-spanning proteins to the cytoskeleton. Previous studies have shown that it hooks up with transmembrane proteins and two parts of the cytoskeleton (intermediate filaments and actin), but researchers haven't demonstrated a connection to microtubules. Prins et al. showed that dystrophin and microtubules coincide at costameres, portions of the cytoskeleton that reinforce the plasma membrane in muscle cells. The researchers also observed that microtubules and dystrophin sediment together if dystrophin sports a putative microtubule-binding domain, but not if the domain is missing. Dystrophin also stabilizes microtubules forced to depolymerize by cold. And when cells lack the protein, microtubule networks snarl. The findings suggest that dystrophin does function like the cytolinkers, providing structural support for muscle cells in part by stabilizing or organizing microtubules. As a result, muddled microtubules might be responsible for some of the defects of muscular dystrophy. Prins, K.W., et al. 2009. J. Cell Biol. doi:10.1083.

When a sperm wriggles, its flagellum undulates symmetrically like a crawling snake. When Chlamydomonas zips along, by contrast, it appears to do the breast stroke, as its twin flagella reach forward and then pull back.
Chlamydomonas does the lopsided wave

Chlamydomonas' flagella sport the standard "9 + 2" structure of nine microtubule doublets surrounding a central pair. The trick has been to find a structural imbalance in these flagella that could explain why the movements of extension and retraction are asymmetric. Bui et al. used electron cryotomography to take a close look at the flagellum and found that microtubule doublet 1 was the oddball. Dynein arms typically link neighboring microtubule doublets. When these arms pull, adjacent doublets slide past one another, and the flagellum flexes. But doublet 1's inner dynein arm is missing one of the molecules that is usually present in other doublets. Bui et al. hypothesize that this difference slows the sliding between doublet 1 and its neighbors, doublets 2 and 9. The researchers also found more interconnections between microtubule pairs (including a strong bridge between doublets 1 and 2) than previously identified. The final picture, the team concludes, is that doublets 9, 1, and 2 differ from the three doublets that lie on the opposite side of the shaft. That disparity might yield asymmetrical motion when the alga swims.

[Figure: When Bim can't hook to Bax (right), reduced apoptosis leads to a larger spleen than in controls (left).]
[Figure: The arrow indicates where a crossbridge extends from doublet 1 of Chlamydomonas' flagellum.]
2017-05-24T13:28:59.937Z
2009-08-10T00:00:00.000
{ "year": 2009, "sha1": "9b04d09309d899ec0e879f6056995cf665e62e6a", "oa_license": null, "oa_url": "https://rupress.org/jcb/article-pdf/186/3/300/1065239/jcb_1863iti.pdf", "oa_status": "BRONZE", "pdf_src": "PubMedCentral", "pdf_hash": "9b04d09309d899ec0e879f6056995cf665e62e6a", "s2fieldsofstudy": [ "Biology" ], "extfieldsofstudy": [ "Biology" ] }
240424248
pes2o/s2orc
v3-fos-license
Another influenza season in the shadow of the COVID-19 pandemic

Flu season is upon us at the ominous milestone of more than 722,000 US deaths from COVID-19.

The 2020 to 2021 influenza season took a back seat to the COVID-19 pandemic, when the COVID-19 vaccines were in their initial stages of distribution in the northern hemisphere. Although only 50% to 55% of US adults received the 2020 to 2021 influenza vaccination, 1,2 influenza activity was very low compared with prior seasons, 1,2 certainly the result of behavioral measures instituted to mitigate the COVID-19 pandemic. With the current 2021 to 2022 influenza season coinciding with another increase of COVID-19 cases, lower COVID-19 vaccine uptake and relaxed mitigation measures in some areas of the United States have resulted in vaccine breakthroughs, increased hospitalizations, and an ominous milestone of more than 722,000 deaths. 3 Vaccinations, in general, are helping ease the strain of the upcoming influenza season, with an estimated 62% of Americans experiencing immunity against COVID-19 as a result of prior infection or immunization. 4

A new recommendation this year was that vaccine administration to nonpregnant adults should be after August and ideally before the end of October to optimize vaccine protection during the expected seasonal epidemics. 6 This new recommendation is expected to continue into the future.

• A history of severe allergic reaction to IIV4s, RIV4, or LAIV4 other than urticaria (such as angioedema, respiratory distress, lightheadedness, or recurrent emesis) or requiring epinephrine or emergency medical intervention is now considered a precaution, not a contraindication for ccIIV4. Similarly, a history of severe allergic reaction to IIV4s, ccIIV4, or LAIV4 other than the aforementioned reactions is now considered a precaution, not a contraindication for RIV4.
These patients should be vaccinated in an inpatient or outpatient medical setting, supervised by a healthcare provider who is able to recognize and manage such reactions. 6

■ OTHER INFLUENZA VACCINATION RECOMMENDATIONS

Other relevant issues pertaining to influenza vaccination during the ongoing COVID-19 pandemic have been outlined. [6][7][8] Influenza vaccine recipients and those who administer these vaccines should recognize that vaccine side effects can mimic COVID-19. 7 Nevertheless, those who develop fever after vaccination should stay home until they defervesce for 24 hours without the use of antipyretics. 7 Importantly, if fever persists or new respiratory symptoms develop, patients should contact their healthcare provider. 7 In a nonprobability-based, convenience sample of 698 US adults infected with SARS-CoV-2 and 2,437 uninfected adults, 65.9% of those infected experienced long-term symptoms lasting > 4 weeks while 42.9% of those uninfected reported such symptoms, representing an emerging public health concern. 8 This may impact influenza vaccine uptake, as well as recognition of influenza-like illness; deferring influenza vaccination until resolution of another acute viral illness, such as COVID-19, is generally recommended. 9 Safe vaccination practice calls for postponing influenza vaccination for those in quarantine after COVID-19 exposure or in isolation after mild COVID-19 illness for 10 days, and after severe COVID-19 illness for 20 days. 6

■ COVID-19 AND INFLUENZA COINFECTION

With several common clinical features of influenza and COVID-19, the overlap of the two epidemics occurring at the same time can complicate diagnosis, treatment, and prognosis. 10 Although a small proportion of COVID-19 patients are coinfected with influenza, the risk for high-risk individuals is of concern. 10 While both have some distinct features (Table 1), 11,12 they can be hard to distinguish.
■ VACCINE EFFICACY

Safety and efficacy of influenza vaccination for pregnant women have been documented, and a recent study noted 91.5% efficacy of transfer of antibodies in preventing hospitalization of newborns and infants, in whom the vaccine is not approved before 6 months of age. 13 Another recent study has shown safety and humoral immunogenicity of messenger ribonucleic acid COVID-19 vaccines in maternal sera, as well as cord blood and breast milk, indicating transfer of immunity to neonates. 14 A recent study showed that COVID-19 vaccination of healthcare workers reduces the risk of COVID-19 in members of their households. 15 Indirect effects of influenza vaccination have been shown to be greater than direct effects, with 4 to 7 times the influenza cases prevented in non-vaccinated compared with vaccinated individuals, and complications including influenza-associated deaths among the unvaccinated elderly reduced by a factor of 20 to 30. 16 Researchers have been evaluating both influenza and COVID-19 vaccination efficacy in how they decrease risk of infection and reduce disease severity in breakthrough infections. 17 Currently approved or emergently authorized-for-use COVID-19 vaccines trigger innate, durable immunity, although the emergence of protein variants could potentially limit efficacy. 18

■ DISCLOSURES

The author reports no relevant financial relationships which, in the context of his contributions, could be perceived as a potential conflict of interest.
2021-11-03T13:09:47.017Z
2021-11-01T00:00:00.000
{ "year": 2021, "sha1": "564d1b8ccc715d3741d88d4e3f1f07240022c405", "oa_license": null, "oa_url": "https://www.ccjm.org/content/ccjom/88/11/594.full.pdf", "oa_status": "GOLD", "pdf_src": "Highwire", "pdf_hash": "564d1b8ccc715d3741d88d4e3f1f07240022c405", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
237535183
pes2o/s2orc
v3-fos-license
Connectivity alterations in autism reflect functional idiosyncrasy

Autism spectrum disorder (ASD) is commonly understood as an alteration of brain networks, yet case-control analyses against typically-developing controls (TD) have yielded inconsistent results. Here, we devised a novel approach to profile the inter-individual variability in functional network organization and tested whether such idiosyncrasy contributes to connectivity alterations in ASD. Studying a multi-centric dataset with 157 ASD and 172 TD, we obtained robust evidence for increased idiosyncrasy in ASD relative to TD in default mode, somatomotor and attention networks, but also reduced idiosyncrasy in lateral temporal cortices. Idiosyncrasy increased with age and significantly correlated with symptom severity in ASD. Furthermore, while patterns of functional idiosyncrasy were not correlated with ASD-related cortical thickness alterations, they co-localized with the expression patterns of ASD risk genes. Notably, we could demonstrate that patterns of atypical idiosyncrasy in ASD closely overlapped with connectivity alterations that are measurable with conventional case-control designs and may, thus, be a principal driver of inconsistency in the autism connectomics literature. These findings support important interactions between inter-individual heterogeneity in autism and functional signatures. Our findings provide novel biomarkers to study atypical brain development and may consolidate prior research findings on the variable nature of connectome-level anomalies in autism.

Autism spectrum disorder (ASD) is one of the most common and persistent neurodevelopmental conditions. Behaviorally diagnosed on the basis of clinical observations and standardized tools assessing atypical communication, social interaction, and sometimes restricted and repetitive behaviors and interests 1 , the broad umbrella term of ASD has resulted in a steady increase in autism prevalence 2 .
This increase in diagnostic sensitivity has on the other hand led to increasing recognition of the heterogeneity of diagnosed individuals [3][4][5] , and challenges for specificity 6 . This high variability is present at the phenotypic level of behavioral symptoms and at the level of genetic mechanisms previously associated with ASD [7][8][9] , and renders the study of autism particularly challenging. As etiology and pathophysiology remain largely unclear and similarly heterogeneous, efforts have increasingly shifted to neuroimaging techniques to identify intermediary autism phenotypes 10,11 . It is hoped that these can potentially consolidate molecular perturbations and behavioral perspectives on ASD and identify biomarkers of symptom severity. Fueled by the increased availability of data sharing initiatives [12][13][14][15] , numerous neuroimaging studies based on resting-state functional magnetic resonance imaging (rs-fMRI) have indicated that autistic individuals often present with a mosaic pattern of connectivity alterations between distributed cortical regions relative to typically developing (TD) controls 12,[16][17][18][19] . These connectivity alterations often manifest in the form of connectivity reductions in both higher order association cortices as well as sensory and motor regions, and sometimes co-occur with patches of connectivity increases between cortical and subcortical nodes 20 . However, other research has also emphasized (i) little overlap between reported results, (ii) variable patterns of hyper/hypo-connectivity, and (iii) an impact of preprocessing choices as well as subject-specific head motion and other confounds on observed findings [20][21][22][23][24][25] . Inconsistent findings have also been attributed to the use of conventional case-control designs in connectomics research in autism, which assume within-group homogeneity 26,27 .
In addition to efforts that attempt to address this heterogeneity by subtyping ASD individuals into more homogeneous groups 4,5 , nascent literature has emphasized the importance to study interindividual variability of functional connectivity patterns in ASD compared to TD [28][29][30] . Interindividual variability in connectivity may logically follow the interindividual variability of activation previously demonstrated in perceptual and motor domains 31,32 . This body of work suggests that such idiosyncrasy may be an important feature of functional connectome organization in ASD, with greater variability in functional topography among ASD individuals relative to TD 32 . At the group level, this may potentially impact the analysis of connectivity differences between ASD and TD when assuming an identical alignment between the functional and structural domains among individuals. In other words, anatomical alignment does not guarantee correspondence of intrinsic functional profiles. Ignoring this phenomenon may lead to losing subjectspecific features of network organization at the group level 33,34 . In ASD, given the highly idiosyncratic nature of the functional connectome, this is even more pronounced 29 , leading to spurious differences in connectivity that might be better explained when taking into consideration this heterogeneity 35,36 . Although recent work has suggested an idiosyncratic organization of the functional connectome in ASD [28][29][30] , here we expand these approaches in several important ways. First, we developed a novel multi-marker profiling of idiosyncrasy, based on measures of spatial variability, connectome manifold analysis as well as probabilistic approaches to characterize the uncertainty of subject-specific functional topographies. These descriptors comprehensively profiled differences in idiosyncrasy between ASD and TD and provided the basis for an assessment of associations to age and symptom severity. 
Furthermore, to identify structural and potential molecular factors that give rise to the spatial patterns of ASD-related network idiosyncrasy, we correlated idiosyncrasy findings in ASD against MRI-based cortical thickness and curvature findings as well as postmortem gene expression information. Indeed, prior research has demonstrated atypical cortical development in ASD 37,38 , with genetic risk factors likely to play a major role in brain anatomy and connectivity abnormalities 39 . Finally, we tested our main hypothesis and assessed how idiosyncrasy may relate to connectivity alterations in ASD vis-à-vis healthy controls observed at the group level 40,41 . Specifically, we conducted a group-level analysis to study functional connectivity differences between ASD and TD, capitalizing on prior graph theoretical measures 22 , with and without considering idiosyncrasy.

Results

We studied idiosyncrasy based on rs-fMRI data from both waves of the Autism Brain Imaging Data Exchange (ABIDE I and II) 12,13 , a multisite data-sharing initiative. Specific site inclusion criteria and rigorous data quality control as in prior work 11,42,43 resulted in a total of 329 participants (157/172 ASD/TD) from five different sites (see Supplementary Tables 1, 2). Our image processing strategy involved the mapping of functional signals to cortical surfaces as well as surface-based spherical alignment 44 , on which functional connectivity matrices were calculated at a single-subject level. Diffusion map embedding, a nonlinear dimensionality reduction technique that projects regions into a low-dimensional space governed by similarity in connectivity profiles 45,46 , identified a common low-dimensional manifold where individual embeddings were clustered into seven intrinsic connectivity networks (ICNs) using a Gaussian mixture model.
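The diffusion map embedding step can be condensed into a few lines of numpy. This is a generic, simplified version (cosine affinity, an assumed anisotropy parameter alpha = 0.5, and a toy random connectivity matrix), not the authors' exact pipeline:

```python
import numpy as np

def diffusion_map(conn, n_components=2, alpha=0.5):
    # Affinity between regions: cosine similarity of their connectivity profiles.
    normed = conn / np.linalg.norm(conn, axis=1, keepdims=True)
    aff = np.clip(normed @ normed.T, 0, None)
    # Anisotropic (alpha) normalization, then row-normalize to a Markov matrix.
    d = aff.sum(1)
    w = aff / np.outer(d ** alpha, d ** alpha)
    p = w / w.sum(1, keepdims=True)
    # Embedding = leading non-trivial eigenvectors of the transition matrix,
    # each scaled by its eigenvalue (the trivial constant eigenvector is dropped).
    evals, evecs = np.linalg.eig(p)
    order = np.argsort(-evals.real)
    return (evecs.real * evals.real)[:, order[1:n_components + 1]]

rng = np.random.default_rng(3)
conn = np.abs(rng.standard_normal((20, 20)))
conn = (conn + conn.T) / 2            # toy symmetric connectivity matrix
emb = diffusion_map(conn)
assert emb.shape == (20, 2)
```

Regions with similar connectivity profiles land near each other in this low-dimensional space, which is what makes the later dispersion ("diffusion distance") measures meaningful.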
Connectivity idiosyncrasy was characterized with two complementary features, namely the analysis of spatial shifting on the cortical surface meshes and the analysis of dispersion in connectome-based manifolds. Descriptors were computed relative to a reference embedding (and its corresponding clustering) built by averaging all individual connectivity matrices (see Fig. 1a). Findings were corrected for site, age, and sex unless otherwise specified. Further information about the dataset, image processing, and idiosyncrasy descriptors is provided in the Methods section.

Idiosyncrasy is characterized by the shifting of functional networks in physical and embedding spaces

Idiosyncrasy was assessed through the quantification of surface distance (SD) and diffusion distance (DD). In brief, SD is the geodesic distance from a given point to the closest point in the corresponding reference network (see Fig. 1a). These geodesic distances were calculated along the cortical surface using Dijkstra's algorithm 3,46 . DD, on the other hand, profiles idiosyncrasy in terms of the similarity in the connectivity patterns across individuals and with the canonical reference in the embedding space (see Idiosyncrasy descriptors section). Both SD and DD are widely used measures that prior studies have shown to accurately capture differences in the spatial and embedding domains [47][48][49][50] , respectively. Here, we leveraged these measures to provide a careful and complementary depiction of idiosyncrasy in terms of spatial shifting and connectivity similarity. Both approaches showed increased idiosyncrasy in ASD relative to TD in medial and lateral prefrontal regions in both hemispheres, with DD showing more marked effects bilaterally in the precuneus and angular gyrus (see Fig. 1b). Interestingly, ASD also showed bilateral reductions in idiosyncrasy compared to TD in lateral temporal cortices.
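The SD computation, the geodesic distance from a vertex to the closest vertex of the reference network along the mesh, reduces to a multi-source Dijkstra search. A sketch on a toy edge-weighted graph (the graph, weights, and vertex labels are illustrative, not an actual cortical mesh):

```python
import heapq

def surface_distance(edges, sources, target):
    # Multi-source Dijkstra: shortest geodesic distance from `target` to the
    # nearest vertex in `sources` (the reference network) along mesh edges.
    adj = {}
    for u, v, w in edges:
        adj.setdefault(u, []).append((v, w))
        adj.setdefault(v, []).append((u, w))
    dist = {s: 0.0 for s in sources}
    heap = [(0.0, s) for s in sources]
    heapq.heapify(heap)
    while heap:
        d, u = heapq.heappop(heap)
        if u == target:
            return d
        if d > dist.get(u, float("inf")):
            continue  # stale heap entry
        for v, w in adj[u]:
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                heapq.heappush(heap, (nd, v))
    return float("inf")

# Toy "mesh": a 5-vertex path with unit edge lengths.
edges = [(0, 1, 1.0), (1, 2, 1.0), (2, 3, 1.0), (3, 4, 1.0)]
# Reference network occupies vertices {0, 1}; the individual's vertex sits at 3.
assert surface_distance(edges, {0, 1}, 3) == 2.0
```

Seeding the heap with every reference-network vertex at distance zero gives the distance to the *closest* such vertex in a single search, rather than one Dijkstra run per source.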
We complemented the surface-based analysis with an assessment of idiosyncrasy at the network level to determine if observed differences are related to spatial variability or to differences in the size of the ICNs. Here, we computed mean surface distance (MSD) to compare the locations of each ICN in each individual to its corresponding reference network (see Supplementary Fig. 1). Higher MSD values indicate that a given individual network deviates from the corresponding reference network. In both ASD and TD, the visual network (VN) showed the least idiosyncratic organization (i.e., lowest MSD), whereas the ventral attention network (VAN) had the most idiosyncratic organization (i.e., highest MSD). These results indicate that idiosyncrasy is network-specific. Comparing groups, we found significant differences after FDR correction in the dorsal attention (DAN, p = 0.005), default mode (DMN, p = 0.003), somatomotor (SMN, p = 0.009), and VAN (p = 0.002) networks, with ASD showing increased MSD relative to TD. Across the whole cortex, individuals with ASD also showed higher spatial shifting than TD (p = 0.002). Similar findings were also obtained when quantifying spatial shifting using Dice and Jaccard overlap measures. As higher MSD (and lower Dice/Jaccard) may also indicate that individual networks span larger/smaller portions of the cortex relative to the reference network because of hyper- or hypo-connectivity, we also analyzed between-group differences in network size. Notably, however, we did not find significant differences, suggesting that findings were due to idiosyncrasy rather than connectivity differences per se (see Supplementary Table 3). Finally, all these measures are computed relative to a reference embedding that may have an impact on our results.

COMMUNICATIONS BIOLOGY (2021) 4:1078 | https://doi.org/10.1038/s42003-021-02572-6 | www.nature.com/commsbio
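The Dice and Jaccard overlap scores used here as complements to MSD are simple set-overlap ratios between an individual's network and the reference. A sketch with hypothetical vertex sets, also illustrating the control analysis above (same network size, reduced overlap):

```python
def dice(a, b):
    # Dice overlap between two vertex sets (individual vs. reference network):
    # 2|A ∩ B| / (|A| + |B|), 1 = identical, 0 = disjoint.
    return 2 * len(a & b) / (len(a) + len(b))

def jaccard(a, b):
    # Jaccard overlap: |A ∩ B| / |A ∪ B|.
    return len(a & b) / len(a | b)

reference = {0, 1, 2, 3, 4, 5}
individual = {2, 3, 4, 5, 6, 7}      # spatially shifted version of the network
assert abs(dice(reference, individual) - 8 / 12) < 1e-12
assert abs(jaccard(reference, individual) - 4 / 8) < 1e-12
# Same size, reduced overlap: shifting rather than hyper-/hypo-connectivity.
assert len(reference) == len(individual)
```

Because both sets have six vertices, the reduced overlap here can only reflect spatial displacement, which is exactly the confound check the network-size comparison performs.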
In order to assess the robustness of our results to the reference embedding, we used bootstrapping to build different reference embeddings based on 50% of the samples in our dataset. Group-wise distributions of average Dice, Jaccard scores, and MSD across the whole cortex did not show any overlap (see Supplementary Fig. 2), indicating that our results are robust to the reference embedding. We further contextualized idiosyncrasy using a probabilistic framework for each ICN as shown in Fig. 2. Qualitatively, we observed more spreading of the spatial probability maps in ASD at the group level, particularly in DMN and SMN (see Fig. 2a). As shown in Fig. 2b, this spreading is manifested as higher entropy in ASD (e.g., the same cortical location is assigned to different ICNs across individuals). Increases in ASD were highest in the SMN (Cohen's d = 0.290), followed by VAN (d = 0.209), DAN (d = 0.203), and DMN (d = 0.188). On the other hand, the limbic system (LSN, d = −0.282) showed lower entropy in ASD (see Fig. 2c). We could observe high correlations of group-wise entropy differences with the corresponding differences in SD (r = 0.512, p spin < 0.001) and DD (r = 0.459, p spin < 0.001) (see Fig. 2d), even after accounting for spatial autocorrelation using nonparametric spin tests 51 . In accordance with the SD and DD findings, we observed lower entropy in ASD relative to TD in the lateral temporal lobe. Moreover, we also analyzed the potential link between idiosyncrasy and the hierarchical organization of the cortex. Specifically, we assessed the emergence of idiosyncrasy along the principal connectivity gradient 46 . As shown in Supplementary Fig. 3, idiosyncrasy increased following the principal gradient, from lowest in sensory/motor regions to highest in transmodal cortices. Although the study site was included as a covariate in our main analyses, we repeated the SD and DD analyses for each site separately. Supplementary Figs.
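The entropy measure behind Fig. 2b quantifies how consistently a cortical location is assigned to the same ICN across individuals: unanimous assignment gives zero entropy, scattered assignment raises it. A numpy sketch with hypothetical label vectors:

```python
import numpy as np

def assignment_entropy(labels):
    # labels: ICN label of one cortical location, one entry per subject.
    # Shannon entropy (bits) of the empirical assignment distribution.
    _, counts = np.unique(labels, return_counts=True)
    p = counts / counts.sum()
    return -np.sum(p * np.log2(p))

consistent = np.array([1, 1, 1, 1, 1, 1, 1, 1])        # everyone agrees
idiosyncratic = np.array([1, 2, 3, 4, 1, 2, 3, 4])     # assignments scatter
assert assignment_entropy(consistent) == 0.0
assert abs(assignment_entropy(idiosyncratic) - 2.0) < 1e-12  # 4 equiprobable ICNs = 2 bits
```

Evaluated at every vertex, this yields the entropy maps whose group differences (higher in SMN/VAN/DAN/DMN for ASD, lower in the limbic network) are reported above.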
4 and 5, respectively, display global and site-specific SD and DD differences between TD and ASD. Despite variability in findings across sites, the overall direction of findings was relatively consistent across most of the included sites, particularly those with the highest numbers of subjects, i.e., NYU, USM, and PITT (see Supplementary Table 2).

Idiosyncrasy association to age and symptom severity. When analyzing age effects (see Fig. 3 and Supplementary Fig. 6), we found significant age associations with idiosyncrasy in the DAN (p < 0.001/0.001 for SD/DD), LSN (p = 0.089/0.001), SMN (p < 0.001 for both SD and DD), VAN (p = 0.016/0.001), and VN (p < 0.001). On the other hand, we found no significant relationship for the DMN (p = 0.551/0.423) or FPN (p = 0.143/0.087). Overall, there was a significant effect of age on shifting in cortical and embedding spaces (p < 0.001), manifested as increasing SD and DD. These results indicate that idiosyncrasy increases with age. Nevertheless, ASD and TD showed similar slopes and there were no significant group-level interactions. We further assessed these results by repeating our surface-based analysis using only the children (i.e., age <18 years) in our dataset and only the adults (i.e., age ≥18). Overall, results from these analyses, reported in Supplementary Fig. 7, were consistent with the findings obtained when using all individuals in our dataset. Nonetheless, when only using adults, the cluster in the temporal lobe showing higher idiosyncrasy in TD was relatively larger with respect to the cluster found in children. We also investigated the association of our idiosyncrasy descriptors with ASD symptom severity based on the Autism Diagnostic Observation Schedule (ADOS). Specifically, we tested whether ADOS calibrated severity scores (CSS) were associated with SD and DD (see Fig. 3b). The descriptors were computed for the networks that showed the highest idiosyncrasy and for the entire cortex. After correcting for multiple comparisons, significant associations were found in the default mode and attention networks, whereas the SMN showed no significant association with CSS. Across the whole cortex, we found a significant association of CSS with DD (r = 0.208, p = 0.016) but not with SD (r = 0.113, p = 0.198). From these results, we can see that increasing idiosyncrasy is related to symptom severity.

Fig. 3 [caption fragment]: Statistical significance is indicated with *, **, and ***, respectively denoting p < 0.05, p < 0.01, and p < 0.001 after FDR correction (across seven networks for age and, for CSS, across the four different networks). Shaded areas around the regression lines denote a 95% confidence interval. DAN dorsal attention network, DMN default mode network, SMN somatomotor network, VAN ventral attention network.

Associations to cortical morphology and gene expression patterns. Several studies 3,37,43,52-55 have reported morphological alterations in ASD relative to TD. Cortical thickness changes were overall consistent with morphological anomalies reported in the literature, showing a mix of frontal and midline parietal cortical thickening, sometimes together with patches of cortical thinning in temporal regions 37,43,52-54,56. Importantly, we also inspected the relationship of the functional idiosyncrasy descriptors with changes in cortical morphology, by running spatial correlation analyses between group-wise differences in cortical thickness and mean curvature index and those in SD and DD. To account for spatial autocorrelations, we used spin tests with 1000 permutations 51. As shown in Fig. 4a, we found no significant associations between cortical thickness and either measure of idiosyncrasy. Similar results were found when using the mean curvature index as a descriptor of cortical morphology, as shown in Supplementary Fig. 8.
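A simplified version of the spin permutation test used in these correlation analyses can be sketched as follows. This is a schematic of the general idea (randomly rotating one cortical map on the sphere and re-correlating), not the exact implementation used in the paper; vertex counts, coordinates, and the permutation number are illustrative:

```python
import numpy as np

def random_rotation(rng):
    """Sample a random 3x3 rotation matrix via QR of a Gaussian matrix."""
    q, r = np.linalg.qr(rng.standard_normal((3, 3)))
    q = q * np.sign(np.diag(r))   # make the decomposition unique
    if np.linalg.det(q) < 0:      # force a proper rotation (det = +1)
        q[:, 0] = -q[:, 0]
    return q

def spin_test(coords, x, y, n_perm=1000, seed=0):
    """Correlate two cortical maps; build a null by spinning vertex coordinates.

    coords : (n, 3) unit vectors on the sphere, one per vertex
    x, y   : (n,) cortical maps
    Returns (observed r, p_spin)."""
    rng = np.random.default_rng(seed)
    obs = np.corrcoef(x, y)[0, 1]
    null = np.empty(n_perm)
    for i in range(n_perm):
        rotated = coords @ random_rotation(rng).T
        # nearest neighbour on the unit sphere = highest cosine similarity
        nn = np.argmax(rotated @ coords.T, axis=1)
        null[i] = np.corrcoef(x[nn], y)[0, 1]
    p = (1 + np.sum(np.abs(null) >= np.abs(obs))) / (1 + n_perm)
    return obs, p
```

Because rotations preserve the spatial autocorrelation of the spun map, this null is more conservative than a naive vertex shuffle, which is the motivation for using spin tests throughout these analyses.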
These results suggest that differences in functional idiosyncrasy do not spatially overlap with potential alterations in cortical morphology in the ASD sample studied here. Furthermore, we explored potential neurobiological correlates of idiosyncrasy in ASD. Idiosyncrasy maps obtained using SD and DD were correlated with postmortem gene expression maps from six donors provided by the Allen Institute for Brain Sciences (AIBS) 57. Significant genes were identified by spatially correlating their expression patterns with our maps of idiosyncrasy, based on spin tests with 1000 permutations, across the six postmortem cortical brain samples. Only genes that were significantly associated with idiosyncrasy and consistently expressed across all six donors (average inter-donor correlation ≥0.5) were considered for further analysis 58. Selected genes (see Supplementary Table 4) were tested in a developmental expression analysis across different developmental time windows 59, from early fetal stages to young adulthood, and in a disease enrichment analysis. The developmental gene expression analysis highlighted associations of our idiosyncrasy descriptors with genes expressed in the brain from early infancy onwards (see Fig. 4b), across several brain regions comprising the cerebellum, cortex, and striatum. Significant gene expression was predominantly found in adolescence and young adulthood. Furthermore, we performed a disease enrichment analysis to investigate the relationship between the strength of the association derived for each gene expression (with respect to our idiosyncrasy map) and a set of differential gene expression signatures in ASD, schizophrenia, and bipolar disorder. As shown in Fig. 4c and Supplementary Fig.
9, this analysis revealed that cortical patterns of idiosyncrasy were more strongly associated with differential gene expression 60 in ASD (t = 42.270/28.099, p < 0.001 for SD/DD) than in schizophrenia (t = 18.548/14.192, p < 0.001) or bipolar disorder (t = 0.577/−2.014, p = 0.564/0.044).

Connectivity alterations reflect idiosyncrasy. Prior research has suggested connectivity alterations in ASD relative to controls, but patterns of findings have overall not been consistent 28,29. Here, we examined the spatial relationship between idiosyncrasy (in terms of SD and DD) and overall connectivity alterations, quantified using degree centrality (DC; see below for findings using eigenvector centrality). DC provides an unbiased depiction of the functional connectome that assigns each cortical location the number of connections exceeding a predefined threshold, set here to 0.2 22. The relationships of degree centrality with surface and diffusion distances are shown in Fig. 5a and reported in Table 1 for each ICN. DC showed strong correlations with both SD (r = 0.468, p < 0.001) and DD (r = 0.413, p < 0.001). Positive/negative values indicate hyper-/hypo-connectivity (for DC) and higher/lower spatial deviations in surface and diffusion distances in ASD relative to TD. These results show that connectivity alterations in ASD are significantly associated with idiosyncrasy. Regions that exhibit hyper-connectivity (i.e., higher DC) in ASD show increased spatial deviation from the locations of the canonical networks. At the network level, SD was associated with DC in the FPN, DMN, and VAN, whereas DD was associated with DC in all networks except the limbic system. Given the relationship of idiosyncrasy with DC, we set out to investigate the role of idiosyncrasy in the connectivity alterations observed in previous work. We first analyzed the differences in DC between ASD and TD and then repeated the same analysis controlling for idiosyncrasy.
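The two centrality measures used in these analyses can be sketched in a few lines of NumPy; matrix sizes and values are illustrative:

```python
import numpy as np

def degree_centrality(fc, threshold=0.2):
    """Number of connections per region whose correlation exceeds `threshold`.

    fc : (n, n) functional connectivity (correlation) matrix."""
    fc = fc.copy()
    np.fill_diagonal(fc, 0)              # ignore self-connections
    return np.sum(fc > threshold, axis=1)

def eigenvector_centrality(fc):
    """Each node's entry in the eigenvector with the largest eigenvalue of `fc`.

    fc : (n, n) symmetric, non-negative connectivity matrix."""
    vals, vecs = np.linalg.eigh(fc)          # symmetric -> real eigendecomposition
    v = np.abs(vecs[:, np.argmax(vals)])     # leading eigenvector; its sign is arbitrary
    return v / v.sum()                       # normalise to sum to 1
```

For example, `degree_centrality(np.array([[1, .5, .1], [.5, 1, .3], [.1, .3, 1]]))` yields `[1, 2, 1]` with the 0.2 threshold used in the paper.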
That is, we used both SD and DD as additional covariates in our analysis. As shown in Fig. 5b, the number of clusters showing significant differences is considerably reduced, with only one small region in the left frontal lobe remaining. Findings were replicated using a different centrality measure (i.e., eigenvector centrality 61), as shown in Supplementary Fig. 10 and Supplementary Table 5. Eigenvector centrality assigns each node its corresponding entry in the eigenvector with the largest eigenvalue of the connectivity matrix. With eigenvector centrality, none of the regions showing significant differences in connectivity survived after controlling for idiosyncrasy. Altogether, these findings suggest that identifiable connectivity alterations in conventional ASD-to-control comparisons emanate, at least in part, from the high variability in the spatial locations of the ICNs.

Discussion

Neurodevelopment is a complex yet coordinated process shaping the anatomy and function of multiple brain networks, with important variability across individuals. Characterizing this variability may add precision to the study of typical development and may advance our understanding of atypical neurodevelopment in diverse indications such as ASD 62,63. Multiple studies have previously reported atypical functional connectivity in ASD, contributing to the overall notion of ASD as a disorder of brain networks 11,12,22. However, there have also been reports questioning the consistency of findings, both in terms of which networks are involved and in terms of the directionality of findings 23,25,64. Beyond an increasing recognition of the impact of preprocessing choices and sample inclusion criteria 4,6,25, a growing line of research is hinting at a more variable and idiosyncratic organization of the functional connectome in ASD as a potential contributor to these inconsistent findings 28-30.
In essence, idiosyncrasy describes an increased spatial variability in the mapping between functional network organization and brain anatomy. Here, we set out to (i) characterize such idiosyncratic network organization in ASD, using novel metrics that capture network variation in both physical and topological spaces, (ii) examine associations to age and symptom severity, (iii) explore morphological and genetic associations, and (iv) investigate how idiosyncrasy may contribute to the functional connectivity alterations commonly seen in ASD-to-control case-comparison studies. In short, our findings suggest that ASD presents with a mosaic of idiosyncrasy alterations relative to TD, with mainly increases in ASD, together with focal decreases in network idiosyncrasy in lateral temporal regions. Idiosyncrasy was found to relate to both age and symptom load as measured with the ADOS CSS 65-67, and the spatial topography of ASD-related network idiosyncrasy strongly correlated with the expression of autism risk genes.

Fig. 4 Associations to cortical morphology and gene expression patterns. a Correlation of group-wise differences in surface (top) and diffusion (bottom) distances with cortical thickness, and comparison of the empirical correlation with the null distribution obtained using 1000 spin permutation tests to account for spatial autocorrelation. b Developmental cortical enrichment, showing enrichment mainly in the cerebellum, cortex, and striatum (left), specifically in adolescence and young adulthood (right). In the left panel, the size of hexagon rings represents the proportion of genes specifically expressed in a particular tissue at a particular developmental stage. Varying stringencies for enrichment with respect to the specificity index threshold (pSI) are represented by the size of hexagons, going from least (outer hexagon) to most specific (center hexagon) (pSI = 0.05, 0.01, 0.001, and 0.0001, respectively) 59.
Notably, we could also show that idiosyncrasy contributes to ASD versus TD connectivity differences that are detectable with typical case-control analysis, motivating future research strategies that consider patterns of idiosyncrasy in their analyses. Core to our work were two complementary approaches to quantify functional idiosyncrasy, with one approach operating in the spatial domain and another one in connectivity-determined manifold spaces. Both approaches converged in showing that, while functional network organization is idiosyncratic in both TD and ASD, the latter showed a mosaic of mainly increases in network idiosyncrasy across multiple functional systems, together with patches of idiosyncrasy reductions. In the spatial domain, we compared individual network locations to a canonical reference connectome, built by averaging all individual connectivity matrices in our dataset. This highlighted that several networks (i.e., DAN, DMN, SMN, and VAN) were shifted in ASD from the typical locations of their corresponding reference networks. Complementing idiosyncrasy profiling in the spatial domain, we characterized idiosyncrasy in a connectivity-informed manifold space. Such manifolds provide coordinate systems based on intrinsic network organization and are, thus, decoupled from the underlying anatomy 46,68,69. In several recent studies, our group and others capitalized on manifold spaces to represent structural and functional connectome information 70-72, to assess structure-function coupling 72, and to study typical and atypical connectome organization 11,42,71,73,74. Unlike their spatial counterparts, manifold-based idiosyncrasy measures tap into intersubject correlations 75 and, in turn, provide a metric sensitive to regional connectivity as well as to similarity in connectivity to other regions 76.
This descriptor converged with our SD analysis in that it pointed to a spatially varying pattern of idiosyncrasy, with ASD showing mainly increased idiosyncrasy across multiple networks, encompassing sensory as well as higher-order networks. In addition to the convergence in findings across these two descriptors, we could cross-validate our findings using an entropy-based descriptor at the network level. This approach provided an independent probabilistic context to understand our findings in terms of interindividual spatial network uncertainty (i.e., networks with high interindividual variability show high entropy), supporting increased idiosyncrasy in ASD in multiple networks relative to TD. Note that these descriptors were used to study idiosyncrasy at the cortical level. Besides cortical regions, however, subcortical areas and subcortico-cortical interactions play an important role in ASD 20,42. The incorporation of subcortical regions may have important implications for our idiosyncrasy descriptors, and it may further enrich the description of network hypo-/hyper-connectivity observed in group-level contrast analyses. With the exception of the distance-based measures (i.e., SD and MSD), our descriptors could easily be extended to incorporate and account for differences in subcortical connectivity patterns. The functional networks found to be idiosyncratic in our analyses have consistently been shown to diverge in analyses comparing functional connectivity in ASD relative to TD at the group level. Indeed, several studies have reported connectivity alterations in ASD individuals relative to TD in the DAN 77-79, VAN 78,80, 12,20,22,82. Our findings show that the degree of spatial shifting, irrespective of the cohort (i.e., in both ASD and TD), is distributed across the putative functional hierarchy, affecting primary sensory, unimodal association, and attentional, as well as higher-order transmodal systems such as the DMN.
Interestingly, and previously unreported in ASD, our two idiosyncrasy measures also pointed to reduced idiosyncrasy in a region encompassing the lateral temporal lobe in ASD. A prior rs-fMRI study in neurotypical individuals 83 found the lateral temporal cortex to be among the areas with the highest intersubject variability in intrinsic functional connectivity. Moreover, lateral temporal cortical areas have previously been suggested to show abnormal structural connectivity in very young children at risk for ASD 84. In that study, lower structural network efficiency of primary and secondary auditory cortices was related to autism risk in children as young as 6 months old, and network inefficiencies were related to symptom load at a later follow-up 84. The authors suggested that atypical organization in sensory systems in autism may manifest early and potentially cascade into the organization of higher-order networks, a finding in line with the sensory-first hypothesis of autism and other neurodevelopmental disorders 85-87. Although our findings are overall indicative of a relatively broad functional perturbation affecting many networks, using the principal connectivity gradient as a model of human cortical hierarchical organization, we were able to demonstrate an overall higher increase in idiosyncrasy in transmodal networks compared to sensory/motor networks. This shows that increased idiosyncrasy in ASD is preferentially located in the cortical regions with the most variability among neurotypical individuals, with the exception of the lateral temporal areas 83. An increasing body of neuroimaging work has shown that primate and human cortical microstructure and function generally follow sensory-transmodal hierarchies 46,71,88,89, recapitulating earlier models of primate cortical organization 90,91.
Further evidence for a hierarchical organization is supplied by electrographic and neuroimaging studies, showing similar gradients of temporal hierarchies in the primate cortex that follow sensory-transmodal hierarchies 88,92. Local alterations at specific nodes along these hierarchies could ultimately affect integrative and heteromodal networks, such as the DMN, disproportionately, and manifest as increased idiosyncrasy in these networks. Findings of increased idiosyncratic organization in ASD are consistent with prior work reporting higher inconsistency in the incorporation of individual anatomical locations into the DMN and SMN 30, and increased spatial shifting in the DAN and VAN 28. In our study, these four ICNs had a more idiosyncratic functional organization in ASD. Moreover, and similar to prior work, we found that idiosyncrasy increased with symptomatology, more specifically with social and communication difficulties, indicating that the functional network reorganizations which diverge most from the normative group are reflected in more pronounced patterns of behavioral divergence on standardized testing. Nonetheless, these prior studies have largely overlooked the relationship of the underlying spatial topography to connectivity differences in ASD versus control populations. In ref. 29, it was shown that the existence of topographical distortions among individuals leads to a regression-to-the-mean effect at the group level. In other words, the study of functional connectivity at the group level may be affected by latent misalignments between the functional organization and the underlying anatomy, potentially giving rise to spurious differences. Indeed, our analysis of functional connectivity alterations showed that idiosyncrasy is a potential confounder. Hyper- and hypo-connected regions found in ASD using degree and eigenvector centrality measures show great overlap with previous findings in multiple large-scale datasets using degree centrality 22.
However, after controlling for idiosyncrasy (using both surface and diffusion distances as covariates), differences were considerably reduced. A small patch with increased connectivity survived when using degree centrality, whereas with eigenvector centrality no connectivity differences were found. It is plausible that idiosyncratic reorganization in ASD breaks down the functional correspondence between homologous anatomical regions across individuals that is assumed in case-control studies, and thus challenges inference as well as the interpretation of previously reported connectivity differences. Intersubject variability in functional connectivity has been shown to be related to the variability in the position of functional regions even in normative populations 93. This is closely related to an emerging literature on precision neuroimaging in healthy populations, where several studies have also shown specific within-subject features of network organization that do not manifest at the group level due to this effect 33,34,94,95. In addition to potential spatial uncertainty, other findings have also shown that some of the connectivity alterations found in ASD are partially driven by short-term temporal variability 96. Taken together, our findings suggest a marked influence of network idiosyncrasy on what is detectable with traditional case-control connectivity analyses. As such, they support the development of novel approaches to analyze connectivity differences at the group level while also considering subject-specific variability, especially in atypical populations such as ASD. Cortex-wide correlation analyses revealed no significant associations between differences in cortical morphology (quantified via cortical thickness and mean curvature index) and our maps of idiosyncrasy, ruling out a systematic relationship between alterations in brain structure and function.
We note, however, that in our sample the lateral temporal lobe showed subtle degrees of cortical thinning in ASD compared to TD, which is in line with prior studies 55 and spatially coincides with our findings of reduced idiosyncrasy in the lateral temporal areas in ASD. Albeit speculative, it is possible that cortical thinning in ASD in these regions may ultimately have downstream effects on functional connectivity (e.g., if the thinning relates to synaptic alterations and/or subtle disconnection in the temporal lobe), and may thus relate to region-specific alterations in functional idiosyncrasy. The relationship of idiosyncrasy with morphology may be further investigated in future work using other MRI-derived measures, notably those sensitive to myeloarchitecture and tissue microstructure, which can be used to study structure-function associations based on depth-specific variations in cortical microstructure and to track developmental change 71,97. On the other hand, correlating the idiosyncrasy measures with age indicated an age-related increase in both ASD and TD, with no significant differences in trajectories between groups. As such, our results point to an increased functional network idiosyncrasy in ASD relative to controls that is already present at an early age, with neither a considerable aggravation nor normalization throughout childhood development, adolescence, and early adulthood. Notably, while our inclusion criteria allowed the study of both children and adults with ASD and TD, our youngest participants were 5 years old. In light of emerging studies suggesting connectivity anomalies in very young children with autism 84, it will therefore be of relevance to assess network idiosyncrasy in young children and infants and to also model intraindividual trajectories longitudinally.
This will offer a more precise understanding of the early mechanisms contributing to idiosyncrasy, alongside a more direct mapping of intraindividual trajectories in idiosyncratic networks. Although our findings showing a mosaic pattern of increased and decreased idiosyncrasy warrant further investigation, a plausible explanation for this large-scale functional reorganization in ASD, together with increased variability in both spatial and connectome-based network embeddings, may relate to compensatory plasticity mechanisms and their imbalance in autism. By integrating genetic, cognitive, and neuroimaging findings, the so-called trigger-threshold-target model of autism 98 has postulated that ASD may relate to neurodevelopmental disturbances that trigger compensatory reallocations of neural resources in autism. As a result, intact regions assume functions from nearby impaired areas. To accommodate this shifting of competences, spatially adjacent networks might be required to adjust their locations and/or typical functional crosstalk, a phenomenon that may contribute to the observed increases in SD and DD in the cohort with autism. Since reallocations are likely to occur within the same hemisphere (e.g., involving spatially adjacent networks), this shifting may give rise to increased distortions in homotopic interhemispheric connectivity, because it breaks the functional interhemispheric correspondence, which is in line with previous findings 29. Supporting this shifting of competences, prior work has suggested abnormal cortical plasticity in ASD 99, with multiple genetic factors involved in this process. In fact, most genetic risk factors associated with ASD appear to be implicated in synaptic plasticity and connectivity more generally 100,101. Genetic influences on functional connectivity are well established in both adults and TD children and adolescents 102-104.
Prior imaging-genetics studies have consistently demonstrated considerable heritability of resting-state functional networks 103,105-107. Moreover, these risk factors may be shared across a range of neuropsychiatric disorders 108. The vast genetic diversity associated with ASD, in conjunction with its high heritability 100,109, may therefore account for the heterogeneity in connectivity alterations observed in ASD 110. In this work, we investigated the relationship of idiosyncrasy with gene expression, showing that genes associated with idiosyncrasy differences were more strongly correlated with differential gene expression in ASD than in schizophrenia and bipolar disorder, which further highlights idiosyncrasy as an important feature of autism. Note, however, that the gene expression used in this analysis is derived from adult postmortem data from a different dataset (i.e., the Allen Human Brain Atlas [AHBA]), and our findings may thus only represent indirect associations that need to be confirmed as additional resources and datasets become available that offer both neuroimaging and gene expression data in the same ASD and control populations. Besides genetic factors, the environment also plays an important role in shaping functional network organization throughout development 103,111. Equally, environmental factors have been suggested to be a risk factor for neurodevelopmental disorders such as ASD 112-114, and they may also contribute to the observed functional network idiosyncrasy in ASD. Of note, idiosyncratic network organization could be identified in the absence of any goal-oriented task in the current study, purely based on a task-free functional imaging acquisition.
Although changes in these patterns might occur under different task conditions or mental states, this idiosyncrasy can be seen as an inherent characteristic of ASD brain organization that may contribute to unconstrained cognitive processes, during routine behavior as well as specific tasks 29. Moreover, the emergence of idiosyncrasy may be related to the way ASD individuals interact with external environments. Altered interactions with the environment may account for individual differences. For example, given the cognitive inflexibility that has been reported in ASD 115, idiosyncratic functional reorganizations may stem from compensatory mechanisms developed to overcome cognitive and behavioral rigidity 116. To conclude, our work characterized functional idiosyncrasy in spatial and connectivity-informed manifold dimensions of the functional connectome. Studying a large dataset of TD and ASD, our novel descriptors reliably captured differences between both groups, suggesting a mosaic pattern of idiosyncrasy increases and decreases in several functional networks in ASD. In addition to showing associations to age, symptom severity, and gene expression patterns, our findings notably indicated a marked relationship between idiosyncrasy and the connectivity differences that can be identified using case-control analysis. This may account for some of the heterogeneity observed in previous ASD studies and calls for the consideration of idiosyncrasy when studying the functional connectome in autism, since connectivity alterations may, at least partly, reflect an underlying idiosyncratic organization.

Methods

Participants and data acquisition. We studied rs-fMRI data from both waves of the openly shared Autism Brain Imaging Data Exchange initiative (ABIDE I and II; http://fcon_1000.projects.nitrc.org/indi/abide) 12,13. For our study, we selected those sites with ≥10 individuals per group and with both children and adults.
After detailed quality control, only cases with acceptable T1-weighted (T1w) MRI, surface extraction, and head motion in rs-fMRI were included in our analyses, resulting in a total of 329 subjects (157/172 ASD/TD, with mean ± SD age in years = 18.4 ± 8.; see Table 1). Individuals with ASD were diagnosed by an in-person interview with clinical experts and gold-standard diagnostics of the ADOS 117 and/or the Autism Diagnostic Interview-Revised (ADI-R) 118. TD individuals did not have any history of mental disorders. For all groups, participants who had genetic disorders associated with autism (i.e., Fragile X), contraindications to MRI scanning, or pregnancy were excluded. The ABIDE data collections were performed in accordance with local Institutional Review Board guidelines, and data were fully anonymized. Written informed consent was obtained from all participants. Detailed demographic information on the participants included in our study is reported in Supplementary Table 2.

Data preprocessing. T1w MRI data were preprocessed with FreeSurfer v5.1 44,119,120. The pipeline performed automated bias field correction, registration to stereotaxic space, intensity normalization, skull-stripping, and tissue segmentation. White and pial surfaces were reconstructed using triangular surface tessellation and topology-corrected. Surfaces were inflated and spherically registered to fsaverage. For the rs-fMRI, we used preprocessed data previously made available by the Preprocessed Connectomes initiative (http://preprocessed-connectomes-project.org/abide). The preprocessing was performed with C-PAC (https://fcp-indi.github.io) and included slice-time correction, head motion correction, skull stripping, and intensity normalization. The rs-fMRI data were de-trended, and nuisance effects related to head motion, white matter, and cerebrospinal fluid signals were removed using CompCor 121, followed by band-pass filtering (0.01-0.1 Hz).
Finally, rs-fMRI and T1w data were coregistered in MNI152 space using linear and nonlinear transformations. Individual rs-fMRI data were mapped to the corresponding mid-thickness surfaces, resampled to the Conte69 template (https://github.com/Washington-University/Pipelines), and smoothed using a 5 mm full width at half maximum (FWHM) kernel. All segmentations and surfaces were visually inspected. Subjects with erroneous segmentations or framewise displacements greater than 0.3 mm were excluded from our analyses.

Identification of intrinsic connectivity networks. To identify and quantify idiosyncrasy in functional network organization, we mapped the rs-fMRI data to a low-dimensional space using the following steps (see Fig. 1a). First, we built the connectivity matrices from the rs-fMRI time-series of each individual in our dataset using linear correlation coefficients. The connectivity matrices were based on a functional parcellation with 1000 labels 50, Fisher's z-transformed, and thresholded to keep only the top 10% most similar entries per row 122. We then used diffusion mapping, introduced in ref. 45 and as implemented in BrainSpace 122, to embed the rs-fMRI data into a low-dimensional manifold. This approach is robust to noise and computationally efficient compared to other nonlinear manifold learning techniques 123,124. Briefly, diffusion mapping embeds the data into a Euclidean space in which the usual Euclidean distance corresponds to the diffusion distance on the data at a given scale or diffusion time. In this new space, interconnected cortical regions are nonlinearly projected to fall close to each other, whereas weakly connected regions are mapped to distant locations in the eigenspace.
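The matrix construction and embedding steps above can be sketched with plain NumPy. The paper used BrainSpace for this; the code below is an illustrative re-implementation under the stated settings (10% row-wise density, diffusion time t = 1), not the original pipeline, and array shapes are illustrative:

```python
import numpy as np

def build_affinity(ts, density=0.10):
    """Correlate regional time-series, Fisher z-transform, and keep only the
    top `density` fraction of entries per row.

    ts : (n_timepoints, n_regions) parcellated rs-fMRI time-series."""
    fc = np.corrcoef(ts.T)
    np.fill_diagonal(fc, 0.0)
    z = np.arctanh(np.clip(fc, -0.999, 0.999))       # Fisher z-transform
    k = int(np.ceil(density * z.shape[1]))
    row_thresh = np.sort(z, axis=1)[:, -k][:, None]  # row-wise top-k cutoff
    sparse = np.where(z >= row_thresh, np.maximum(z, 0.0), 0.0)  # keep positive affinities
    return np.maximum(sparse, sparse.T)              # symmetrise

def diffusion_embedding(w, n_components=30, alpha=0.5):
    """Minimal diffusion-map embedding (diffusion time t = 1)."""
    n_components = min(n_components, w.shape[0] - 1)
    d = w.sum(axis=1)
    d[d == 0] = 1.0                                  # guard against isolated nodes
    w = w / np.outer(d, d) ** alpha                  # alpha (density) normalisation
    s = w.sum(axis=1, keepdims=True)
    s[s == 0] = 1.0
    p = w / s                                        # Markov transition matrix
    vals, vecs = np.linalg.eig(p)
    order = np.argsort(-vals.real)
    vals, vecs = vals.real[order], vecs.real[:, order]
    # drop the trivial constant eigenvector; scale by eigenvalues (t = 1)
    return vecs[:, 1:n_components + 1] * vals[1:n_components + 1]
```

With this construction, strongly interconnected regions receive similar rows of `p` and therefore land close together in the embedding, which is the property the idiosyncrasy analyses rely on.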
For our study, the diffusion time was set to 1, and the α parameter, which controls the influence of the density of sampling points on the manifold (from maximal influence at α = 0 to no influence at α = 1), was set to α = 0.5 to retain the global relations between data points in the embedded space, following prior work 11,46,125. Since diffusion maps capture the main structures of the data along a few cardinal dimensions, we selected the first 30 eigenvectors (i.e., those corresponding to the largest eigenvalues) to represent each individual embedding, similar to a previous study 126. To assess differences between TD and ASD, we averaged the connectivity matrices of all individuals in our dataset to build a mean connectivity matrix, which was subsequently used to construct a reference embedding. This reference embedding was used as a representation of the canonical functional connectivity template. Because diffusion mapping may take the individual datasets into different Euclidean spaces, the standard Euclidean distance between the elements of these spaces is not meaningful. To bring the data into the same Euclidean space, we used a change of basis operator to map all the individual embeddings to the reference embedding 127. In this way, we can compute the Euclidean distance within and between datasets, allowing us to compare the individual diffusion maps to the reference embedding. Finally, to identify the ICNs, all embeddings (including the reference) were clustered into seven components using a Gaussian mixture model with a full covariance matrix. Each point in the embedding was assigned to the cluster with the highest a posteriori probability. The mixture model was initialized with the seven ICNs proposed in ref. 128.

Idiosyncrasy descriptors. Two different approaches were proposed to characterize idiosyncrasy, namely spatial- and manifold-based distance measures.
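The change-of-basis step that maps each individual embedding onto the reference can be realized, for example, with an orthogonal Procrustes transform. This is one common way to implement such an operator and is shown here as an assumption, not necessarily the exact operator of ref. 127:

```python
import numpy as np

def align_to_reference(emb, ref):
    """Align an individual embedding to the reference space with the orthogonal
    transform minimising ||emb @ rot - ref||_F (orthogonal Procrustes).

    emb, ref : (n_points, n_components) embeddings with matched rows."""
    u, _, vt = np.linalg.svd(emb.T @ ref)
    rot = u @ vt          # optimal orthogonal change of basis
    return emb @ rot
```

After this alignment, Euclidean distances between individual and reference points become comparable, which is what the diffusion-distance descriptor below requires.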
For the spatial measure, we used SD, which was computed for each point as the geodesic distance to the closest point in the corresponding reference network 3,46 . The second measure to characterize idiosyncrasy is based on diffusion distance, which is approximated using the Euclidean distance in the eigenspace between points of each individual to the reference embedding, such that points that fall far apart from their corresponding reference points show a high difference in their original rs-fMRI time-series. Instead of computing the distance between pairs of points, however, we take advantage of the clustering and compute the diffusion distance from a given point to its closest point in the reference embedding that belongs to the same cluster (i.e., ICN). Let $\Psi_c^r$ be the set of points of the reference embedding in cluster c. For each point $\phi_c \in \Phi_c^i$ of the individual embedding i in the same cluster, our diffusion-based idiosyncrasy descriptor is then computed as follows:

$$\mathrm{DD}(\phi_c) = \min_{\psi \in \Psi_c^r} \left\lVert \phi_c - \psi \right\rVert.$$

In this way, this descriptor captures the variability that exists in the connectivity patterns that characterize a specific ICN among all the individuals in our dataset. Furthermore, to assess that idiosyncrasy differences are not related to the size of the ICNs, which would rather indicate alterations in connectivity, we aimed to quantify the spatial variability at the network level by computing the overlap and the extent of shifting of each individual clustering from the canonical reference for each ICN. To do so, we used the Dice similarity coefficient 129 , which is in common use in neuroimaging research:

$$\mathrm{Dice}(A, B) = \frac{2\,\lvert A \cap B \rvert}{\lvert A \rvert + \lvert B \rvert},$$

the Jaccard index 130 :

$$\mathrm{Jaccard}(A, B) = \frac{\lvert A \cap B \rvert}{\lvert A \cup B \rvert},$$

and MSD:

$$\mathrm{MSD}(A, B) = \frac{1}{\lvert A \rvert} \sum_{a \in A} d(a, B),$$

where A and B are respectively the reference and individual clusters corresponding to a specific network, |·| denotes cardinality, and d(a,B) is the geodesic distance between point a in cluster A to the closest point in cluster B.
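Under the definitions just given, the three network-level descriptors reduce to a few lines of Python. This is a hypothetical sketch, with clusters represented as sets of vertex indices and the geodesic distances assumed to be precomputed in a pairwise matrix.

```python
import numpy as np

def dice(A, B):
    """Dice overlap between two vertex sets: 2|A∩B| / (|A| + |B|)."""
    A, B = set(A), set(B)
    return 2 * len(A & B) / (len(A) + len(B))

def jaccard(A, B):
    """Jaccard index: |A∩B| / |A∪B|."""
    A, B = set(A), set(B)
    return len(A & B) / len(A | B)

def msd(A, B, dist):
    """Mean shortest distance from each vertex in A to the closest
    vertex in B, given a pairwise geodesic distance matrix `dist`."""
    B = list(B)
    return float(np.mean([dist[a, B].min() for a in A]))
```

Low Dice/Jaccard and high MSD for an individual network relative to the reference then indicate an idiosyncratic network layout, as used in the analyses below.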
In this case, the idiosyncrasy of the individual functional connectomes is indicated by low Dice overlaps and high MSD from the reference clustering. The spatial variability existing in the network locations among individuals is reflected in the spreading of the ICN probability maps at the group level. The higher the spreading, the more idiosyncratic are the individuals in a given cohort. Therefore, we further characterized idiosyncrasy using a measure of uncertainty based on the entropy of the group-wise probability maps obtained from clustering. In other words, this descriptor measures how evenly the probability mass is spread among the different ICNs at each location. Entropy is minimized when most of the probability mass is concentrated on a particular network, indicating a location with very low variability among individuals (i.e., the location is assigned to the same ICN in most individuals). On the other hand, entropy is increased when the probability mass at a given location is spread among several ICNs (i.e., the location is assigned to different ICNs across individuals). Analysis of idiosyncrasy. Idiosyncrasy was quantified using surface and diffusion distance measures, which we used to perform the following analyses: • Assessing idiosyncrasy differences between ASD and TD. For the spatial descriptors of idiosyncrasy (i.e., Dice/Jaccard overlap and MSD), general linear models (GLM) predicting each of the idiosyncrasy measures based on group diagnosis were used to assess differences at the network level and cortex-wise. For the latter, overall Dice/Jaccard and MSD were computed as the weighted average of the corresponding scores for each ICN, using the size of the reference networks as weights. All results from our network-level analyses were corrected for multiple comparisons using Benjamini-Hochberg FDR correction 131 . GLMs were also used in the surface-based analysis to study the differences in DD and SDs. 
For entropy, network-wise differences were analyzed at the group level using two-sample t-tests. • Age effects. To investigate age-specific differences in idiosyncrasy, the surface-based analysis to study differences in DD and SDs was further repeated for children (86 TD and 88 ASD individuals with age <18) and adults (86 TD and 69 ASD individuals with age ≥18) separately. • Association of idiosyncrasy with ASD symptomatology. Idiosyncrasy descriptors were correlated with ADOS CSS rather than raw ADOS scores, since participants with different ages and language abilities undergo assessments using different ADOS modules. For the ABIDE sample used in our work, however, CSS (or the necessary information to derive them) were only available for a small subset of individuals. We therefore resorted to an approximation by using a proxy CSS approach based on social and communication ADOS scores 67 , which are available for all subjects. The proxy CSS were derived by mapping a subject's age, total ADOS score (social and communication), and ADOS module through a lookup table. Since ABIDE includes modules 2-4, we used the lookup table provided by ref. 65 for modules 2-3, and the table provided by ref. 66 for module 4. Results were corrected for multiple comparisons using the FDR procedure. For the correlations of idiosyncrasy with CSS, we z-scored the data with respect to TD and regressed out the effects of age, sex, and site prior to performing the correlations. • Associations to morphology. To assess whether there is an association between morphological alterations in ASD and functional idiosyncrasy, we correlated group-wise differences in cortical thickness and mean curvature index with surface-based idiosyncrasy measures (i.e., using surface and diffusion distances). We accounted for spatial autocorrelations using nonparametric permutation tests (i.e., spin tests) 51 . Gene enrichment analysis.
Many risk factors have been associated with neurodevelopmental disorders, with genetic factors playing an important role in the etiology of ASD 132,133 . We therefore aimed to investigate the genetic correlates of idiosyncrasy in ASD. Using a similar approach to the NeuroVault gene decoding tool 57,134 , coherent associations between our idiosyncrasy maps (i.e., t-maps of surface and diffusion distances) and postmortem gene expression patterns from the AIBS were measured to identify the set of genes with significant spatial overlaps. Significant genes were obtained by regressing each gene against our cortical map of idiosyncrasy (e.g., DD) for each donor and using a one-sample t-test to determine whether the slopes across all six donors were different from 0. To correct for multiple comparisons, the procedure was repeated by randomly rotating our maps of idiosyncrasy using 1000 spin permutations 51 , which were compared with the original t-statistic to assess gene significance. Gene expressions for all six donors in the AHBA dataset were obtained using abagen (https://github.com/rmarkello/abagen). Only genes that were consistently expressed across donors (i.e., average inter-donor correlation ≥0.5) were considered for our analyses 58 . Next, we carried out developmental gene expression analysis and disease enrichment analysis. In the former, we identified the genes whose expressions significantly overlapped with our idiosyncrasy maps. The identifiers of this final set of significant genes were then submitted to the cell-type-specific expression analysis (CSEA) developmental expression tool (http://genetics.wustl.edu/jdlab/csea-tool-2/), where they were compared against developmental expression profiles from the BrainSpan dataset (http://www.brainspan.org) to identify the developmental time windows across brain regions in which these genes are expressed.
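The per-gene spatial association test described above amounts to one regression per donor followed by a one-sample t-test on the slopes. The sketch below uses made-up array shapes and names; the real analysis additionally assesses significance against a spin-permutation null, which is omitted here.

```python
import numpy as np
from scipy import stats

def gene_map_association(idio_map, gene_expr):
    """Regress each donor's expression of one gene on the cortical
    idiosyncrasy map, then t-test the slopes across donors against 0.
    idio_map: (n_regions,); gene_expr: (n_donors, n_regions)."""
    x = (idio_map - idio_map.mean()) / idio_map.std()
    slopes = [np.polyfit(x, g, 1)[0] for g in gene_expr]
    return stats.ttest_1samp(slopes, 0.0)
```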
In the second analysis (i.e., disease enrichment), we used a recently published catalog of genes with differential expression information (i.e., fold change values) for autism, schizophrenia, and bipolar disorder 135 . Here, we used robust linear regression to assess the relationship between the t-statistics derived from the previous spatial analysis (i.e., denoting the association of gene expression with our idiosyncrasy map) and their corresponding log fold-changes in each neuropsychiatric disorder 136 . Results for schizophrenia and bipolar disorder were included as baselines, since these disorders share similar genetic variants with ASD 60 . Guanine-cytosine content was used as an additional covariate to control for possible effects related to genome size in microarray data 137,138 . Relation to degree centrality. Given the little consensus in the literature on the directionality of connectivity alterations in ASD, our purpose here is to investigate the relationship between idiosyncrasy and connectivity alterations to elucidate the role of idiosyncrasy in these alterations. To study this putative association, we used two different measures of centrality, namely degree and eigenvector centrality. The first measure is defined as the total number of connections whose linear product-moment correlation coefficients are above a predefined threshold, used to eliminate connections with low temporal correlation attributable to signal noise 22,139 . Eigenvector centrality is based on the eigenvector with the largest eigenvalue of the connectivity matrix. Following prior work that used these measures to study connectivity alterations in ASD 22,140 , the threshold for our analyses was set to 0.2.
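Both centrality measures are straightforward to compute from a region-by-region correlation matrix. This is a generic sketch with illustrative names, not code from the study.

```python
import numpy as np

def degree_centrality(conn, thresh=0.2):
    """Count of supra-threshold connections per region (self-connections
    on the diagonal are excluded)."""
    A = np.array(conn, dtype=float)
    np.fill_diagonal(A, 0.0)
    return (A > thresh).sum(axis=1)

def eigenvector_centrality(conn):
    """Absolute entries of the eigenvector associated with the largest
    eigenvalue of the (symmetric) connectivity matrix."""
    A = np.asarray(conn, dtype=float)
    evals, evecs = np.linalg.eigh(A)
    return np.abs(evecs[:, np.argmax(evals)])
```

Unlike degree centrality, the eigenvector variant weights a region's connections by the centrality of the regions it connects to, which is why both are reported.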
Since idiosyncrasy is an inherent property that is also present in TD individuals (presumably to a lower degree than in ASD), we first analyzed the relationship of idiosyncrasy with hyper- and hypo-connectivity based on linear product-moment correlations of the statistical t-maps of degree and eigenvector centrality with those of DD and SDs, using spin tests 51 . Positive degree centrality values would indicate hyper-connectivity in ASD, whereas negative values indicate hypo-connectivity. The same applies to our idiosyncrasy descriptors, with positive/negative SDs, for instance, pointing to higher/lower deviations from the canonical reference networks relative to TD. Then, we investigated the impact of idiosyncrasy on the potential connectivity alterations observed when this phenomenon is ignored. Surface-based analysis to find differences in connectivity between ASD and TD was performed based on degree centrality (or eigenvector centrality). This analysis was initially conducted without considering idiosyncrasy and then repeated controlling for idiosyncrasy by incorporating SD and DD as additional covariates in our GLMs. Statistics and reproducibility. Group-wise idiosyncrasy differences and correlational analyses controlled for site, sex, and age effects. For analyses involving spatial idiosyncrasy descriptors (i.e., Dice/Jaccard overlap and SDs), the surface area was further included as a nuisance covariate. For all our surface-based analyses, threshold-free cluster enhancement (TFCE) was used with 10,000 permutations to correct for multiple comparisons across the cortical surfaces 141 . A significance level of 0.05 was used for all statistical tests. Network-level analyses, including associations between idiosyncrasy and CSS, were corrected for multiple comparisons using Benjamini-Hochberg FDR correction 131 .
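The covariate handling used throughout (regressing out site, sex, and age, and z-scoring with respect to the TD group) can be sketched with two small helpers. The helper names are hypothetical; the study fitted these terms inside its GLMs.

```python
import numpy as np

def residualize(y, covars):
    """Remove nuisance covariates (e.g. age, sex, site dummies) from a
    measure via ordinary least squares; returns the residuals."""
    X = np.column_stack([np.ones(len(y)), covars])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return y - X @ beta

def zscore_wrt_controls(y, is_td):
    """z-score every subject relative to the TD (control) subgroup."""
    mu, sd = y[is_td].mean(), y[is_td].std()
    return (y - mu) / sd
```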
For the correlations of idiosyncrasy with CSS, the data were first z-scored with respect to TD, and we regressed out the effects of age, sex, and site prior to performing the correlations. Correlation of cortical thickness and mean curvature index with our idiosyncrasy maps was carried out while accounting for spatial autocorrelations using nonparametric permutation tests 51 . For the gene enrichment analysis, significant genes were obtained by regressing each gene against our cortical map of idiosyncrasy for each donor and using a one-sample t-test to determine whether the slopes across all six donors were different from 0. We corrected for multiple comparisons by randomly rotating our maps of idiosyncrasy using 1000 spin permutations 51 . The reproducibility of idiosyncrasy differences found using the whole ABIDE data (n = 329) was assessed for each acquisition site separately (IP, n = 32; NYU, n = 126; PITT, n = 42; TCD, n = 37; USM, n = 92) based on surface and diffusion distances. This analysis was also repeated for children (n = 174, age <18) and adults (n = 155, age ≥18) separately. Reporting Summary. Further information on research design is available in the Nature Research Reporting Summary linked to this article. Data availability The imaging and phenotypic data were provided, in part, by the Autism Brain Imaging Data Exchange initiative (ABIDE-I and II; https://fcon_1000.projects.nitrc.org/indi/abide). The specific subsets of data that were used in the present work are available from the authors upon request.
DiscML: an R package for estimating evolutionary rates of discrete characters using maximum likelihood Background The study of discrete characters is crucial for the understanding of evolutionary processes. Even though great advances have been made in the analysis of nucleotide sequences, computer programs for non-DNA discrete characters are often dedicated to specific analyses and lack flexibility. Discrete characters often have different transition rate matrices, variable rates among sites and sometimes contain unobservable states. To obtain the ability to accurately estimate a variety of discrete characters, programs with sophisticated methodologies and flexible settings are desired. Results DiscML performs maximum likelihood estimation for evolutionary rates of discrete characters on a provided phylogeny with the options that correct for unobservable data, rate variations, and unknown prior root probabilities from the empirical data. It gives users options to customize the instantaneous transition rate matrices, or to choose pre-determined matrices from models such as birth-and-death (BD), birth-death-and-innovation (BDI), equal rates (ER), symmetric (SYM), general time-reversible (GTR) and all rates different (ARD). Moreover, we show application examples of DiscML on gene family data and on intron presence/absence data. Conclusion DiscML was developed as a unified R program for estimating evolutionary rates of discrete characters with no restriction on the number of character states, and with flexibility to use different transition models. DiscML is ideal for the analyses of binary (1s/0s) patterns, multi-gene families, and multistate discrete morphological characteristics. Electronic supplementary material The online version of this article (doi:10.1186/1471-2105-15-320) contains supplementary material, which is available to authorized users. Some of these programs have advanced (or realistic) features that are not implemented in other programs. 
For instance, the BayesTraits program implements a Γ distribution for rate variation [8]. The GLOOME program allows the estimation of prior root probabilities of the character states [10,15]. The BadiRate program allows variable birth rates and death rates, and corrects for unobservable data [13]. Furthermore, many multistate characters do not necessarily evolve in a BD manner [16], and should therefore be modeled using transition rate matrices other than BD. In order to perform accurate rate estimation on a variety of discrete characters, we have developed a unified program DiscML by implementing the advanced features mentioned above as well as flexible options for transition rate matrices. Implementation DiscML estimates the evolutionary rates of discrete characters by fitting the distribution of all character states (the data) on a given phylogeny. The data need to be in a matrix format (vector format for a single site) as required in many other phylogenetic programs in R (see examples in Additional file 1). The provided phylogeny is required to have branch lengths, as branch lengths will be used as a relative time scale in the analysis. The evolutionary rates, transition rate matrices, and additional parameters discussed below will be optimized to maximize the likelihood of the data. The optimization is achieved using the PORT routines [17] implemented in the nlminb function in R. Implementation of rate variation in the analysis Rate variation among the character sites has long been recognized and implemented in DNA analyses [18], but has been missing from most analyses of non-DNA discrete characters (but see [8]). DiscML considers rate variation among the character sites by implementing a discrete Γ distribution (with the option of alpha=TRUE). Estimation of prior root probabilities Most programs for the analysis of discrete characters assume only uniformly distributed prior root probabilities, e.g., $\pi_1 = \pi_2 = \cdots = \pi_a = 1/a$ (a is the total number of character states). DiscML allows the estimation of prior root probabilities (π a ) for different character states (with the option of rootprobability=TRUE). Flexibility on both the transition model and the number of character states DiscML is flexible on both the size and type of the transition rate matrix (Q), which can be customized by users. This option could open the door for novel evolutionary analyses on different discrete characters. Several transition rate matrices are pre-determined in DiscML: model="ER" (equal rates, i.e., all entries in equation 1 are equal), model="SYM" (symmetric, i.e., α 1 = α 2 , β 1 = β 2 , γ 1 = γ 2 , ..), and model="ARD" (all rates different, i.e., all entries are free to vary). ER and SYM are reversible matrices, while ARD matrices are irreversible. Finally, all transition rate matrices (Qs) are calibrated [19], i.e., each Q satisfies $-\sum_a \pi_a q_{aa} = 1$, so that the evolutionary rate parameter (μ) is the average number of transition events per site per evolutionary time unit [20]. Forced reversibility and flexible irreversible options When the prior root probabilities (π) for different character states are estimated, reversible transition matrices will no longer necessarily result in reversible evolutionary processes (because of potentially different probabilities of character states). Since it is sometimes of biological interest to assume reversibility (i.e., the expected x → y changes equal to the y → x changes), DiscML can allow forced reversibility by setting reversible=TRUE. In practice, reversibility is obtained by multiplying the corresponding root probabilities (equation 4) to the entries in reversible transition matrices, e.g., ER and SYM. Such a practice is conceptually the same as the general time-reversible (GTR) DNA substitution model [21]. In DiscML, model="GTR" is equivalent to the combination of model="SYM" and reversible=TRUE.
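As an illustration of what such a rate matrix looks like, the sketch below builds an equal-rates (ER) matrix and turns it into branch transition probabilities via the matrix exponential. DiscML itself is an R package; this Python snippet is a language-agnostic illustration of the underlying model, not DiscML's API.

```python
import numpy as np
from scipy.linalg import expm

def er_rate_matrix(n_states, rate=1.0):
    """Equal-rates (ER) instantaneous rate matrix: every off-diagonal
    entry equals `rate`; diagonals make each row sum to zero."""
    Q = np.full((n_states, n_states), rate)
    np.fill_diagonal(Q, 0.0)
    np.fill_diagonal(Q, -Q.sum(axis=1))
    return Q

# Transition probabilities along a branch of length t: P(t) = exp(Q t)
Q = er_rate_matrix(3)
P = expm(Q * 0.1)
```

The rows of P(t) are proper probability distributions for any branch length t, which is what makes pruning-style likelihood calculations on the phylogeny possible.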
Similarly, when the prior root probabilities for different character states are estimated, forced reversibility can be applied to the BD related matrices (equation 5). In DiscML, the default setting is reversible=FALSE and users have the flexibility to conduct analysis by assuming irreversible evolutionary processes. Unlike in reversible processes, the root position can greatly affect the maximum likelihood calculation in irreversible cases [22,23]. Therefore, it is only meaningful to perform irreversible analysis on a rooted tree. If the provided phylogenetic tree is unrooted, DiscML will first reroot the tree by midpoint rooting, and perform analysis on the midpoint rooted tree. Correction for unobservable data Some characters may contain unobservable character states, which can only be inferred indirectly from the presence of observable states of the same characters in related taxa. Ancient characters can be lost from all examined extant taxa, and result in unobservable data. DiscML provides the option of zerocorrection=TRUE to calculate the likelihood conditional on a pattern being observable following [24], i.e., $L^{*} = L / (1 - L^{-})$, where $L^{-}$ is the likelihood of unobservable patterns. The correction for unobservable data (shown as '+0' in Table 1) is essential for systems such as gene family data due to the complete loss of some ancient genes, but not suitable for single-site analyses and for systems in which all character states are observable (e.g., nucleotide bases). Site and branch specific estimations Even though the default setting of DiscML is to perform rate estimation by fitting the distribution pattern of all character sites on a phylogeny, there is an option to perform rate estimation on individual sites (ind=TRUE). Individual rates can be graphically displayed using plotmu=TRUE. Furthermore, DiscML allows branch specific rate estimation, which can be specified using '$' on branches in the provided tree file.
For instance, (((taxon1$1: 0.01, taxon2$1: 0.01)$3: 0.01, taxon3$2: 0.02)$3: 0.01, taxon4$2: 0.03); specifies three rates, one for the branches leading to taxon1 and taxon2 ($1), one for the branches leading to taxon3 and taxon4 ($2), and one for the remaining branches ($3). Since the modified tree files are no longer in the conventional Newick format, we have developed a function read.tree2 in DiscML to read such modified tree files. Additional features DiscML allows binary (1s/0s) analysis on data with more than two character states by converting all non-zero characters to 1s with simplify=TRUE. [Table 1 notes: The parameter μ is the estimated evolutionary rate of the characters. "1s/0s only" indicates binary analysis by converting all non-zero characters to 1s using simplify=TRUE, '+0' indicates the correction for unobservable data using zerocorrection=TRUE, '+Γ' indicates the implementation of a discrete Γ distribution using alpha=TRUE, '+π' indicates the estimation of prior root probabilities using rootprobability=TRUE, '+π REV' indicates the estimation of prior root probabilities with forced reversibility using rootprobability=TRUE and reversible=TRUE.] [Figure 1 caption: Phylogenetic relationship of three Bacillaceae (B1, B2, B3) clades, on which the evolutionary rates of gene families are estimated using DiscML. A, a constant rate is estimated on each phylogeny; B, separate rates are estimated for external branches (μ1) versus internal branches (μ2) on each phylogeny. These three clades were studied in our previous study on gene presence, absence, and fragments [20]. Gene families are recategorized, with gene absence and fragments as character state 0, single-copy genes as 1, and gene families with two or more members as 2.] Results and discussion DiscML was first tested using the gene family data on three Bacillaceae clades (Figure 1A, Additional file 1 and [20]). In the previous study [20], we distinguished gene fragments from gene absence and gene presence.
In this study, we eliminated the character state specific for gene fragments and re-categorized gene fragments as gene absence or character state 0, single-copy genes as character state 1, and gene families with two or more members as 2 (Additional file 1), so that the application of BD models on these data is meaningful. [Table 3 notes: 2ΔLnL = 732***, 14***, and 20*** for the three clades; μ1 is for external branches, while μ2 is for internal branches on each tree as illustrated in Figure 1B; ***P < 0.001 (df=1), as 2ΔLnL approximately follows a χ2-distribution.] It is worth noting that, though the number of character states is restricted to three here, DiscML is flexible and capable of analyzing a large number of character states. The performance of DiscML is found to be reliable. For instance, the ER+0 model with the option of simplify=TRUE in Table 1 is mathematically identical to the M00 model in [20]. The optimization in [20] was achieved using the Nelder-Mead simplex method [25], while the optimization in Table 1 was achieved using the PORT routines [17]. Importantly, the DiscML estimates are identical to the previous estimates for all three clades. As expected, the parameter-rich models consistently outperformed the nested simplistic models (e.g., LnL BDARD > LnL BDISYM > LnL BDER; LnL ARD > LnL SYM > LnL ER). Consistent with previous studies [3,20,26], rate estimates in closely related clades tend to be higher than those in distantly related clades due to the transient nature of many acquired genes (Table 1). Tested on an Intel Core i7 (3.4 GHz) 16 GB RAM Dell desktop, the computation using DiscML is fast (Table 2). DiscML was developed to allow separate rates among branches since evolutionary rates can vary among lineages [27][28][29]. In the three Bacillaceae clades, we assigned separate rates between external branches (μ1) and internal branches (μ2) as illustrated in Figure 1B.
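The nested-model comparisons above follow the usual likelihood-ratio recipe, which can be sketched as follows (a generic illustration in Python; DiscML itself runs in R):

```python
from scipy.stats import chi2

def lrt(lnl_full, lnl_nested, df=1):
    """Likelihood-ratio test of a nested model: the statistic
    2*(LnL_full - LnL_nested) approximately follows chi-square(df)."""
    stat = 2.0 * (lnl_full - lnl_nested)
    return stat, chi2.sf(stat, df)
```

For example, a parameter-rich model that improves the log-likelihood by 5 units over a nested model with one fewer parameter gives 2ΔLnL = 10 and P < 0.01.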
Our results in Table 3 support the previous findings of higher gene turnover rates on external branches than those on internal branches [26,30]. It is often of interest for users to know the individual rate of each character site. Previously, we have shown that the mitochondrial intron in the 21S rRNA gene undergoes very rapid turnover in yeast [31]. In this study, we estimated the individual rates of all 17 mitochondrial introns on the yeast phylogeny ( Figure 2 and Additional file 1) based on the intron distribution pattern (Additional file 1). On the plot generated by DiscML using ind=TRUE (Figure 3), users can visually compare the individual rates of different introns. For instance, the introns at sites 7 and 8 have faster turnover rates than the 21S rRNA intron at site 17 ( Figure 3). The R commands used in the study are provided in Additional file 1. Conclusion We illustrated the versatility of DiscML on different types of data and analyses. With a great flexibility and fast computational speed, we are confident that DiscML can be used in a variety of studies on different discrete characters.
Effect of Natural Additives as Coconut Milk on the Shooting and Rooting Media of in vitro Barhi Date Palm (Phoenix dactylifera L.) The objective of the research study was to determine the effect of the addition of different concentrations of three types of natural additives on Date Palm cv. Barhi: (1.25 g/l, 2.5 g/l, 5.0 g/l for Casein Hydrolysate and 10%, 20%, 30% for Coconut Milk and Yeast Extract), in addition to the control (0.05 mg/l BA) for the shooting stage and (0.1 mg/l NAA, 3 g/l AC) for the rooting stage. The results show that the use of 30% Coconut Milk achieved a high number of shoots, and the highest shoot length was recorded with 10% Coconut Milk. In the date palm rooting stage, the results show that the use of 30% Coconut Milk increased the number of roots, shoot thickness and rooting percentage. However, root length was increased with 10% Coconut Milk. The lowest values were recorded with the use of Yeast Extract in this stage. Introduction Date palm (Phoenix dactylifera L.) has great economic importance and agricultural uses throughout human history. It is also one of the oldest cultivated fruit trees in the world. Date palm is a very important crop in the Middle East, since it can grow well in both semi-dry desert areas and newly cultivated land. The Arab world produces about 80% of the total world production of dates, and Egypt is the world's largest date-producing country [9], [7]. In Egypt, the distribution of date palm trees covers a large area extending from Aswan to the north Delta, besides the Oases of Siwa, Bahriya, Farafra, Kharga and Dakhla. The number of fruitful female palms in Egypt is about 15 million, producing 1,694,813 tons of dates [9].
Date palm is commonly propagated by ground offshoots; however, a female date palm produces only 10-20 offshoots in its entire life [20], which is a limiting factor for the propagation of commercial cultivars. A non-conventional technique of in vitro culture is widely used in many species including date palm [14]. The production of plants through in vitro culture has been successfully introduced in many species [23]. The technique of tissue culture for propagating date palm, also called in vitro propagation, has many advantages, such as large-scale multiplication throughout the year, production of healthy female cultivars (disease- and pest-free) or males having superior pollen, and production of genetically uniform plants [19]. Recently, natural products such as yeast and plant extracts have been applied in vitro. Some undefined components such as Yeast Extracts, Fruit Juices and Protein Hydrolysates were frequently used in nutrient media as opposed to defined amino acids or vitamins as a further supplementation [4]. In addition, some other natural additives such as Coconut Milk are frequently used as a popular addition to the media of orchid cultures in the floral industry of tissue culturing [5].
By-Products of Palm Trees and Their Applications, Materials Research Forum LLC, Materials Research Proceedings 11 (2019) 186-192, doi: https://doi.org/10.21741/9781644900178-13
Natural extract could be used at a 6% concentration as a replacement for sucrose [7]. The utilization of natural additive compounds instead of hormones in culture media may decrease the possibility of genetic instability in plants. Organic additives such as Coconut Water and Casein Hydrolysate have been used to increase embryogenic callus growth and somatic embryogenesis in several plant species as well as date palm [6].
The aim of the research study was to determine the effect of different concentrations of combinations of natural additives such as Coconut Milk, Casein Hydrolysate and Yeast Extract, with the goal of enhancing in vitro date palm cv. Barhi shoot and root proliferation. Materials and methods Explant and sterilization: The experiments were carried out in the Tissue Culture Laboratory for Date Palm Research and Development, Agriculture Research Center, Giza, Egypt. Four-year-old female offshoots of date palm cv. Barhi were collected and used as explants. Preparation of explants was done by removing the roots and outer green mature leaves from the offshoots, then reducing the size to less than 25 cm. Remaining mature leaves were removed gradually from the bottom of the offshoot to the top in the laboratory [14]. The gradual removal of white young leaves and the surrounding white fibrous leaf sheath resulted in 5 cm shoot tips, which were further trimmed to 2 cm for explant use. All excised shoot apices were stored temporarily in an anti-oxidant solution (150 mg/l ascorbic acid and 100 mg/l citric acid) prior to surface sterilization. Under aseptic conditions, shoot apices were soaked in 70% ethanol solution for 30 seconds, followed by immersion in mercuric chloride (1.0 g/l) for 5 min, then washed once thoroughly with sterilized distilled water. After that, additional leaf primordia were removed from the sterilized explants, and these explants were then sterilized in 50% (v/v) commercial bleach (Clorox; 5.25% w/v sodium hypochlorite, NaOCl) plus 1 drop of Tween 20 for 15 min with rotary agitation, and rinsed three times with sterilized distilled water. Effect of different natural additives on shooting and rooting stages: Shoot clusters which had been obtained from indirect somatic embryogenesis as recommended by El-Dawayati et al. (2018) were used as explants in this experiment.
Different concentrations of the three natural additives (1.25 g/l, 2.5 g/l and 5.0 g/l for Casein Hydrolysate; 10%, 20% and 30% for Coconut Milk and Yeast Extract) were added to a standard nutrient growth medium for the shooting and rooting stages. Controls were prepared by culturing the same explants on the same media under the same conditions without any supplements, to study the additives' effects on shoot development during the shooting and rooting stages. All manipulations were completed under aseptic conditions. The standard growth medium for the shooting stage was composed of ¾-strength MS basal nutrient medium according to Murashige and Skoog with vitamins [16, 22], with the addition of (in mg/l unless stated otherwise): 100 myo-inositol; 80 adenine sulfate; 170 NaH2PO4.2H2O; 0.3 Ca pantothenic acid; 0.4 thiamine-HCl; 2 glycine; 0.5 nicotinic acid; 0.5 pyridoxine-HCl; 30 g/l sucrose; the growth regulators 0.05 BA and 0.1 NAA; and 6 g/l agar (Agar-agar/Gum agar; Sigma Chem. Co., St. Louis, MO) [1]. Standard growth medium for the rooting stage: the same three natural additives at the same concentrations were added to the rooting medium, which consisted of the same components as the standard shooting medium but supplemented only with 0.1 mg/l NAA as growth regulator, with the addition of 1.5 g/l activated charcoal and (in mg/l): 0.3 Ca pantothenic acid; 0.4 thiamine-HCl; 2 glycine; 0.5 nicotinic acid; 0.5 pyridoxine-HCl; 100 myo-inositol; 200 glutamine; 1g; 30000 sucrose; and 6000 agar. The pH was adjusted to 5.7 before adding the agar/gelrite and autoclaving the medium at 1.2 kg/cm², equivalent to 121 °C, for 20 min.
The nutrient medium was dispensed into small jars, twenty-five ml of medium each, for the shooting stage. For the rooting stage, plantlets were cultured in tubes (25 x 250 mm), each containing 25 ml. Explants of each treatment and of the controls were transferred and repeatedly recultured (2 recultures, every 8 weeks) onto fresh medium of the same composition [10]. All samples were incubated for 16 hours under 1500 lux light for the shooting stage and 3000 lux for the rooting stage, followed by 8 hours of darkness, at 27 ± 2 °C for the shoot multiplication stage. Subculturing was performed twice for the control samples and three times for the natural additives at their three different concentrations [4]. All procedures were carried out in a decontaminated horizontal laminar flow hood. The experimental design was completely randomized with three replicates per treatment. Data recorded for the treatments were first analyzed as a whole using the aforementioned statistical design and then divided into groups as follows [14]:

Table 1. Treatments: different concentrations of the three natural additives (1.25, 2.5 and 5.0 g/l for Casein Hydrolysate; 10%, 20% and 30% for Coconut Milk; 1.25, 2.5 and 5 g/l for Yeast Extract).
T1: Control, 0.05 mg/l BA (shooting stage)
T2: Control, 0.1 mg/l NAA + 3 g/l AC (rooting stage)
T3: Casein Hydrolysate 1.25 g/l
T4: Casein Hydrolysate 2.5 g/l
T5: Casein Hydrolysate 5 g/l
T6: Coconut Milk 10%
T7: Coconut Milk 20%
T8: Coconut Milk 30%
T9: Yeast Extract 1.25 g/l
T10: Yeast Extract 2.5 g/l
T11: Yeast Extract 5 g/l

Data collected for the shooting stage were the number of shoots, shoot length per cluster (cm) and shoot thickness per cluster; for the rooting stage, the number of roots formed, rooting % and root length per cluster (cm). Statistical Analysis: A factorial design in a completely randomized arrangement was used and the data were subjected to analysis of variance. Differences among treatment means were determined using the L.S.D.
test at the 5% significance level according to Smith et al. [11].

Fig. 1: Effect of Coconut Milk, Casein Hydrolysate and Yeast Extract on the number of shoots of in vitro Barhi cv. (Phoenix dactylifera L.).
Fig. 2: Effect of Coconut Milk, Casein Hydrolysate and Yeast Extract on shoot length (cm) of in vitro Barhi cv. (Phoenix dactylifera L.).
Fig. 3: Effect of Coconut Milk, Casein Hydrolysate and Yeast Extract on shoot thickness (cm) of in vitro Barhi cv. (Phoenix dactylifera L.).

Introduction
Date palm (Phoenix dactylifera L.) has great economic importance and agricultural uses throughout human history, and is one of the oldest cultivated fruit trees in the world. Date palm is a very important crop in the Middle East, since it grows well both in semi-dry desert areas and on newly cultivated land. The Arab world produces about 80% of the total world production of dates, and Egypt is the world's largest date-producing country: its roughly 15 million fruitful female palms produce about 1.5 million tonnes (1,694,813 tons) of dates per annum [9], [7]. In Egypt, date palm trees cover a large area extending from Aswan to the north Delta, besides the Oases of Siwa, Bahriya, Farafra, Kharga and Dakhla.
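For readers who want to reproduce the comparison of treatment means, the ANOVA-plus-LSD computation described in the Statistical Analysis section can be sketched in Python; the replicate values below are hypothetical placeholders for illustration only, not the study's data.

```python
import numpy as np
from scipy import stats

# Hypothetical shoot-count replicates (three per treatment), illustration only
data = {
    "Control":             [2.0, 3.0, 3.0],
    "Coconut Milk 30%":    [6.0, 7.0, 7.0],
    "Yeast Extract 5 g/l": [2.0, 3.0, 2.0],
}
groups = list(data.values())
k, n = len(groups), len(groups[0])   # number of treatments, replicates per treatment
N = k * n

# One-way ANOVA across treatments
f_stat, p_val = stats.f_oneway(*groups)

# Error mean square (MSE) from within-group variability
ss_within = sum(((np.array(g) - np.mean(g)) ** 2).sum() for g in groups)
df_error = N - k
mse = ss_within / df_error

# Fisher's LSD at the 5% level: two treatment means differing by more
# than `lsd` are declared significantly different
t_crit = stats.t.ppf(1 - 0.05 / 2, df_error)
lsd = t_crit * np.sqrt(2 * mse / n)
```

Any pair of treatment means whose absolute difference exceeds `lsd` is then reported as significant at the 5% level.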
Results and discussion
Data in Fig. 1 show the effect of adding different concentrations of the three natural additives to Date Palm cv. Barhi cultures (1.25, 2.5 and 5.0 g/l Casein Hydrolysate; 10%, 20% and 30% Coconut Milk and Yeast Extract), in addition to the control (0.05 mg/l BA), on the number of shoots. The maximum increase in the number of shoots (6.66) was observed when Coconut Milk 30% (T7) was added to the control medium (0.05 mg/l BA) for the shooting stage, followed by Coconut Milk 20% (T6) and Casein Hydrolysate 2.5 g/l (T3) (5.00), then T5, T4, T8, T2, T9, T1 and T10 (2.66). Regarding in vitro shoot length (cm), T5 (Coconut Milk 10%) had the highest value (2.66), followed by T6 and T7 (11.83 and 11.50), while the lowest results were observed with Yeast Extract 5 g/l (7.33). The highest number of roots (6.67) was found in T7 (Coconut Milk 30%), followed by T6 and T5 (5.33 and 4.66), respectively, as shown in Fig. 5. The results presented in Fig. 6 for root length (cm) show that the highest values were obtained, and were identical, with Coconut Milk 10% (T5) and Coconut Milk 20%, with the control quite close (2 cm); on the other hand, the lowest results were obtained with Yeast Extract 5 g/l (0.50). Regarding Fig. 7, the same highest values were observed with three treatments (T1, T6 and T7), which recorded 100%, while the lowest result was obtained with Yeast Extract 5 g/l (13.33). Using natural additives instead of plant growth regulators in the culture medium may minimize or reduce the possibility of genetic instability in plants [4]. Our results showed the potential of natural additives to stimulate proliferation. Medium composition, genotype and plant hormones are some of the factors that affect multiplication. In [12], date palm cv. Maktoom showed higher shoot-bud multiplication in MS medium with a hormone combination of 1 mg/l NOA, 1 mg/l NAA, 4 mg/l 2iP and 2 mg/l BAP. Half-strength MS medium supplemented with 0.5 mg/l NOA and 0.5 mg/l Kin produced 23.5 shoot buds per explant after 3 months of multiplication in cv. Najda [13]. An average production of 18.2 buds per culture was reported in cv. Hillawi on MS medium containing 1 mg/l BAP and 0.5 mg/l TDZ [2]. Many researchers [20] have studied the effects of using plant extracts and yeast in in vitro culture. Undefined media components such as fruit juices, Yeast Extract and Casein Hydrolysate are frequently used in place of defined vitamins or amino acids, or even as further supplements. Since it is essential that a medium be the same each time it is prepared, materials that can differ in their composition are best avoided if at all possible, although improved results are sometimes obtained by their addition [4,5,15]. A high protein content was found in Coconut Milk, while high amino acid and vitamin contents were found in Casein Hydrolysate, which supports the view that these natural additives increase cell division. Additionally, both Casein Hydrolysate and Coconut Milk act like cytokinins, so both affect the growth of shoots. These results are in accordance with [8]. Duhamet and Gautheret [26] stated that Coconut Milk is frequently used as a stimulator of cell division, which is attributed to its high amino acid content, as mentioned in [17]. [18] suggested that 1 mg/l NAA induces optimal rooting, better than the same concentration of IBA or IAA. For cv. Mejhoul, [3] reported that shoots grew to an average of 13.4 cm, with an average of 4.6 roots per shoot and wide green leaves, on 3-month-old hormone-free half-strength MS medium. Yeast Extract showed an inversely proportional relationship with indoles, which could explain its low efficacy, as it gave the lowest results in number and length of roots; [19] corroborates these findings. [17] stated that Coconut Milk was the most effective component for secondary somatic embryo formation, and 5.00 g/l Casein Hydrolysate for growth vigor. Yeast Extract [2] produced the lowest readings at all assessed concentrations. In addition, chemical analyses were performed for chlorophyll a and b, amino acids, total carbohydrates, protein, indoles and phenols. The results showed that Coconut Milk 30% and Casein Hydrolysate 2.5 g/l gave the best results, both in responsiveness and in regenerative ability.

Summary
Our findings validate the supplementation of nutrient media with natural additives as growth regulators. Among the three sources (Casein Hydrolysate, Coconut Milk and Yeast Extract) compared to the control samples, Coconut Milk at 30%, 20% and 10% (with 20% and 30% quite close), followed by Casein Hydrolysate 2.5 g/l, were the most successful inducers and are recommended for the in vitro culturing of Barhi Date Palm (Phoenix dactylifera L.).
A New Filled Function for Non-smooth Global Optimization and Its Applications

This paper aims to find global optima of non-smooth optimization problems by designing a novel filled function which, regarded as a bridge delivering one minimizer to another, contains only one parameter for easy adjustment. The properties of the proposed filled function are analyzed theoretically and a new algorithm is presented. Finally, numerical experiments are reported to illustrate the performance of the constructed algorithm.

Introduction
With the rapid advancement of artificial intelligence, engineering design, financial economics, the national defence industry, etc., research on solving the nonlinear programming problem $(P): \min_{x \in \mathbb{R}^n} f(x)$ is a field of great significance. Up to now, many theoretical and computational contributions have been made [1,2,3]. In particular, the concept of the filled function, initially proposed in [4], has been further developed in [5,6,7,8,9,10], and is considered a practical and effective method for global optimization. The filled function approach uses the filled function as a bridge from the current local optimum to a better solution. The implementation of the filled function technique involves two stages: the first stage locally minimizes the original objective function, and the second minimizes the corresponding filled function. The filled functions proposed in [5,7,8,9,10] have some shortcomings. The filled function presented in [5] contains an exponential term whose value varies quickly, which may cause the computation to fail.
The filled function constructed in [5] is an improved version of the one proposed in [4]; its major disadvantage is that the filled function algorithm may search beyond the feasible region of the problem, along a line linking the newly discovered local minimum and a point located in a neighbourhood of the next undetected optimum, and finding such a direction is computationally difficult. The filled functions discussed in [7,8,9,10] were primarily designed for smooth global optimization problems. However, numerous practical cases are essentially non-smooth global optimization models. Our work is mainly devoted to extending filled function approaches to non-smooth optimization problems, which better suits practical needs. The solution of a global optimization problem usually involves two critical issues: how to escape local minima to discover a global solution, and how to confirm whether the current optimal solution is global. This article mainly addresses the former. The article is organized as follows: Section 2 constructs a new single-parameter filled function for non-smooth problems and analyzes it theoretically; Section 3 derives a new filled function algorithm; Section 4 reports numerical experiments on nonlinear optimization cases conducted with the proposed algorithm; and Section 5 summarizes the conclusions of this work.

One Single-Parameter Filled Function and Its Properties
A global minimization problem with box constraints is discussed in this paper, with the mathematical formulation $(P): \min_{x \in X} f(x)$, where $X$ is a box set. Most filled function algorithms for non-smooth global optimization rely on the Clarke generalized gradient (see [13] for more information). Let $x^* \in L(P)$, the set of local minimizers of problem $(P)$; we then define the concept of a filled function with respect to problem $(P)$.
A function $F(x, x^*)$ is called a filled function of $f$ at $x^*$ if and only if it satisfies the following conditions:
(1) $x^*$ is a strict maximizer of $F(x, x^*)$ on the box set $X$;
(2) any point $x \in X$ with $x \neq x^*$ and $f(x) \geq f(x^*)$ is not a minimizer of $F(x, x^*)$;
(3) if $x^*$ is not a global minimizer of $f$, then $F(x, x^*)$ has a local minimizer at some point $x' \in X$ with $f(x') < f(x^*) - \epsilon$, where $\epsilon > 0$ is a fixed small scalar.

We next show that the proposed single-parameter function $F(x, x^*, r)$ defined by formula (1) is a filled function in this sense. For any $x$ in a neighbourhood $O(x^*, \delta)$ of $x^*$, applying the mean-value theorem gives $F(x, x^*, r) < F(x^*, x^*, r)$, so $x^*$ is a strict local maximizer of $F(x, x^*, r)$. For any $x \neq x^*$ with $f(x) \geq f(x^*)$, the construction of $F$ yields $0 \notin \partial F(x, x^*, r)$, so such an $x$ cannot be a minimizer of $F(x, x^*, r)$. Finally, if $x^*$ is not a global minimizer, the derived inequalities show that $F(x, x^*, r)$ attains a local minimizer at a point $x'$ with $f(x') < f(x^*) - \epsilon$. This completes the proof.

Filled Function Algorithm
Based on the above theoretical analysis of the properties of the proposed $F(x, x^*, r)$, this section describes a new filled function algorithm for optimization problems with non-smooth objective functions.
Step 1: Starting from a selected point $x_1$, use a suitable local optimization algorithm for non-smooth problems to find a local minimizer $x_1^*$ of the primal problem $(P)$; go to Step 2.
Step 3: Construct the filled function $F(x, x_1^*, r)$ according to formula (1); go to Step 4.
Step 4: If $k > 2n$, go to Step 7; otherwise, from the current point $x$ search for a local minimizer $x_k$ of the filled function.
Step 5: If $x_k \in X$, set $k = k + 1$ and go to Step 4; otherwise, go to Step 6.
Step 6: Use $x$ to solve the original problem $(P)$ and find a new local minimizer $x_2^*$ with a smaller objective value; go to Step 7.
Step 7: Reduce the parameter $r$; when no further improvement is found, the current local minimizer is taken as the best solution of the problem $(P)$ and the iterative loop terminates.
Notes: (1) The method based on the proposed filled function is suitable for smooth cases as well. (2) The iterative loop of the proposed filled function method includes two phases (as shown in Figure 1): a local minimization phase and a global drop-off phase. In the former, a local minimizer $x^*$ can be found by any suitable local optimization method [11,14]; the second phase minimizes the considered filled function.

Numerical Experiment
Several numerical experiments, including one case applying the filled function algorithm to address an NPC issue, were conducted with the presented filled function method. All experiments were implemented in Fortran 95. To find a local minimizer of the optimization problem, the methods of [6,9] were adopted for the non-smooth case, while the penalty method and the conjugate gradient method were selected for the smooth case. Example 1 [6] is a two-dimensional box-constrained test problem, for which the presented method obtained the best results. One can also solve problem $(P)$ to obtain the solutions of a system of nonlinear equations (NE): assuming that (NE) has at least one root, every best solution $x^*$ of $(P)$ with $f(x^*) = 0$ obtained by our method is a root of (NE). The best solution $x^* = (1.452 \times 10^{-5},\ 6.8933045)$ of the above example was obtained successfully by means of the presented algorithm, and the corresponding computations are described in Table 2. Table 1 and Table 2 report the results of the experimental cases, which show that the constructed filled function approach can escape local minima to obtain the best solution with high accuracy for non-smooth optimization problems. The convergence curve of the presented algorithm is shown in Figure 2 to illustrate the convergence rate of the algorithm.
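The two-phase loop (local minimization of $f$, then minimization of a filled function to escape the current basin) can be illustrated on a toy one-dimensional problem. The `filled` form below is a simple piecewise construction chosen only for this demo, not the paper's formula (1); the parameter value `R`, the bounds and the perturbation size are all assumptions.

```python
import numpy as np
from scipy.optimize import minimize

# Toy objective with two basins: local min near x ~ 0.96, global min near x ~ -1.03
def f(x):
    x = np.atleast_1d(x)[0]
    return (x**2 - 1.0)**2 + 0.3 * x

BOUNDS = [(-2.0, 2.0)]
R = 20.0  # the single filled-function parameter (illustrative value)

def filled(x, x_star, f_star):
    """Illustrative one-parameter filled-type function: equals -||x - x*||^2 in
    basins no lower than f(x*) (so descent rolls away from x* across humps) and
    is pulled further down, by R*(f(x) - f(x*)) < 0, inside strictly lower basins."""
    x = np.atleast_1d(x)[0]
    gap = f(x) - f_star
    return -(x - x_star)**2 + (R * gap if gap < 0 else 0.0)

def local_min(fun, x0):
    # Bounded local minimization (finite-difference gradients)
    res = minimize(fun, [x0], method="L-BFGS-B", bounds=BOUNDS)
    return float(res.x[0])

def filled_function_search(x0, delta=0.1, max_cycles=10):
    x_star = local_min(f, x0)                      # phase 1: local minimization of f
    for _ in range(max_cycles):
        f_star, improved = f(x_star), False
        for step in (+delta, -delta):              # phase 2: escape via the filled function
            xp = local_min(lambda x: filled(x, x_star, f_star), x_star + step)
            x_new = local_min(f, xp)               # re-minimize f from the escape point
            if f(x_new) < f_star - 1e-8:
                x_star, improved = x_new, True
                break
        if not improved:                           # no lower basin found: stop
            break
    return x_star

best = filled_function_search(1.5)  # starts in the higher basin, escapes to the lower one
```

Starting from x = 1.5 the local phase lands in the shallow right-hand basin; the filled phase then descends past the hump into the deeper left-hand basin.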
These results indicate that the new algorithm avoids entrapment in local minima while maintaining a good convergence speed during the iterative loop.

Conclusion
It has been proven that the filled function approach, as one of the auxiliary function methods, can effectively solve global optimization problems. A new single-parameter filled function, simple in structure and easy to compute, is constructed, which fits both non-smooth and smooth global optimization problems. We studied the properties of the constructed filled function theoretically and then provided a new algorithm. Smooth and non-smooth cases were tested, and the computational results show that the theoretical analysis of the presented filled function is correct and that the designed filled function algorithm is applicable and effective, with satisfactory performance.
A machine learning framework for scRNA-seq UMI threshold optimization and accurate classification of cell types

Recent advances in single cell RNA sequencing (scRNA-seq) technologies have been invaluable in the study of the diversity of cancer cells and the tumor microenvironment. While scRNA-seq platforms allow processing of a high number of cells, uneven read quality and technical artifacts hinder the ability to identify and classify biologically relevant cells into correct subtypes. This obstructs the analysis of cancer and normal cell diversity, and rare and low-expression cell populations may be lost when arbitrarily high UMI cutoffs are set to filter out low-quality cells. To address these issues, we have developed a novel machine-learning framework that: 1. trains cell lineage and subtype classifiers using a gold standard dataset validated with marker genes; 2. systematically assesses the lowest UMI threshold that can be used in a given dataset to accurately classify cells; and 3. assigns accurate cell lineage and subtype labels to the lower read depth cells recovered by setting the optimal threshold. We demonstrate the application of this framework on a well-curated scRNA-seq dataset of breast cancer patients and two external datasets. We show that the minimum UMI threshold for the breast cancer dataset could be lowered from the original 1500 to 450, thereby increasing the total number of recovered cells by 49%, while achieving a classification accuracy of >0.9. Our framework provides a roadmap for future scRNA-seq studies to determine the optimal UMI threshold and accurately classify cells for downstream analyses.

Introduction
One of the key objectives in cancer genomics is characterizing the composition and diversity of cancer and normal cells in the tumor microenvironment (TME) (Ren et al., 2018).
Several studies have shown that the composition of the TME, such as the prevalence of infiltrating lymphocytes, the polarity of myeloid cells and signaling from stromal components, plays a critical role in the maintenance and progression of malignant cells, and can serve as an indicator of therapeutic potential and response (Gooden et al., 2011; Awad et al., 2018; Maibach et al., 2020; Wu et al., 2020; Geng et al., 2021). The study of the TME has been greatly enhanced by the introduction of single cell RNA sequencing (scRNA-seq), which enabled characterizing the diversity and phenotypes of cells in a tumor at a fine resolution (Rubio-Perez et al., 2021; Tang et al., 2022). Since the introduction of scRNA-seq more than a decade ago, several incremental technological advances have improved the accessibility and quality of transcriptomic analyses (Hwang et al., 2018; Chen et al., 2019). One such advance is the introduction of unique molecular identifiers (UMIs), which allow direct quantification of the available transcripts (Islam et al., 2013). While non-UMI scRNA-seq platforms such as Smart-Seq2 provide improved transcript coverage and a high level of mappable reads, UMI platforms such as 10X and Drop-seq benefit from limited amplification bias from highly abundant transcripts (Picelli et al., 2014; Zhang et al., 2019). The higher throughput of UMI platforms also improves the detection rates of rare cell populations, such as certain immune cells, within tumor samples (Azizi et al., 2018). Thus, scRNA-seq technologies have greatly enhanced the ability to characterize the diversity of cancer cells and the TME. However, the ability to accurately classify the cell types in an scRNA-seq dataset is often limited by technical factors, such as the read quality of the cells.
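The idea behind UMI counting, collapsing PCR duplicates so that each captured molecule is counted once per (cell, gene) pair, can be sketched with toy reads; the barcodes, gene names and UMI sequences below are made up for illustration.

```python
from collections import Counter

# Toy aligned reads as (cell_barcode, gene, UMI); repeated tuples are PCR duplicates
reads = [
    ("AAAC", "CD3D",  "GGTT"),
    ("AAAC", "CD3D",  "GGTT"),   # PCR duplicate of the read above
    ("AAAC", "CD3D",  "TTAA"),
    ("AAAC", "MS4A1", "CCGG"),
    ("TTTG", "CD3D",  "GGTT"),
]

# Collapse duplicates: each distinct (cell, gene, UMI) triple is one captured molecule
molecules = set(reads)

# UMI count matrix entries: molecules per (cell, gene) pair, not reads per pair
umi_counts = Counter((cell, gene) for cell, gene, umi in molecules)
```

Here cell AAAC gets a CD3D count of 2 (two distinct UMIs) even though three CD3D reads were sequenced, which is the amplification-bias correction UMIs provide.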
The quality control (QC) process in a typical scRNA-seq pipeline involves identifying and filtering out cells of low quality, typically based on the number of UMIs, the number of unique genes, and/or the percentage of mitochondrial DNA (mtDNA). The stress induced by droplet-based UMI methods introduces a challenge in ensuring that the UMIs map to healthy cells (Chittur et al., 1988). For example, cells with leaky or damaged membranes can show a drop in the number of UMIs and genes detected, while the fraction of UMIs mapping to mtDNA may become relatively high (Luecken and Theis, 2019). This complicates the distinction between truly low-quality cells and quiescent, small, and/or rare cell populations, thus creating a trade-off between cell quality and diversity during the QC process (Luecken and Theis, 2019). Since mitochondrial DNA content varies significantly across organisms and tissues, comprehensive analysis of these variables helps to establish universal organism- and tissue-specific threshold guidelines (Osorio and Cai, 2021). However, due to the variability in the number of UMIs and genes owing to biological and technical factors, a similar universal threshold cannot be established a priori. A probabilistic model was proposed to sort out low-quality cells, but its accuracy was limited by the prevalence of low-quality cells, which is usually unknown (Hippen et al., 2021). Additionally, several scRNA-seq pre-processing pipelines include different approaches to QC, including the option to view the UMI distribution per cell type using user-defined marker genes (McCarthy et al., 2017; Guo et al., 2021; Grandi et al., 2022). However, these approaches generally depend on the user's judgment to detect outliers (low-quality cells) from the read and/or gene distribution curves.
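A minimal sketch of the threshold-based QC step described above; the metric names, cutoffs and simulated values are illustrative and not tied to any particular pipeline. Because the relaxed filter is a superset of the strict one, lowering the UMI cutoff can only keep more cells, which is exactly the quality-versus-diversity trade-off discussed here.

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)

# Hypothetical per-cell QC metrics for 1000 droplets (illustrative only)
qc = pd.DataFrame({
    "n_umi":   rng.integers(100, 20000, size=1000),
    "n_genes": rng.integers(50, 6000, size=1000),
    "pct_mt":  rng.uniform(0, 40, size=1000),
})

def qc_filter(df, min_umi=1500, min_genes=200, max_pct_mt=20.0):
    """Boolean mask of cells passing simple threshold-based QC."""
    return ((df["n_umi"] >= min_umi)
            & (df["n_genes"] >= min_genes)
            & (df["pct_mt"] <= max_pct_mt))

strict = qc_filter(qc, min_umi=1500).sum()
relaxed = qc_filter(qc, min_umi=450).sum()  # lower UMI cutoff recovers more cells
```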
The scRNA-seq literature shows that the read-count threshold selected at QC can vary from as low as 100 up to 2,500 UMIs, yet the rationale for selecting such thresholds is usually missing (Gambardella et al., 2022; Gao et al., 2022; Karademir et al., 2022; Lian et al., 2022). Another approach, which involves an iterative process between the QC step and downstream analysis, was also proposed to improve the detection of low-quality cells (Luecken and Theis, 2019). However, the mechanism by which the downstream information can be used to optimize an initial read threshold is not yet defined. To address the lack of a systematic approach for determining an optimal read threshold for filtering cells and classifying cells with high accuracy, we have developed a novel machine learning framework that uses cell identity information collected from a high-quality gold standard. Using this approach, we can identify the lowest read cut-off that can be applied to scRNA-seq data while still accurately classifying cell lineages and subtypes. We used expert-labelled lineage and cell type identities from a gold standard breast cancer scRNA-seq dataset to train the predictive classifiers. We systematically downsampled the reads per cell in the gold standard dataset using a Poisson model and then applied the classifiers to predict cell types. We then calculated the prediction accuracies of the classifiers using the known identities of the cells. This allowed us to determine the optimal threshold at which sufficient biological information was retained. Using this approach, we rescued 49% more cells from the gold standard dataset, which is valuable for downstream analyses of the TME. Using two external datasets, we show that our approach can be applied to cells with low expression and to subtypes of major cell types, such as neutrophils and T-cell subtypes, respectively.
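The core of this procedure — train classifiers on labeled high-quality cells, Poisson-downsample a held-out set, and score the predictions at each target depth — can be sketched as follows. This is an illustrative Python re-implementation on toy data (the published pipeline uses R with SingleR, SingleCellNet, and pROC); the random-forest stand-in, the per-cell proportion normalization, and the toy marker signal are our assumptions, not the original code.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Toy gold standard: 600 "cells" x 50 "genes"; lineage 1 over-expresses
# the first five genes, mimicking a marker-gene signal.
labels = rng.integers(0, 2, size=600)
counts = rng.poisson(5.0, size=(600, 50))
counts[:, :5] += 10 * labels[:, None]

def to_prop(m):
    """Depth-normalize counts to per-cell proportions (our stand-in for
    the normalization the real classifiers rely on)."""
    return m / np.maximum(m.sum(axis=1, keepdims=True), 1)

# 50/50 split into training and test sets, as in the pipeline.
X_tr, X_te, y_tr, y_te = train_test_split(
    counts, labels, test_size=0.5, random_state=0, stratify=labels)
clf = RandomForestClassifier(n_estimators=200, random_state=0)
clf.fit(to_prop(X_tr), y_tr)

# Poisson-downsample the held-out cells to each target depth, then score.
aucs = {}
for target in (50, 200, 800):
    lam = target / X_te.sum(axis=1, keepdims=True)     # per-cell Poisson rate
    X_down = X_te * rng.poisson(lam, size=X_te.shape)  # thinned counts
    aucs[target] = roc_auc_score(y_te, clf.predict_proba(to_prop(X_down))[:, 1])
print(aucs)  # classification accuracy recovers as simulated depth increases
```

The threshold search then amounts to picking the smallest target depth at which the score clears a pre-chosen accuracy bar.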
Importantly, our framework can be extended to any scRNA-seq dataset where users seek to rescue and classify additional cells at optimal read depths.

Analysis workflow

The analysis pipeline consists of the following main steps (Figure 1). We applied a stringent QC threshold on the FELINE dataset (raw UMIs) to filter for high-confidence, high-quality cells. A combination of unsupervised and supervised expert-led approaches was used to generate the high-quality cell lineage and subtype labels, which were used as the gold standard for downstream analysis. For each dataset, we first split it into training and test sets (50/50). Next, the training set was used to train the classification models to predict cell lineages and subtypes. The test set was then downsampled using a Poisson model at different target UMI thresholds. We then assessed the accuracy of the classification models on the test set at each target UMI threshold. The analysis steps are described in more detail in the subsections below.

Gold standard scRNA-seq dataset preprocessing

We used the FELINE clinical trial scRNA-seq dataset, which spans 35 patients with ER-positive, HER2-negative early-stage breast cancer (Griffiths et al., 2021). The patient samples were processed on the 10X Chromium platform and sequenced using 150-bp paired-end sequencing at a median depth of 34,000 reads per cell (Griffiths et al., 2021). The reads were aligned to a reference genome (GRCh38) using the Bioinformatics ExperT SYstem and CellRanger v.3.0.2 pipelines (Chen and Chang, 2017). FeatureCounts was then used to generate a matrix of gene transcript UMIs for each cell, which we refer to as the "original dataset" in this manuscript (Liao et al., 2014).
To generate the gold standard dataset, we applied a stringent QC filter which retained cells with >1,500 reads, 500-7,000 unique genes, and less than 20% mitochondrial content, as reported in the original study (Griffiths et al., 2021). After filtering out "low-quality" cells and doublets, we retained 176,644 "high-quality" cells. To generate a Uniform Manifold Approximation and Projection (UMAP), we log-normalized and scaled the count matrix and ran principal component analysis (PCA) on the 2,000 most highly variable genes using the R package Seurat v.4.1.1 (Butler et al., 2018).

FIGURE 1: Analysis plan workflow. The flow chart shows the process of initial QC and the generation of gold standard cell type annotations from the FELINE dataset. This is followed by a 50/50 split of a subsample into training and test sets for both the SingleR and SingleCellNet classifiers for all datasets. The test set counts were then transformed using a Poisson model at different thresholds, which were then used to determine the classification accuracy of lineage and cell type labels.

We then constructed the K-nearest-neighbor graph using Seurat's FindNeighbors function on 10 principal components, which was used to construct the UMAP. We then used SingleR to generate a preliminary cell type label for each cell using the Human Primary Cell Atlas (HPCA) as a reference (Mabbott et al., 2013; Aran et al., 2019). These labels were used to annotate the clusters as either epithelial, stromal, or immune based on the most frequent SingleR cell type labels. The SingleR labels were validated using lineage marker gene expression for epithelial cells (KRT19, CDH1), stromal cells (FAP, HTRA1), and immune cells (PTPRC) (Griffiths et al., 2021).
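The normalize-scale-PCA-neighbors sequence has a direct analogue in Python. The sketch below mirrors the defaults of Seurat's NormalizeData, ScaleData, RunPCA, and FindNeighbors steps on a toy matrix using scikit-learn stand-ins; it is not the original R code, and the highly-variable-gene selection is omitted for brevity.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.neighbors import NearestNeighbors

rng = np.random.default_rng(1)
counts = rng.poisson(3.0, size=(200, 500)).astype(float)   # cells x genes

# Log-normalize (counts-per-10k, then log1p) and z-score each gene,
# mirroring Seurat's NormalizeData and ScaleData defaults.
cp10k = counts / counts.sum(axis=1, keepdims=True) * 1e4
logn = np.log1p(cp10k)
scaled = (logn - logn.mean(axis=0)) / (logn.std(axis=0) + 1e-9)

# PCA on the scaled matrix, then a k-nearest-neighbor graph on the
# top 10 principal components (the FindNeighbors step before UMAP).
pcs = PCA(n_components=10, random_state=0).fit_transform(scaled)
knn_graph = NearestNeighbors(n_neighbors=20).fit(pcs).kneighbors_graph(pcs)
print(pcs.shape, knn_graph.shape)   # (200, 10) (200, 200)
```

The resulting sparse neighbor graph is what a UMAP or graph-clustering step would consume downstream.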
SingleR cell type labels were also validated using cell type marker gene expression for macrophages (CSF1R, CD163), T-cells (CD2, CD247), B-cells (MS4A1, IGHM), fibroblasts (COL5A1, FBLN1), endothelial cells (VWF), pericytes (RGS5), and adipocytes (CIDEA). To identify putative cancer cells, we used InferCNV, which predicts copy number alterations based on the positional gene expression intensity across all chromosomes (Korsunsky et al., 2019). We used stromal and immune cells as normal references for InferCNV and labelled epithelial cells with a positive copy number alteration (CNA) profile as cancer cells (Griffiths et al., 2021). All downstream analyses excluded non-malignant epithelial cells. The raw (un-normalized) UMI count matrix of the gold standard dataset was used for model training and assessment. A random unbiased subsample of the gold standard dataset (n = 35,000) was used to create a Seurat object for downstream analysis. We removed cells with >15,000 reads to account for any missed doublets.

Low-quality cell subset

For "low-quality" cells that were excluded from the gold standard dataset, we predicted the cell type labels using SingleR with the Human Primary Cell Atlas (HPCA) as a reference (Mabbott et al., 2013; Aran et al., 2019). To generate lineage labels, we aggregated cell type predictions into lineage labels as follows: epithelial (epithelial cells), stromal (fibroblasts, endothelial cells, chondrocytes, osteoblasts, smooth muscle cells), and immune (T-cells, B-cells, macrophages, monocytes, NK cells, neutrophils). To study the outcome of the initial and optimized thresholds on the cell retention rate, we combined the gold standard subsample (n = 35,000) with a low-quality subsample (n = 35,000) for a total of 70,000 cells.

Training lineage and cell subtype classification models

We used two different multi-class prediction algorithms for the analysis, SingleCellNet (SCN) and SingleR.
SCN is a Random Forest classifier developed for scRNA-seq datasets and implemented as the R package singleCellNet v.0.1.0 (Tan and Cahan, 2019). SingleR is a reference-based cell type classifier in which, after an internal marker gene identification step, cell identity is determined by the Spearman correlation between the expression profile of the unknown cell and the reference samples, e.g., HPCA (Aran et al., 2019). Because training a random forest classifier on all genes is infeasible, we applied Seurat's FindAllMarkers function (test.use = "negbinom", min.pct = 0.5, max.cells.per.ident = 2000, logfc.threshold = 0.5) to generate lineage and cell type marker gene sets. For both the lineage and cell type levels, we sampled 400 cells per label using the splitCommon function implemented in the R package singleCellNet v.0.1.0. The lineage and cell type samples were split 1:1 into training and test sets. For the SCN classifier, the UMI matrices of both training sets were filtered for the corresponding marker gene set identified previously. The SCN classifier was trained using the scn_train function (nTopGenes = 100, nRand = 50, nTrees = 1000, nTopGenePairs = 200) implemented in the singleCellNet package. In contrast, the SingleR classifier was trained on all available genes in the UMI matrices, without filtering, using the trainSingleR function implemented in the R package SingleR v.1.6.1.

Systematic downsampling of reads and genes

To simulate reduced average reads per cell at a pre-specified threshold, we downsampled the reads from high-quality cells. We used a Poisson distribution model to calculate a transformation factor. The probability mass function for a non-negative integer x is

p(x) = (λ^x e^{-λ}) / x!,

where λ is the Poisson rate. For each cell, we generated a vector of random Poisson deviates of length equal to the number of genes, with λ = target threshold / total reads. The reads for each gene were multiplied by the corresponding deviate to reduce the total counts per cell to the desired threshold.
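Concretely, the per-cell Poisson transformation can be sketched as follows. This is an illustrative Python version (the original was implemented in R); the helper name and the early exit for cells already at or below the target are our assumptions.

```python
import numpy as np

def downsample_cell(counts, target, rng):
    """Scale one cell's UMI vector toward `target` total reads.

    Each gene's count is multiplied by an independent Poisson(lambda)
    deviate with lambda = target / total reads, so the expected total
    after the transformation equals the target threshold.
    """
    counts = np.asarray(counts)
    total = counts.sum()
    if total <= target:          # cell already at or below the threshold
        return counts.copy()     # (our assumption: leave such cells as-is)
    lam = target / total
    return counts * rng.poisson(lam, size=counts.size)

rng = np.random.default_rng(42)
cell = rng.integers(0, 20, size=2000)        # toy cell, ~19,000 total UMIs
down = downsample_cell(cell, target=450, rng=rng)
print(cell.sum(), down.sum())                # downsampled total is ~450
```

Because the deviates are drawn independently per gene, individual totals scatter around the target with Poisson-like variance, which matches the behavior reported for Figure 3A.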
To downsample the genes of the FELINE dataset, we first converted the UMI matrix into a binary expression matrix. For cells with n ≥ 1, we randomly switched n genes from expressed to not expressed (1 → 0), where n is the number of expressed genes above the test threshold. Each transformed matrix was then used to assess the classification accuracy at the corresponding threshold. In the non-binary experiments, the remaining binary matrix was converted back to a non-binary UMI matrix for assessment, while in the binary experiments both the training and downsampled matrices were binary.

Model assessment

Using the trained SCN and SingleR models, we generated the predicted labels for all downsampled matrices using the scn_predict and classifySingleR functions, respectively. We then used the true labels to calculate the Area Under the Receiver Operating Characteristic Curve (AUROCC) for both models at each threshold using the R package pROC v.1.18.0.

Cell retention rates in the gold standard scRNA-seq dataset

The diversity of cell populations within the TME introduces a challenge when applying a UMI threshold across tumor samples: a stringent, high UMI threshold would remove most of the low-quality cells but also lose important populations with low reads, such as immune cells. In contrast, a lenient threshold would retain the low-UMI populations, but this could also increase the noise and possibly skew the results of the downstream analysis. In addition, the QC step is usually performed early in the analysis pipeline, when biological information (cell identities) is not yet available. Thus, a biology-driven revision of QC thresholds can easily be overlooked. In the FELINE dataset, we had used 1,500 reads as the threshold for low-quality cells (Figure 1) (Griffiths et al., 2021). To construct the gold standard dataset, we used InferCNV to identify cancer cells and SingleR to predict normal cell identities, which were verified by marker gene expression (Supplementary Figures S1A,B).
After meticulous cell type labelling of the high-quality cells, a closer view of the UMI distribution across cell lineages showed a high retention of epithelial cells (87%) post-QC. In contrast, only around half of the stromal and immune cells were retained (Figure 2). As breast cancer cells are of epithelial origin (Noureen et al., 2022), it is expected that actively proliferating cancer cells were driving the higher average UMI count among epithelial cells (5,354 UMIs) compared with stromal (3,114 UMIs) or immune cells (2,154 UMIs) (Figure 2). In addition, at the finer cell subtype annotation level, two-thirds of macrophages/monocytes were retained, while only a third of the sequenced population of T and B lymphocytes was retained (Figure 2). Since B- and T-lymphocytes have the lowest average UMIs per cell in this cohort (1,813 and 1,639, respectively), the initial QC threshold retained only a small fraction of these cells for downstream analyses, suggesting that an optimization of the initial threshold might be required.

FIGURE 2: Post-QC retention rate varies across different lineages and cell types in the FELINE dataset. Density plots depict the reads-per-cell distribution across different lineages and cell types within a subsample of the original dataset (n = 70,000). The initial QC count cut-off (1,500 reads), shown as a dashed line, splits the fraction of cells considered "high-quality" (highlighted in blue) from the cells considered "low-quality" (highlighted in red) across different cell populations. The average count and the fraction of "high-quality" cells are annotated for each population.

Machine learning framework guides threshold optimization and accurate classification

We developed a novel framework that systematically identifies the lowest read depth threshold that can be used to accurately classify cell lineages and subtypes.
Our approach trained classifiers for lineages and subtypes on a training subset of the gold standard dataset, and then predicted the cell lineages and subtypes of a held-out test or validation subset from the gold standard dataset at progressively diminished read depths. By following this approach, we could identify the minimum average number of reads required to accurately classify cells. We used the SCN and SingleR multi-class prediction algorithms to determine the lowest UMI threshold at which sufficient biological signal was retained. We then applied a Poisson model to the test datasets to downsample to a set of desired read thresholds: 0, 50, 100, 150, 200, 250, 300, 350, 400, 450, 500, 600, 700, 800, 900, 1000, 1500, 2000, 3000, and 4000 UMIs. Following the transformation, the mean number of UMIs in the downsampled cells was close to the desired UMI thresholds (Figure 3A). Indeed, the reads in the downsampled cells followed a Poisson distribution, as the variance increased at higher thresholds. Noticeably, the number of unique genes followed a Poisson distribution as well (Figure 3B). We used the trained classifiers to predict lineage and cell type labels for the downsampled cells. The ground truth and predicted labels were used to generate a confusion matrix to calculate the area under the receiver operating characteristic curve (AUROCC) at each threshold. We considered AUROCC values above 0.9 to be accurate classifications. The SingleR classifier showed accurate prediction of both lineages and cell types at an average read depth of 450 UMIs or ~200 genes (Figure 3C). However, the model progressively lost its predictive ability below the 250 UMI threshold. On the other hand, the SCN classifier showed accurate prediction for both classes at an average read depth of 1,500 UMIs or ~650 genes, while its predictive ability was gradually lost at thresholds below 800 UMIs (Figure 3D). The accuracy of the SingleR classifier plateaued at around the 350 UMI threshold.
However, the accuracy of the SCN classifier increased roughly linearly with increasing thresholds. As expected, almost all the AUROCC values for the broader lineage class were equal to or higher than those for the narrower cell type class. It is worth noting that the SingleR classifier showed an overall higher classification accuracy, which we attribute to the fact that SingleR calculates the Spearman correlation between each cell's expression profile and the reference cells regardless of expression values, while SCN only considers expressed genes, i.e., non-zero expression values. Consequently, we selected the conservative 450 UMIs from the more accurate classifier at the finer cell type resolution as the optimized threshold.

FIGURE 4: Loss of distinct cell clusters on UMAP below 450 UMIs in the FELINE dataset. Dimension reduction using Uniform Manifold Approximation and Projection (UMAP) shows that as count thresholds fall below 450 reads, a gradual loss of the distinct cell clusters is observed at the lineage (A) and cell type (B) levels (n = 1,500).

In addition, we performed downsampling of gene numbers by dropping random genes at different maximum-number-of-genes thresholds (Supplementary Figures S2A,B). As with the UMI downsampling, accurate classification (AUROCC >0.9) of lineages and cell types was achieved using 200 and 600 genes for the SingleR and SCN classifiers, respectively (Supplementary Figures S2C,D). We then applied the same transformation to a binary count matrix for the training and test sets (Supplementary Figures S3A,B). Both classifiers yielded performance similar to the non-binary counts at 250 and 450 genes for SingleR and SCN, respectively (Supplementary Figures S3C,D). Given the typically observed correlation between UMIs and the number of genes, it was not surprising that similar thresholds were obtained using the UMI-based and gene-number-based approaches.
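The random gene removal used in these gene-number experiments can be sketched per cell as follows (illustrative Python; the helper name is ours, and in the non-binary variant the surviving genes simply keep their original counts, as described above).

```python
import numpy as np

def drop_genes(counts, max_genes, rng):
    """Randomly silence expressed genes down to at most `max_genes`.

    Genes with count > 0 beyond the threshold are chosen at random and
    zeroed (1 -> 0 in the binary view); in the non-binary experiments
    the surviving genes keep their original UMI counts.
    """
    out = np.asarray(counts).copy()
    expressed = np.flatnonzero(out > 0)
    n_drop = expressed.size - max_genes
    if n_drop > 0:
        out[rng.choice(expressed, size=n_drop, replace=False)] = 0
    return out

rng = np.random.default_rng(7)
cell = np.array([3, 0, 1, 5, 0, 2, 1, 4])    # 6 expressed genes
thin = drop_genes(cell, max_genes=4, rng=rng)
print((thin > 0).sum())                      # -> 4 expressed genes remain
```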
Loss of distinct clustering below the optimized threshold

To examine the effect of downsampling on the low-dimensional data structure, we analyzed the downsampled cells from the 1,500, 450, 350, 250, and 150 read thresholds using uniform manifold approximation and projections (UMAPs). Similar to the initial 1,500 UMI threshold, the cells at the 450 UMI threshold showed distinct, separate clusters at the lineage level (Figure 4A). As the threshold was reduced, the inter-cluster distances gradually decreased. At the cell type level, the cells at the 450 threshold not only clustered by lineage but also retained a rational biological hierarchy, as shown by the subtype cluster grouping (Figure 4B). As with the lineage level, the distinct clustering was gradually lost at lower thresholds (Figures 4A,B). This suggests that the biological information retained at as low as 450 reads-per-cell maintains cell identity in our dataset.

Optimized QC threshold rescues a substantial number of cells with low transcription levels

To increase the number of stromal and immune cells available for downstream analysis, we applied the optimized threshold of 450 reads-per-cell to a subsample of the original dataset (n = 70,000). Relative to the number of cells retained by the initial threshold of 1,500 reads, the optimized threshold rescued an additional 8,813 stromal cells and 6,535 immune cells, an increase of 77% and 113%, respectively (Figures 5A,B). The gain was even more prominent among the cells with low average reads, as 2,976 T-cells and 1,298 B-cells were rescued, which is 176% and 151% more cells, respectively, compared to the populations retained by the initial threshold. The gain among fibroblasts and macrophages/monocytes was also notable, as the initial populations increased by more than 40% after applying the optimized threshold.
The inclusion of the rescued cells markedly improved the representation of diversity across all tumor samples, previously dominated by epithelial cells (Figure 5C). With the new thresholds, we observed a notable gain in lymphocytes across several tumors. We also noted that the optimized threshold led to the gain of 10 additional tumor samples that had been excluded by the initial threshold. Thus, threshold optimization allowed the re-evaluation of cells initially penalized and discarded for their natively low expression. These rescued cells can then be incorporated into downstream analyses to characterize the TME.

Applications in datasets containing cells with low expression and fine-grain labels

To test the applicability of our approach to cell types with low gene expression, we used the Combes dataset (see Methods), which contains cell types with low expression levels, including neutrophils and platelets. As with the FELINE dataset, we applied the Poisson-based transformation to systematically downsample the counts in the Combes dataset. The resultant UMI means were reflective of the desired target UMI thresholds (Figures 6A,B). Using the original published cell type labels as ground truth, the cell type classification AUROCC for the untransformed counts was about 0.9, reflecting the low average read depth of this dataset (1,599 UMIs) and the very low coverage of some cell types, such as neutrophils (621 UMIs) and platelets (740 UMIs). SingleR achieved an AUROCC >0.7 for this dataset at 250 UMIs or ~90 genes, while SCN achieved this level of accuracy at 350 UMIs or ~115 genes (Figures 6C,D). Similarly, we used the 10X PBMC test dataset (see Methods for details) to demonstrate the application of the framework to cell types with fine-grain labels. The PBMC dataset (average 2,371 UMIs) contains fine-grain classifications of monocytes and T cells.
In addition to CD14+ and FCGR3A+ monocytes, this dataset contains different T cell subtypes such as naïve CD4+, memory CD4+, and CD8+ T cells. Again, we applied the Poisson-based transformation to systematically downsample the counts and obtained resultant UMIs that were reflective of the desired target thresholds (Figures 6E,F). SingleR classified cells with an AUROCC >0.7 at a threshold of 150 UMIs or ~70 genes, while the SCN classifier achieved this level of accuracy at 400 UMIs or ~170 genes (Figures 6G,H). Taken together, these results demonstrate that our framework can be applied to datasets containing cell types with low expression and fine granularity.

Discussion

Single cell RNA-seq of tumor samples has proved indispensable for TME studies. It has allowed researchers to perform analyses such as in-depth classification of tumor composition, identification of the key signaling mechanisms operating in cancer and non-cancer cells, and characterization of the heterogeneity and evolution of cancer cells, which were not previously feasible using bulk RNA sequencing (Nath and Bild, 2021). However, the detection of rare cell populations among the diverse TME is limited by the number of cells the scRNA-seq platform can handle. The introduction of UMI-based platforms allowed for a higher cell capacity, which better captures the diversity of the TME. However, arbitrary UMI thresholding during the standard scRNA-seq QC risks losing a considerable number of cells, such as immune cells with low expression. This can lead to an inaccurate assessment of the composition of the TME and overlook critical associations between diversity and tumor traits. For example, the presence of cytotoxic T cells in the TME is strongly associated with immunotherapy response in multiple cancers (Sade-Feldman et al., 2018; Kim et al., 2021; Nagasaki et al., 2022). Therefore, the assessment of immune response based on the diversity of infiltrating lymphocytes could be improved by optimizing the UMI thresholds.
Recent studies characterizing the communication networks between individual cell types within breast tumors have revealed that unique signaling networks operate in tumors resistant or sensitive to cell cycle inhibitor therapy (Griffiths et al., 2022). Resolving these communication links also requires optimizing the UMI thresholds to ensure that the TME measured using scRNA-seq reflects the true composition of the tumor. To develop a framework that enables the optimization of UMI thresholds, we used a systematic approach to downsample UMIs and accurately classify cells by lineage and cell type. We trained two classifiers, SCN and SingleR, on an expert-labelled subsample of our gold standard FELINE dataset, which was originally filtered using a stringent UMI threshold. We then downsampled the FELINE dataset using a Poisson transformation and evaluated the classification accuracies at various thresholds. Using a conservative AUROCC >0.9 as the cut-off for accurate classification in the FELINE dataset, we determined a significantly lower new threshold at 450 UMIs, corresponding to slightly more than 200 genes, compared to the initial threshold of 1,500 UMIs. The optimized threshold retrieved a substantial number of additional cells that were initially discarded during filtering. The gain was prominent among cells with lower average reads than cancer cells, such as stromal and immune cells. Notably, the B- and T-lymphocyte populations increased by more than 150% after applying the optimized threshold. We also noticed that the downsampled cells at this threshold retained distinct clustering patterns across lineage and cell type groups on the UMAP, similar to the gold standard dataset. However, this was not the case at lower thresholds, where the inter-cluster distances were gradually lost.
We also explored gene downsampling using random gene removal at different thresholds with binary and non-binary input, which resulted in an optimal threshold similar to that of the UMI downsampling. We further extended the application of our framework to two additional datasets. Analyses with the Combes dataset revealed that cells with low average expression, such as neutrophils, can also be used in our framework to optimize thresholds. Similarly, analyses with the PBMC dataset showed that fine-grain classification of cells can be accommodated in the framework. While this approach improved the diversity of major lineages and cell types in the FELINE, Combes, and PBMC datasets, its current application depends on the accuracy of the original labeling of cell identities. This can be challenging for some cell populations, such as cells that lack established RNA markers. Currently, the framework relies on reliable labeling of cell types among the high-quality cells. A future addition to this framework could integrate additional biological information, such as pathway-level information and molecular signatures, to identify biologically relevant clusters and improve classification accuracy. Our machine learning framework provides a systematic approach to optimize the initial UMI/reads threshold commonly used in scRNA-seq pipelines based on the cell type annotations of cells with high read depth. This is especially valuable in rescuing cells with natively low expression, such as immune cells. Optimizing the QC reads threshold significantly improves the efficiency of TME cell diversity studies while maintaining accurate classification of lineages and cell types. Notably, this framework can be applied to any scRNA-seq dataset where rescuing rare or low-expression cells is crucial for downstream analysis.

Data availability statement

The Combes et al. data are available through the Gene Expression Omnibus under accession code GSE163668. The PBMC data are available at https://www.10xgenomics.com/resources/datasets.
Other datasets and code used in this analysis are available on our GitHub repository at https://github.com/ibishara/scRNA-seq_threshold_optimization.

Funding

Research reported in this publication was supported by the National Cancer Institute (NCI) of the National Institutes of Health (NIH) under award numbers U54CA209978 and U01CA264620 awarded to AHB, and a pilot grant awarded to AN under U54CA209978. Work performed in the Integrative Genomics Core at City of Hope was supported by the NCI of the NIH under award number P30CA33572.
A Novel Role of Lamins from Genetic Disease to Cancer Biomarkers

Lamins are the key components of the nuclear lamina and, by virtue of their interactions with chromatin and binding partners, act as regulators of cell proliferation and differentiation. Of late, the diverse roles of lamins in cellular processes have made them the topic of intense debate for their role in cancer progression. Observations of aberrant localization or misexpression of the nuclear lamins in cancerous tissues have often led to the speculative role of lamins as a cancer risk biomarker. Here we discuss the involvement of lamins in several cancer subtypes and their potential role in predicting tumor progression.

Introduction

Animal cell nuclei have the characteristic features of a well-defined nuclear architecture and chromatin compartmentalization. The complex nuclear organization has been attributed to the increase in genome complexity and the need for spatiotemporal regulation of gene expression in higher vertebrates. The typical metazoan nucleus comprises three principal components: the nuclear pore complex (NPC), the nucleoplasm, and the lamina. The lamina is the meshwork of proteins found on the nucleoplasmic face of the inner nuclear membrane. A family of type V intermediate filament (IF) proteins called lamins is the principal component of this lamina. This family of proteins is found among all metazoans except Hydra and arthropods, and, unlike other members of the intermediate filament family, it is localized exclusively in the nucleus. 1 The absence of lamins or their homologs in plants and yeast supports the notion that these proteins evolved during the transition from open to closed mitosis. 2 However, recent studies show that plants have a substitute for lamin proteins. Even though these lamin-like proteins do not show sequence similarity, their secondary structure, nuclear distribution, and influence on nuclear shape and size suggest that they are functional lamin analogs.
3 In addition, homologs of metazoan lamins and the lamin gene tree support the vertical evolution of lamins from the last eukaryotic common ancestor. 4 As we move higher up the evolutionary scale, the number and complexity of lamin isoforms increase, from the single lmn-1 of C. elegans to two (Dm0 and lamC) in Drosophila and three (LMNB1, LMNB2, and LMNA) in humans. 5,6 The evolutionarily conserved lamins are subdivided into A- and B-types based on their biochemical properties, and owing to its conserved expression from worms to humans, lamin B is considered the evolutionary precursor of lamin A. Among the lamin subtypes, B-type lamins are ubiquitously expressed and are considered essential for cell survival. A-type lamins, however, show a spatiotemporal expression pattern during development and are expressed mainly in differentiated cells and in some adult stem cells, while being considered absent in embryonic stem cells. 7,8 Lamin A and lamin C (collectively referred to as lamin A/C) are alternative splice variants of the LMNA gene, while lamin B1 and lamin B2 are transcribed from LMNB1 and LMNB2, respectively. Several studies have shown tissue-restricted expression of LMNA, of the minor splice variants lamin AΔ10 and lamin C2, and of lamin B3, a splice variant of LMNB2, indicating the specialized roles of these proteins. 9 Lamins C2 and B3 are germ-cell specific, whereas lamin AΔ10 has been detected in cell lines derived from colon, lung, and breast carcinomas. Lamins are the major building blocks of nuclear structure and shape, and they provide mechanical stability to the cell nucleus by protecting it from mechanical forces, especially in load-bearing cells such as muscle. They also directly or indirectly regulate gene expression, differentiation, DNA repair, and apoptosis. They bind to chromatin in a sequence-independent manner or through their binding partners and are determining factors for chromatin positioning in the nucleus.
10-12 Recently, disease-causing A-type lamin mutants have been reported to be involved in regulating the proteolytic degradation of proteins, affecting protein stability and the stability of nuclear speckles. 13 Lamins are known to interact with the cytoskeleton through the nuclear membrane SUN- and KASH-domain-containing proteins and are crucial for mechanotransduction and the mechanical stability of the cell. Studies with lamin A/C-deficient embryonic fibroblasts have shown impaired mechanotransduction and decreased mechanical stiffness. 14 The human LMNA gene has 12 exons, and studies spanning over two decades have reported more than 300 disease-causing mutations throughout the gene. 9 These mutations lead to a wide variety of pleiotropic disorders with varying penetrance, which, collectively with B-type lamin-associated diseases, are referred to as laminopathies. 13 Depending on the mutation involved, a laminopathy can affect a particular type of tissue or may manifest as a complex disorder affecting several tissues. The majority of the laminopathies affect tissues such as muscles, cardiomyocytes, adipocytes, and neurons, which are mesodermal in origin. Loss of binding of mutant lamin A/C to pRb, cyclin D3, and emerin has been attributed to defective myoblast differentiation due to the reduction in MyoD, desmin, and M-cadherin, thereby leading to muscle degeneration/dystrophy. 9 Two of the most studied laminopathies involving muscle tissue are Emery-Dreifuss muscular dystrophy (EMD) and limb-girdle muscular dystrophy (LGMD-1B). Late-onset dilated cardiomyopathy with conduction-system disease (DCM-1A), the Charcot-Marie-Tooth disorder, a neuropathy with peripheral nerve involvement (CMT-2B1), Dunnigan-type familial partial lipodystrophy (FPLD), and the systemic disorders Hutchinson-Gilford progeria (HGPS), mandibuloacral dysplasia (MAD), and restrictive dermopathy (RD) have also been attributed to mutations in A-type lamins or their binding partners.
On the other hand, mutations in B-type lamins are usually lethal and hence very rare. The only reported cases are duplication of LMNB1, leading to adult-onset autosomal dominant leukodystrophy (ADLD), a neurodegenerative disorder characterized by myelin loss in the central nervous system, 15 and, recently, a LMNB1 polymorphic variant implicated as a modifier of neural tube closure defects. 16 Individuals with heterozygous LMNB2 mutations are found to be susceptible to acquired partial lipodystrophy. Despite extensive studies, a comprehensive explanation for the tissue-restricted phenotypes and their mechanism remains elusive.

Lamins and cancer

Many researchers and cancer biologists have described the relationship between aberrant lamin expression and cancer subtype by investigating changes in lamin expression profiles across diverse types of cancer. Improper expression of lamins and altered interactions with other proteins are often present in tumor cells. For example, A-type lamins interact with a number of transcription factors and regulate both differentiation and proliferation in cells. Lamin A/C binding regulates the function of emerin, pRb, c-Fos, SREBP1, and MOK2, and plays a role in p53, MAPK, ERK1/2, Wnt, TGF-β, Notch, and NF-κB signaling. Studies with lamin A mutant overexpression have demonstrated a role for lamin A in myogenesis, adipogenesis, and osteogenesis. Expression of lamin A mutants in adult stem cells shows diminished potential to differentiate and regenerate tissues. Lamin A also regulates gene expression by tethering chromatin to the nuclear periphery. Reduced or null expression of A-type lamins often correlates with low levels of differentiation and higher proliferation in cells. Furthermore, loss of lamin A leads to nuclear lobulations and changes in nuclear shape. 17,18 Cancer cells are often characterized as highly proliferative with unregulated signaling, having irregular nuclear morphology and properties resembling stem cells.
19,20 The diverse functions and wide interactome of lamins have often led to speculation about the role of lamins as a cancer risk biomarker that could predict the probability of tumor progression and therefore prognosis. In the following sections we emphasize the involvement of lamins in several cancer subtypes.

Role of lamins in colorectal cancer

Colorectal cancer is the third major type of cancer in both men and women, not only in the United States but also worldwide. Aberrant expression or misexpression of lamins is also present in these types of cancer. A recent investigation provided a comprehensive link between lamin A/C expression, patient prognosis, and colorectal cancer (CRC) progression by comparing colorectal cancer and normal colon tissues for lamin expression. Using the Cox proportional hazard ratio (HR) method, patients whose tumors tested positive for A-type lamin expression were observed to have poor prognosis, with an almost two-fold increase in mortality compared with patients with A-type lamin-negative tumors. Lamin A/C expression is largely absent from the cells of the colonic crypts, except for a few basal crypt cells, which are believed to be stem cells. Ectopic expression of GFP-lamin A in colorectal cancer cells revealed increased cell motility, accompanied by up-regulation of T-plastin, an actin-bundling protein, and down-regulation of E-cadherin, a protein involved in cell adherence. The study implicates lamin A/C expression as a significant risk indicator of colorectal cancer-related mortality, probably due to an increase in migratory and stem cell-like properties. 21 Another investigation revealed a correlation between A-type lamin expression and disease recurrence/clinical outcome in stage II and III colon cancer patients.
Using paraffin-embedded tissues and tissue microarrays, the authors observed that low levels of lamin A/C expression correlated with high disease recurrence, and suggested that these patients may benefit from adjuvant chemotherapy. 22 They also observed that microsatellite-stable tumors exhibited low levels of LMNA expression more frequently than microsatellite-instable tumors. Moreover, a recent study of the role of the calcium-binding protein S100A6 and its interacting protein β-catenin, a Wnt pathway effector, in colorectal cancer tissues found that high levels of S100A6 expression are observed in metastatic versus non-metastatic human colorectal cancer cell lines. S100A6 was also established as a novel interacting partner of lamin A/C, hence potentially linking lamin A/C to colorectal cancer development and progression. 23 This clearly indicates that improper regulation of lamins contributes to various types of gastrointestinal cancer; other forms of gastrointestinal cancer are discussed in a later section of this review.

Role of lamins in pancreatic cancer

A recent study designed to examine the mechanism of betulinic acid treatment of pancreatic cancer discovered lamin B1 overexpression in pancreatic cancer. The drug betulinic acid shows antitumor properties by down-regulating lamin B1 expression. Lamin B1 overexpression could serve as a biomarker in pancreatic cancer, as the study found it to be associated with a more malignant form of cancer with poor patient prognosis. 24

Role of lamins in other gastrointestinal cancers

The various intermediate pathological steps of gastrointestinal cancer are easily identifiable, making it easier to observe the changes in nuclear lamin expression accompanying cancer progression. 25 However, there have been only a few studies linking expression of nuclear lamins to progression of gastrointestinal cancer.
In one such study it was found that in gastrointestinal neoplasms both types of lamins show reduced expression, with A-type lamins showing a more pronounced effect in the case of gastric dysplasia. However, no such reduction was observed in the stages of intestinal metaplasia and gastric atrophy. This reduction in expression was accompanied by aberrant, cytoplasmic detection of lamins by immunolabelling. Comparative studies with other solid tumors showed reduced expression of both lamin A/C and lamin B1, frequently in squamous cell carcinoma and adenocarcinoma of the esophagus, cervical and uterine cancers, breast cancer, and bronchial carcinoma, but not in pancreatic and hepatic cancer. Hence, reduced expression of nuclear lamins may serve as a potential indicator of early stages of gastrointestinal cancer, and cytoplasmic detection as an indicator of a more malignant form. 26

Role of lamins in neuroblastoma

Lamin A/C shows spatiotemporal expression during development and has a potential role in neurogenesis. 8 Neuroblastoma is a solid tumor frequently observed in childhood, involving primitive cells of the sympathetic nervous system with an ability to undergo differentiation. A key therapeutic intervention for this aggressive tumor is to induce differentiation with various chemical agents. 27 The observation that initial stages of human neuroblastoma show reduced expression of lamin A/C in the majority of cases prompted a study of the role of A-type lamins in differentiation and progression of neuroblastoma by knocking down lamin A/C in neuroblastoma cells. Cells with depleted lamin A/C levels fail to undergo retinoic acid-induced differentiation and show increased cell migration and drug resistance. The inability to differentiate is further indicated by the absence of distinctive neurite outgrowth and reduced expression of neural markers.
28 Collectively, these studies indicate that reduced levels of lamin A/C expression could be used as a diagnostic tool for the more aggressive form of neuroblastoma.

Role of lamins in prostate cancer

A study on the progression of various states of prostate cancer, comparing several human prostate cancer cell lines, observed a correlation between post-translational modification of B-type lamin and the state of differentiation/proliferation in prostate cancer cells. Although the expression levels of B-type lamins were comparable between the cell lines, the malignant PC3 cells showed an increase in lamin B phosphorylation. As lamins have been identified as a primary component of the nuclear matrix (NM) and matrix-associated regions (MARs), the authors speculate that modifying the interactions between the NM and MARs may affect gene expression, giving rise to a more malignant phenotype. 29 These results are supported by an independent study from a different group, which reported that knock-down of the nuclear protein MeCP2 in PC3 and LNCaP cells causes aberrant proliferation and defective cell cycle progression. This defect is accompanied by diminished lamin A/C, lamin B1, and lamin B receptor (LBR) protein levels and altered nuclear shape. 30 Since MeCP2 interacts with LBR and HP1 to anchor chromatin at the nuclear periphery, its deficit might lead to reduced cell proliferation and viability. Another study involving prostate cancer cell lines observed an increase in lamin B-deficient microdomains (LDMDs) and nuclear lobulation, often correlating with augmented aggressiveness and motility of prostate cancer cells. Genes localizing to LDMDs show decreased expression due to stalled Pol II at the promoters in that region. The authors demonstrated that chromosomal regions linked to prostate cancer susceptibility mostly localize to LDMDs. 31 These observations provide mechanistic insights into the role of B-type lamins in the development and progression of prostate cancer.
Another investigation, using different prostate cancer cell lines as a model for disease progression, found that increased A-type lamin expression leads to increased cell growth, colony formation, and malignancy of prostate cancer cells. It can be argued that increased A-type lamin expression may modulate the PI3K/AKT/PTEN signaling pathway through altered mechanotransduction between the nucleus and the cytoplasmic membrane, leading to aberrant cell proliferation. 32 Using 2D-DIGE and MALDI-TOF/TOF mass spectrometry, Skvortsov et al. reported lamin A as an abundant protein differentially expressed between low and high Gleason score prostate tumors. These observations clearly suggest that A-type lamins might serve as a biomarker of tumor differentiation and prognosis and as a novel therapeutic target for prostate cancer. 33

Role of lamins in germ cell cancer

Seminoma and non-seminoma are the two main types of germ cell tumors that occur in men. They differ in growth rate, with non-seminoma growing more rapidly in comparison. Most non-seminoma tumors are of mixed type, and the percentage of the embryonic carcinoma subtype predicts the malignancy of the germ cell tumor. 34 In an attempt to study lamin expression in testicular germ cell tumors, cryopreserved tissue sections of normal testis and various testicular germ cell tumors were co-immunostained for both A- and B-type lamins. In testicular germ cell tumors, while B-type lamins were frequently found to be expressed, A-type lamins showed differential expression, with only lamin C being expressed in embryonic carcinoma. This differential expression could help establish detection of embryonic carcinoma in tumors and act as a prognostic marker. 35

Role of lamins in liver cancer

Like other cancers, hepatocellular carcinoma (HCC) involves aberrant gene expression and is frequently observed along with liver cirrhosis.
Lamin B1 may be considered a marker for cirrhosis, because its expression level changes considerably in cirrhotic tissue compared with normal tissue. 36 The expression of both lamin subtypes, not only lamin B1, was investigated in hepatocellular regeneration during liver cirrhosis and in different grades of hepatocellular carcinoma. Immunohistochemistry on frozen tissue sections revealed lamin expression in both cirrhosis and carcinomas. 37 Proteomic expression profiling and a clinicopathological study of disease-free individuals and patients suffering from cirrhotic liver and HCC identified lamin B1 and vimentin as the predominant proteins elevated in cancerous tissues. Circulating lamin B1 and vimentin could serve as novel biomarkers of early-stage HCC that could be detected by noninvasive methods. 38 Another group performed a similar proteomic analysis of normal and cancerous tissue and identified sarcosine dehydrogenase, liver carboxylesterase, peptidyl-prolyl isomerase A, and lamin B1 as novel hepatocellular carcinoma biomarkers. 36 Hence, testing for increased lamin B1 expression could lead to early detection of hepatocellular carcinoma. Thus, lamin B1 in combination with other liver enzymes overexpressed in hepatocellular carcinoma could be a good diagnostic marker.

Role of lamins in lung cancer

The correlation between lamin expression and lung cancer was one of the earliest investigations relating lamins to cancer progression. 38 The Kaufmann group showed that A-type lamins are decreased in small cell lung cancer (SCLC) cell lines. They also demonstrated that lamin A/C levels were more than 80% lower in SCLC cell lines compared with non-SCLC lines. 38,39 When lamin expression was compared between small cell lung cancer (SCLC), squamous cell carcinomas, and adenocarcinomas, it was found that lamin B expression was unaltered, while lamin A/C expression was weaker in SCLC cell lines compared with non-SCLC cell lines.
39 Expression of the v-rasH oncogene in the NCI-H249 small cell line gives rise to a phenotype resembling large cell carcinoma of the lung and an increase in lamin A/C and vimentin expression. Here, increased lamin A/C expression is associated with increased malignancy of lung cancer. 40 Another study took a closer look at the protein expression profile of the cancer cell line A549 and compared it with the normal lung fibroblast cell line MRC-5. Lamin A/C was found to be overexpressed in A549 cells and was postulated to be a biomarker of lung cancer for early detection. 41 Studying the expression of A-type lamins in the lung adenocarcinoma cell line GLC-A1 showed that not only was the expression of lamin A/C reduced, but there were also significantly higher levels of lamin C compared with lamin A. 42 Recent studies with A549 cell lines and green tea polyphenols, reported to have antitumor properties, showed that green tea extract induced upregulation of lamin A/C expression. The increased A-type lamins in turn can lead to altered actin remodeling and, consequently, reduced cell motility. This result contradicts the earlier discussed report, as the authors here show that lamin A/C overexpression seemingly reduces the migratory properties of lung cancer cells. 43 In most lung cancers, B-type lamins are generally overexpressed. Taken together, lamin A/C may act as a diagnostic marker in early stages of lung cancer, while lamin B1 may be a good marker for later stages.

Role of lamins in skin cancer

Expression patterns of lamin subtypes in normal human skin, actinic keratosis, squamous cell carcinoma (SCC), and basal cell carcinoma (BCC) were correlated with their proliferative potential using immunohistochemistry.
Though A- and B-type lamins are expressed in both normal and cancerous epidermal cells, a high percentage of the proliferating cells found in basal and squamous cell carcinomas stain positive for lamin A expression, suggesting these cells may undergo differentiation. 44 Another important oncogene that correlates well with BCC progression is GLI1. A small-molecule inhibitor of hedgehog signaling, vismodegib, has recently been approved by the FDA for BCC treatment. While many strategies have been documented to overcome GLI1's role in cancer, 20,45 especially in BCC, 46 the relationship between lamins and GLI1 remains to be elucidated in detail. Another study examined the importance of variation in lamin expression as a diagnostic marker in keratinocytic tumors. The expression of all lamin subtypes was reduced, with lamin B showing a heterogeneous pattern in differentiated SCCs and keratoacanthomas. 47 A detailed analysis of lamin subtypes was performed in tissue sections of basal cell carcinomas, leading to the categorization of four cell types: lamin A-negative, lamin C-negative, lamin A/C-negative, and lamin A/B2-negative. Correlation of these cell subtypes with proliferation rates revealed that absence of lamin A is associated with high proliferation rates and absence of lamin C with a slow growth rate, hence implicating absence of A-type lamin expression in cancer progression. 48 A proteomic comparison between normal human oral keratinocytes and oral squamous cell carcinoma-derived cell lines found that twenty-two proteins were differentially expressed, including annexin A1, heat shock protein 27, and lamin A/C. 49 These investigations explore a possible avenue for A-type lamins as a signature molecule for oral cancer diagnosis.

Role of lamins in ovarian cancer

Recently, in a proteomic exploration to identify potential biomarkers for ovarian cancer (OC) in women with polycystic ovary syndrome (PCOS), tissue samples from women with and without OC were compared.
The authors found that six biomarkers (calreticulin, fibrinogen-γ, superoxide dismutase, vimentin, malate dehydrogenase, and lamin B2) were overexpressed both in women with OC and in women with PCOS. These biomarkers could help identify the possible risk of ovarian cancer in women with PCOS. 50 Further studies would be necessary to evaluate the true potential of these biomarkers.

Role of lamins in breast cancer

Samples of breast cancer and associated non-cancerous tissues were examined to correlate the expression of lamin A/C, lamin B1, and LBR with the various stages of human breast cancer and their clinical outcome. While higher expression of LMNA was associated with early stages of cancer and hence favorable prognosis, the expression of LMNB1 correlated directly with tumor grade and declined with increasing probability of mortality. Hence, in breast cancer, decreased expression of LMNB1 is associated with poor prognosis. 51 Another study reported null expression of A-type lamins in a majority of cancerous tissues, or aberrant, heterogeneous expression in breast cancer cells. Knockdown of lamin A/C expression by shRNA led to cancer-like altered morphology and aneuploidy in primary breast epithelial cells, implicating A-type lamins in breast cancer progression. 52

Role of lamins in progeria disorder

Progeria, or premature aging, is a severe systemic disorder caused by mutations that alter lamin A processing, leading to the formation of a truncated protein, progerin. Progerin also accumulates in different tissues of normal individuals as they age, suggesting it has a role in normal aging. 53 Aging also leads to genomic instability, one of the leading risk factors for cancer, raising speculation about a correlation between progerin and cancer. Mouse models in which cells accumulate prelamin A could help establish the relationship between prelamin A accumulation and the invasive properties of cancer.
Silencing of the gene ZMPSTE24 in breast, oral, and lung cancer models causes progerin accumulation and changes in the proteoglycan synthesis pathway, leading to increased production of over-sulfated forms of chondroitin sulfate and heparan sulfate. These changes in ECM components could lead to reduced invasive potential and establish progerin as a safeguard against cancer. Experiments performed in mosaic mouse models comprising both prelamin A- and normal mature lamin A-expressing cells indicate that prelamin A does not affect tumor initiation, but suggest that it prevents cancer invasion. 54 Another study investigating the role of progerin in human prostate, breast, and colon cancer cell lines detected higher than normal expression of progerin in cancer cells. Ectopic progerin expression does not cause cellular senescence in cancer cells, and this could be attributed to the defective DNA damage repair associated with progeria. Based on these results, the authors hypothesize that progerin could promote tumor formation by increasing either DNA damage or genomic instability. 55 However, no thorough clinical investigation to corroborate progerin's role in cancer has been made, possibly due to the quite short lifespan of progeria patients.

Lamins: prognostic or diagnostic biomarker?

Cell motility, cell migration, and invasion are critical factors in the progression of metastasis. Various studies have indicated that lamin A/C is undeniably involved in the proliferation, migration, and invasion of various cancer cells. Enlargement and distortion of nuclear shape are also characteristic features of a malignant cell. As lamins play a vital role in providing structural and mechanical strength to the nucleus, their role in cancer has been under considerable scrutiny. Lamins often show aberrant expression and localization in cancer cells, as shown in Table 1.
Aberrant localization of lamin A/C to the cytoplasm has been observed in various cancers, such as lung, colorectal, and gastric cancers. 21,26,39 Lamins are also shown to regulate oxidative stress in cancer cells. 56 While low levels of ROS induce cell proliferation, increased ROS often results in DNA damage. ROS also acts as a basic signaling molecule in both normal and cancer cells. The bystander effect induced by radiation or chemotherapeutic agents is also mediated mostly by ROS. [57][58][59][60] However, the role of lamins in cancer cells and bystander cells has to be elucidated in detail. DNA damage and repair proteins, especially DNA double-strand break repair proteins such as ATM, ATR, H2AX, 61 BRCA1, and FANCD2, 62-64 must be tightly regulated to avoid genomic instability and carcinogenesis. The dysregulation of A-type lamins impacts transcription, DNA replication and repair, and epigenetic modification of chromatin, hence inducing genomic instability that can contribute to cancer progression. 65 Thus, it will be quite interesting to study the nuclear and cytoplasmic roles of lamins in cancer progression and recurrence. The role of lamins in both promotion and inhibition of apoptosis suggests a strong correlation between expression of lamins and malignancy of tumor cells. 66 The current ambiguity in assigning lamins as a cancer biomarker is due to their variable expression between cancer subtypes. 67 Lamin B is universally expressed in many cell types, even cancer cells, making it a poor diagnostic marker for most studies, except in the case of HCC, where its overexpression is observed in both early and late stages of cancer. Similarly, lamin A/C might not serve as a useful diagnostic biomarker due to the variability between cancer subtypes. Larger statistical studies are necessary before clinical diagnosis utilizes aberrant expression or localization of lamins as a diagnostic biomarker for cancer.
Differentiation of tumor cells correlates with cancer prognosis, with higher differentiation correlating with better prognosis and low differentiation leading to poor prognosis. 67 The potential of lamin B as a prognostic marker has been well studied in prostate and pancreatic cancer, where it is associated with the more malignant form of the disease. Expression of lamin A/C is often used to demarcate differentiated cancer cells. Lamin A/C expression is also shown to alter the expression of E-cadherin, which leads to reduced cell adhesion. An increase in cancer cell motility leads to metastasis and hence significantly worsens the prognosis. 67 Therefore, depending on the subtype of cancer involved, lamin A/C has the potential to be a cancer biomarker, but detailed mechanistic investigations of the role of lamins in cancers are still wanting. This also indicates that lamins can be used as a prognostic marker in combination with other cancer markers. The role of progerin in the development of cancer is also not well elucidated, and this could be another important area of emphasis. It will also be interesting to study the role of other nuclear lamin-related proteins such as emerin and LAP2α (lamina-associated polypeptide 2α), which are also overexpressed in various cancer cells. The crosstalk between these proteins and lamins is not well studied.

Conclusions

In conclusion, lamins are overexpressed in most cancers and have the ability to maintain cancer cell homeostasis. In particular, lamins help maintain cell differentiation, proliferation, and motility in tumors, which is essential for aggressive tumors. Conversely, contradictory results exist regarding the role of lamins in cancer. This ambiguity could be addressed by considering the numerous diverse functions that lamins perform in the cell and by relating their expression patterns to their roles in cancers that arise from different cell types.
As highlighted in the present review, lamins may serve better as a diagnostic biomarker than as a prognostic marker. However, a 'bench to bedside' approach correlating lamin expression analysis with a large number of clinical samples could provide better insight for cancer diagnostics.
Use of loop diuretics in patients with chronic heart failure: an observational overview

Introduction This study aimed to evaluate the use and dose of loop diuretics (LDs) across the entire ejection fraction (EF) spectrum in a large, 'real-world' cohort of chronic heart failure (HF) patients. Methods A total of 10 366 patients with chronic HF from 34 Dutch outpatient HF clinics were analysed regarding diuretic use and diuretic dose. Data regarding daily diuretic dose were stratified by furosemide dose equivalent (FDE) >80 mg or ≤80 mg. Multivariable logistic regression models were used to assess the association between diuretic dose and clinical features. Results In this cohort, 8512 (82.1%) patients used diuretics, of which 8179 (96.1%) used LDs. LD use was highest among HF with reduced EF (HFrEF) patients (81.1%), followed by HF with mildly reduced EF (76.1%) and HF with preserved EF (73.8%, p<0.001). Among all LD users, the median FDE was 40 mg (IQR: 40-80). The results of the multivariable analysis showed that New York Heart Association (NYHA) classes III and IV and diabetes mellitus were among the strongest determinants of an FDE >80 mg across all HF categories. Renal impairment was associated with a higher FDE across the entire EF spectrum. Conclusion In this large registry of real-world HF patients, LD use was highest among HFrEF patients. Advanced symptoms, diabetes mellitus, and worse renal function were significantly associated with a higher diuretic dose regardless of left ventricular ejection fraction.

INTRODUCTION

Loop diuretics (LDs) play a key role in the treatment of chronic heart failure (HF), to prevent congestion, alleviate symptoms, and maintain euvolemia. 1 2 The use of LDs in HF patients is highly recommended (class I) by the European Society of Cardiology (ESC).
3 However, the level of objective scientific evidence for its effectiveness is low (level C), and the recommendation is mainly based on expert consensus. In fact, the optimal dose and intensity of LDs in HF patients are not well described. Since clinical trials of LDs in HF patients are out of the question, evidence gained from large-scale registries of representative practice is crucial. Against this background, we studied the use of LDs, including the daily dose and determinants of its use, in the CHECK-HF (Chronisch Hartfalen ESC-richtlijn Cardiologische praktijk Kwaliteitsproject-HartFalen) registry, a large and 'real-world' cohort of chronic HF patients in The Netherlands.

WHAT IS ALREADY KNOWN ON THIS TOPIC
⇒ Loop diuretics (LDs) are highly recommended and form a cornerstone medication in the treatment of chronic heart failure (HF).
⇒ LDs help to prevent congestion and maintain euvolemia.
⇒ The vast majority of HF patients are prescribed LDs.

WHAT THIS STUDY ADDS
⇒ The current objective evidence remains low regarding the description of the usage and dosage of LDs.
⇒ This study offers extensive 'real-world' data, providing more insight on this subject across the entire ejection fraction spectrum.

HOW THIS STUDY MIGHT AFFECT RESEARCH, PRACTICE OR POLICY
⇒ This study gives insight into the dose, and the determinants related to the dose, of LDs.

Study sample, setting and design

The design and methods of the CHECK-HF registry have been described in more detail previously. 4 In short, CHECK-HF is a cross-sectional registry consisting of 10 910 unselected chronic HF patients from 34 participating Dutch hospitals, who were seen at the outpatient HF clinic (96%) or general cardiology outpatient clinic (4%). Data regarding patient characteristics, laboratory results, echocardiography, and detailed information on HF therapy were collected in the registry.
Data and analysis

The current study included 10 366 HF patients in which data regarding diuretic use and left ventricular ejection fraction (LVEF) were available. Patients were categorised in three groups: HF with preserved EF (HFpEF; n=2153, 20.7%), HF with mildly reduced EF (HFmrEF; n=1535, 14.8%), and HF with reduced EF (HFrEF; n=5614, 54.2%), according to the 2016 ESC HF guidelines. 6 In 1064 patients (10.3%), LVEF was reduced (ie, <50%) but not exactly measured; these patients were stratified as semiquantified LVEF. In patients using LDs other than furosemide (ie, bumetanide), the LD dose per day was multiplied by 40 to obtain the furosemide dose equivalent per day (FDE). Given that torasemide is unavailable in the Netherlands, this study exclusively involves furosemide and bumetanide. We distinguished patients using an FDE >80 mg as 'high' dose and FDE ≤80 mg as 'low' dose.

We present the baseline characteristics of LD users and non-LD users in HFpEF, HFmrEF, HFrEF, and semiquantified-LVEF patients separately. Between-group differences in these characteristics were evaluated by χ² tests and Fisher's exact tests (categorical data), and by Mann-Whitney U tests and one-way analyses of variance (continuous data). Logistic regression analysis was applied to identify factors associated with FDE dosage ('low' vs 'high'), for which we considered age and sex, clinical factors including body mass index (BMI), NYHA class, and renal impairment (estimated glomerular filtration rate (eGFR) <60 mL/min/1.73 m²), and the use of other cardiovascular medication, including beta-blockers, renin-angiotensin-system (RAS) inhibitors, and mineralocorticoid receptor antagonists (MRAs). The results are expressed as ORs with 95% CIs. Multiple imputation was applied to account for missing data, using the monotone method or the Markov chain Monte Carlo method. Finally, in patients with HFrEF, HFmrEF, and impaired semiquantified LVEF, we compared high and low FDE with the target dose of
guideline-directed medical therapy (GDMT). Statistical analysis was performed using SPSS software V.25.0. P values <0.05 were considered statistically significant.

RESULTS

In this cohort of 10 366 HF patients, 8512 (82.1%) patients used diuretics, of which 8179 (96.1%) used LDs. LD use was highest among HFrEF patients (81.1%), followed by HFmrEF (76.1%) and HFpEF (73.8%, p<0.001). Among patients with an impaired semiquantified LVEF, 81.8% used LDs. Across the entire LVEF spectrum, patients using LDs were older, were more often men, had a higher BMI and NYHA class, lower systolic and diastolic blood pressure, lower eGFR, and more often had diabetes mellitus than the non-LD users (table 1). Also, irrespective of LVEF, patients using LDs more often used MRAs and less often used RAS inhibitors. We found no differences in the use of beta-blockers. Figure 1 shows the distribution of FDE among the three HF groups and the impaired semiquantified group.

Multivariable analysis

Across the entire LVEF spectrum, HF patients using a high dose of LDs were associated with a higher BMI and potassium levels and a lower blood pressure (table 2). Also, these patients had a higher likelihood of having diabetes mellitus, atrial fibrillation, and renal impairment. In patients with HFpEF and HFrEF, a high FDE was associated with NYHA classes III and IV (table 2). The outcomes of the multivariable analysis in the imputed dataset were compared with the outcomes of the complete-case analysis; both analyses show the same trends for all variables included in the multivariable analysis.
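The dose-equivalent conversion and stratification described in the Methods can be sketched as follows. This is an illustrative sketch: the function names and example doses are hypothetical, and only the x40 bumetanide factor and the 80 mg cut-off come from the text.

```python
def furosemide_dose_equivalent(drug: str, daily_dose_mg: float) -> float:
    """Return the daily furosemide dose equivalent (FDE) in mg.

    Per the Methods, bumetanide doses are multiplied by 40; torasemide is
    unavailable in the Netherlands, so only these two drugs are handled.
    """
    if drug == "furosemide":
        return daily_dose_mg
    if drug == "bumetanide":
        return daily_dose_mg * 40  # 1 mg bumetanide ~ 40 mg furosemide
    raise ValueError(f"unsupported loop diuretic: {drug}")


def dose_group(fde_mg: float) -> str:
    """Stratify into the study's 'low' (FDE <=80 mg) or 'high' (>80 mg) group."""
    return "high" if fde_mg > 80 else "low"


# Illustrative patients (invented doses)
print(dose_group(furosemide_dose_equivalent("furosemide", 40)))  # the median FDE
print(dose_group(furosemide_dose_equivalent("bumetanide", 3)))   # 3 x 40 = 120 mg
```

Note that the 80 mg threshold is inclusive on the 'low' side (FDE ≤80 mg), matching the stratification in the Methods.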
GDMT target dose in HFmrEF, HFrEF and impaired semiquantified LVEF Figure 2 shows the difference in the target dose of guideline-recommended HF medication between patients using high FDE versus low FDE in patients with HFmrEF, HFrEF and an impaired semiquantified LVEF, respectively. Figure 2 shows that there is no difference in reaching the target dose of beta-blocker between low and high FDE for patients with HFmrEF and HFrEF (12.7% vs 12.8%, p=0.96 and 14.6% vs 13.4%, p=0.39). However, patients with HFmrEF and those with HFrEF using low FDE more often reach the target dose of RAS inhibition (30.0% vs 19.5%, p=0.001 and 36.6% vs 21.5%, p<0.001). The opposite is seen in the use of MRAs, in which patients using high FDE more often reach their target dose for both HFmrEF and HFrEF (4.4% vs 16.8%, p<0.001 and 4.2% vs 16.6%, p<0.001).

DISCUSSION In this study, we evaluated the overall use of diuretics and daily dose in a large cohort of chronic HF patients. The median dose of FDE was relatively low. The highest dose was used in patients with reduced LVEF, and the level of GDMT was negatively influenced by a higher diuretic dose. Also, impaired renal function was associated with a higher dose of LD, which is important to realise and consider in clinical practice. Furthermore, our analyses show that a higher dose of LD was associated with symptomatic HF. Still, the guidelines are in contrast to these observations, as they recommend minimising LD use to preserve renal function. 3 Interestingly, in this regard, diabetes mellitus was among the strongest determinants of LD use, which is also related to renal preservation. 7 In the current literature, large real-world registries describing the use of diuretics and/or the daily dose of diuretics in detail are scarce. 8 Data from the EuroHeart Failure Survey programme showed that in patients with an LVEF of less than 40%, the use of LDs was higher compared with those with a higher LVEF.
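For comparisons of proportions like those above (eg, 30.0% vs 19.5% for reaching the RAS-inhibitor target dose), a two-proportion z-test is one standard way to obtain such p-values. This is a hedged sketch: the methods specify only χ²/Fisher tests in general, and the per-group sizes are not given in the text, so the n values below are assumptions for illustration:

```python
import math

def two_proportion_z(p1, n1, p2, n2):
    """Two-sided two-proportion z-test using the pooled-proportion
    normal approximation; returns (z, p_value)."""
    pooled = (p1 * n1 + p2 * n2) / (n1 + n2)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    z = (p1 - p2) / se
    p_value = math.erfc(abs(z) / math.sqrt(2))  # two-sided tail of the standard normal
    return z, p_value

# 30.0% vs 19.5% target-dose RAS inhibition (proportions from the text);
# the group sizes 400 and 700 are invented for illustration only.
z, p = two_proportion_z(0.300, 400, 0.195, 700)
print(f"z = {z:.2f}, p = {p:.5f}")
```

With group sizes of this order, a 30.0% vs 19.5% difference is highly significant, consistent in direction with the p=0.001 reported for HFmrEF.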
9 These results, together with the distribution of furosemide dosages, are in line with the results from our registry. Interestingly, in contrast with our study, the retrospective study of Broscious et al 10 showed significantly more use of LDs in HFpEF. Also, the FDE among HFpEF patients tended to be higher in that study compared with HFrEF. Since our study had a cross-sectional design, we were unable to investigate clinical outcomes with regard to diuretic use. 11-13 In the study conducted by Faselis et al, 11 the authors showed that patients who receive LDs after hospitalisation for HF decompensation had significantly better 30-day clinical outcomes. Unfortunately, this study did not provide comprehensive information on LD daily doses. In contrast, Nuzzi et al 12 described the use of LDs in patients with dilated cardiomyopathy and showed that LD use and an increasing FDE over time are strong indicators of a clinical event. In addition, the study of Pellicori et al 13 also showed a worse prognosis for patients using LDs. However, after adjusting for severity of congestion, neither the use nor the dose of LDs was associated with clinical outcomes. The latter two studies described similar LD daily dose profiles as the HFrEF group from our study. Current guidelines recommend prescribing LDs at the lowest dose possible to prevent congestion and discontinuing them when possible to preserve renal function. 3 In view of these studies, one might expect that discontinuing LDs will occur more frequently over time. This may be related to the fact that high doses of LDs are more an indication of more advanced HF rather than deleterious per se, which is also in line with the findings of our study.
13 14

Guideline-recommended therapy in HFmrEF and HFrEF Our results show that patients using a high FDE in HFmrEF and HFrEF are less likely to reach the target dose of RAS inhibitors and more likely to reach the target dose for MRAs. This finding is in agreement with the 'Enhanced Feedback for Effective Cardiac Treatment' (EFFECT) study. 15 Interestingly, comparing the data from the EFFECT study with the current cohort shows that over 10 years, little has changed regarding the use of LDs. The more recent 'A systems BIOlogy Study to Tailored Treatment in Chronic Heart Failure' (BIOSTAT-CHF) study 16 showed that higher dosages of LDs limited the uptitration of RAS inhibitors in HFrEF patients who were on suboptimal GDMT. Also, at high dosages of LDs, patients tended more often to use MRAs at higher dosages, which is in line with our results. Another interesting finding is the higher diuretic need in diabetes mellitus. While our cohort has no information on sodium-glucose cotransporter 2 (SGLT2) inhibitors, which were not available for HF at that time, it is relevant to note the diuretic and natriuretic effect of SGLT2 inhibitors, as well as their many pleiotropic effects, which partly explain their efficacy in recent HF trials. 17 18 It will be informative for future studies to examine in HF registries the effect of SGLT2 inhibitors on (lowering) the daily dose of diuretics.

Strengths and limitations This registry contains a large number of HF patients in a Western population treated according to European guidelines, and describes the use of LDs across the entire EF spectrum. However, limitations need to be mentioned. As this registry contains only cross-sectional data, no clinical outcome data were available.
Future perspective LDs are a cornerstone treatment for alleviating decompensation and maintaining euvolemia in HF patients. It will therefore be informative to observe in HF registries the effect of angiotensin receptor neprilysin inhibitors (ARNIs) and SGLT2 inhibitors on LD usage, particularly when compared with data from CHECK-HF. The 'Prospective comparison of ARNI with ACEI to Determine Impact on Global Mortality and morbidity in Heart Failure' (PARADIGM-HF) trial already showed that the use of ARNIs was associated with a reduction in LD dose, compared with an ACE inhibitor. 19 The recent evolution of haemodynamic monitoring (eg, CardioMEMS, Cordella) in HF offers the opportunity to titrate LDs based on haemodynamic measurements and according to a predefined treatment guideline. 20 21 The observed benefits of these techniques are mainly driven by changes in LDs.

CONCLUSIONS In this large registry of real-world HF patients, loop diuretic use was highest among HFrEF patients. Advanced symptoms, diabetes mellitus and worse renal function were significantly associated with a higher diuretic dose regardless of LVEF.

Figure 1 Distribution of loop diuretic dose (furosemide dose equivalent per day) in heart failure with preserved ejection fraction (HFpEF), heart failure with mildly reduced ejection fraction (HFmrEF), heart failure with reduced ejection fraction (HFrEF) and semiquantified left ventricular ejection fraction (LVEF).
2023-11-29T06:17:05.399Z
2023-11-01T00:00:00.000
{ "year": 2023, "sha1": "12bd1fe8ba0602c4e1ac620db9a6d0a843c2ebb7", "oa_license": "CCBYNC", "oa_url": "https://openheart.bmj.com/content/openhrt/10/2/e002497.full.pdf", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "01c783b21f2aade52f151b0d88fa8606ad6f17f1", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
18855015
pes2o/s2orc
v3-fos-license
Martensitic Transformation in Ni-Mn-Sn-Co Heusler Alloys The thermal and structural austenite-to-martensite reversible transition was studied in melt-spun ribbons of Ni50Mn40Sn5Co5, Ni50Mn37.5Sn7.5Co5 and Ni50Mn35Sn10Co5 (at. %) alloys. Analysis of X-ray diffraction patterns confirms that all alloys have a martensitic structure at room temperature: four-layered orthorhombic 4O for Ni50Mn40Sn5Co5, four-layered orthorhombic 4O and seven-layered monoclinic 14M for Ni50Mn37.5Sn7.5Co5, and seven-layered monoclinic 14M for Ni50Mn35Sn10Co5. Analysis of differential scanning calorimetry scans shows that the highest enthalpy and entropy changes are obtained for the alloy Ni50Mn37.5Sn7.5Co5, whereas transition temperatures increase with increasing valence electron density.

Introduction Ferromagnetic shape memory (FSM) alloys are of considerable interest due to their exceptional magnetoelastic properties. Their potential functional properties include: magnetic superelasticity [1], a large inverse magnetocaloric effect [2] and a large magneto-resistance change [3]. Most of these effects are ascribed to the existence of a first-order martensitic transformation with a strong magneto-structural coupling. Transformation temperatures of shape memory alloys depend on the composition and their values span a very wide range [4]. These materials are interesting for the development of new magnetically driven actuators, sensors and coolers for magnetic refrigeration [5].
FSM behavior is found in Heusler alloys, which have a generic formula X2YZ and are defined as ternary intermetallic systems with an L21 crystalline cubic structure. The most extensively studied Heusler alloys are those based on the Ni-Mn-Ga system. However, to overcome some of the problems related to practical applications (such as the high cost of gallium and the usually low martensitic transformation temperature), Ga-free alloys have been frequently sought and analyzed during the last few decades, specifically with the introduction of In or Sn. Martensitic transformation in ferromagnetic Heusler Ni50Mn50−xSnx bulk alloys with 10 ≤ x ≤ 16.5 was first reported by Sutou et al. [6]. Later, Krenke et al. studied magnetic and magnetocaloric properties and phase transformations in Ni50Mn50−xSnx alloys with 5 ≤ x ≤ 25 [7]. Rapid solidification techniques, such as melt-spinning, are an alternative way to obtain these materials (in ribbon shape) [8,9].

Another important factor affecting the magnetic behavior of Ni-Mn-Sn and Ni-Mn-Sn-Co systems is the annealing process. Some authors have found different magnetic behavior in melt-spun Ni-Mn-Sn-Co ribbons annealed at temperatures from 973 K to 1173 K [10,11].

In our work, we investigate the structural and thermal behavior of three melt-spun alloys of the Ni-Mn-Sn-Co system (by modifying the Mn and Sn atomic %). These ribbons were not annealed.
Experimental Section Polycrystalline Ni-Mn-Sn-Co alloy ingots were prepared by arc melting high-purity (99.99%) elements (Alfa Aesar, Heysham, UK) under an argon environment in a water-cooled quartz crucible. The ingots were melted three times to ensure good homogeneity. Then, the ingots were melt-spun on a rotating copper wheel (Buehler, Lake Bluff, IL, USA) with the following controlled process parameters: linear wheel speed (48 m s−1), atmosphere (argon, 400 mbar), injection overpressure (500 mbar) and distance between wheel and injection quartz crucible (3 mm). The as-spun ribbon samples obtained were: Ni50Mn40Sn5Co5, Ni50Mn37.5Sn7.5Co5 and Ni50Mn35Sn10Co5 (at.%). The main difference among these alloys is the partial substitution of Mn by Sn, whereas the content of Ni and Co is constant.

Thermal and structural analyses were performed by applying several techniques. Scanning electron microscopy (SEM) investigations were carried out using a Zeiss DSM 960A microscope (Zeiss, Jena, Germany) operating at 30 kV and linked to an energy dispersive X-ray spectrometer (EDX; Zeiss, Jena, Germany). X-ray diffraction (XRD) analyses were performed at room temperature with a Siemens D500 X-ray powder diffractometer (Bruker, Billerica, MA, USA) using Cu-Kα radiation. Thermal analyses were performed by differential scanning calorimetry (DSC) using a Mettler-Toledo DSC822e calorimeter (Mettler Toledo, Columbus, OH, USA) working at a heating/cooling rate of 10 K/min under an argon atmosphere.
Results and Discussion Heusler alloys produced by melt spinning show a typical columnar structure in the fracture cross-section. Figure 1 shows the micrographs of the fracture section of alloys Ni50Mn40Sn5Co5, Ni50Mn37.5Sn7.5Co5 and Ni50Mn35Sn10Co5, labeled as A, B and C respectively. All ribbon flakes have a similar morphology, consisting of a fully crystalline, granular, columnar-type microstructure. This is a sign of the quick crystallization and fast growth kinetics of the samples, and suggests that the heat removal during the rapid solidification process induces directional growth of the crystalline phase. The ribbons' thickness is also similar (between 12 and 15 µm). Crystalline structures at room temperature were determined by analyzing the X-ray diffraction patterns of the three samples (see Figures 2-4 for alloys Ni50Mn40Sn5Co5, Ni50Mn37.5Sn7.5Co5 and Ni50Mn35Sn10Co5, respectively). X-ray diffraction analysis begins with an indexation based on the identification proposed by other authors [7,12,13]. The lattice parameters were first calculated by minimizing the global interplanar spacing (dhkl) error, defined as the difference between the values calculated from the Bragg equation for every identified peak of the XRD pattern and those given by the crystal system geometry equation.

It is found that, at room temperature, all alloys have a martensitic structure. This martensitic structure is confirmed to be four-layered orthorhombic 4O for Ni50Mn40Sn5Co5, four-layered orthorhombic 4O and seven-layered monoclinic 14M for Ni50Mn37.5Sn7.5Co5, and seven-layered monoclinic 14M for Ni50Mn35Sn10Co5. Lattice parameters are given in Table 1 (for the Ni50Mn37.5Sn7.5Co5 alloy only the parameters of the main phase, 4O, are given).
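The indexation procedure above compares, for each peak, the d-spacing obtained from the Bragg equation with the one predicted by the crystal-geometry equation. A minimal sketch for an orthorhombic 4O cell follows; the 2θ value and lattice parameters are placeholders for illustration, not the Table 1 values:

```python
import math

CU_KALPHA = 1.5406  # Cu-Kalpha wavelength in angstroms

def d_from_bragg(two_theta_deg, wavelength=CU_KALPHA, n=1):
    """Interplanar spacing from Bragg's law: n*lambda = 2*d*sin(theta)."""
    theta = math.radians(two_theta_deg / 2)
    return n * wavelength / (2 * math.sin(theta))

def d_orthorhombic(h, k, l, a, b, c):
    """Geometry equation for an orthorhombic cell:
    1/d^2 = h^2/a^2 + k^2/b^2 + l^2/c^2."""
    inv_d2 = (h / a) ** 2 + (k / b) ** 2 + (l / c) ** 2
    return 1 / math.sqrt(inv_d2)

# Indexation error for one peak: the difference between the observed (Bragg)
# and model (geometry) spacings; summing this over all identified peaks gives
# the quantity minimized when fitting a, b and c.
d_obs = d_from_bragg(43.0)                              # example 2-theta in degrees
d_calc = d_orthorhombic(2, 2, 0, a=8.6, b=5.6, c=4.4)   # placeholder cell parameters
print(abs(d_obs - d_calc))
```

Minimizing the sum of these per-peak errors over all indexed reflections yields the refined lattice parameters reported in Table 1.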
In our work it is found that substituting Mn with Sn favors the formation of the modulated 14M monoclinic structure. Thus, the martensitic structure is 4O in samples with a higher Mn/Sn ratio and 14M in samples with a lower Mn/Sn ratio. The opposite behavior was found in Ni-Mn-Sn bulk alloys without Co [14]. Thus, the Co addition probably influences which kind of martensitic phase is more stable. In Ni-Mn-Sn-Co ribbons, it was found that the addition of Co favors the evolution of the martensitic crystalline structure from a four-layered orthorhombic (4O) to a five-layered orthorhombic (10M) and finally to a seven-layered monoclinic (14M) [15]. Thus, the addition of Co favors the formation of the 14M structure. This effect was not found in our alloys, probably because the Co content is constant. Furthermore, it has also been found that the martensitic crystal structure changes from 14M in the bulk alloy to 4O in melt-spun ribbons due to the highly oriented microstructure [16]. In summary, the differences between our results and those from the bibliography can be influenced by the combination of three factors: constant Co content, Mn/Sn ratio and the highly oriented ribbon microstructure. More than three alloys are needed to check the influence of these parameters. At room temperature, XRD shows that all samples have a martensitic phase. Thus, the occurrence of the martensitic transformation should be checked by DSC heating from room temperature (see Figures 5-7). The reversible austenite-martensite transformation was found in all samples. The absence of any secondary thermal process suggests that the produced ribbons are homogeneous. From the DSC analysis, characteristic transformation temperatures are determined. The start and finish temperatures of the martensite and austenite transformations are referred to as Ms, Mf and As, Af, respectively. The martensitic transformation of Ni-Mn-Sn alloys is athermal in nature, although a time-dependent effect is observed through interrupted calorimetry measurements
[17]. The thermal hysteresis, ΔT, exists due to the increase of the elastic and surface energies during martensite formation. Thus, the nucleation of the martensite implies supercooling.

The equilibrium transformation temperature between martensite and austenite, To, is usually defined as (Ms + Af)/2. All the characteristic temperatures are given in Table 2. Table 2. Characteristic temperatures and thermal hysteresis as determined from DSC cyclic scans: Ni50Mn40Sn5Co5, Ni50Mn37.5Sn7.5Co5 and Ni50Mn35Sn10Co5. The start and finish temperatures of martensite and austenite formation are referred to as Ms, Mf and As, Af respectively; thermal hysteresis, ΔT; equilibrium transformation temperature between martensite and austenite, To. It is found that substituting Mn by Sn favors the decrease of the phase transition temperatures. The opposite effect was found in bulk Ni-Mn-Sn alloys without Co [14]. Similarly, the addition of Co in Ni-Mn-Sn melt-spun alloys increases the martensitic transformation temperatures [16]. When doping the alloys, it is important which atom is substituted. In Ni-Mn-Sn-Fe bulk alloys, the partial substitution of Mn by Fe causes a diminution of the transition temperatures [12]. The same effect is observed in our alloys by substituting Mn by Co. Likewise, the partial substitution of Mn does not induce a general trend in the temperatures [13], whereas Co addition in Ni-Mn-Ga alloys increases the temperatures of the martensitic transformation [18]. Furthermore, annealing also modifies the transformation temperatures and thermal hysteresis [10]. Thus, too many parameters affect the structural transformation to ascertain which one determines the behavior of our samples.
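The characteristic-temperature bookkeeping above is simple enough to express directly. The definition To = (Ms + Af)/2 is taken from the text; the hysteresis formula and the temperatures below are illustrative assumptions (the paper gives ΔT in Table 2 without an explicit formula, and these are not the Table 2 values):

```python
def equilibrium_temperature(Ms, Af):
    """To = (Ms + Af)/2, the martensite-austenite equilibrium temperature
    as defined in the text."""
    return (Ms + Af) / 2

def thermal_hysteresis(Ms, Mf, As, Af):
    """One common estimate of the hysteresis width: the offset between the
    midpoints of the heating and cooling transformations."""
    return (As + Af - Ms - Mf) / 2

# Hypothetical DSC characteristic temperatures in K (illustration only):
Ms, Mf, As, Af = 380.0, 360.0, 385.0, 405.0
print(equilibrium_temperature(Ms, Af))      # prints: 392.5
print(thermal_hysteresis(Ms, Mf, As, Af))   # prints: 25.0
```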
Changes in enthalpy, ΔH, and entropy, ΔS, during the structural transformation are calculated from the area of the DSC peaks. Figure 8 shows their evolution as a function of the average valence electron density (e/a). The shift in the characteristic temperatures and thermodynamic parameters is related to e/a [19]. The valence electrons per atom are 10 (3d8 4s2) for Ni, 9 (3d7 4s2) for Co, 7 (3d5 4s2) for Mn and 4 (5s2 5p2) for Sn, respectively.

Energy-dispersive X-ray spectroscopy microanalysis has been used to obtain the exact composition of every sample and to calculate the e/a parameter. EDX elemental compositions and average valence electron densities are presented in Table 3. The highest values of enthalpy and entropy are those of the Ni50Mn37.5Sn7.5Co5 alloy, probably due to the coexistence of two crystalline phases.

One of the most typical ferromagnetic shape memory alloy phase diagrams is the graphical representation of the martensitic start temperature as a function of the Z element content or as a function of the average valence electron density. In Figure 9 we represent the Ms temperatures obtained in this work (symbols) and those obtained assuming a linear relation in Ni-Mn-Sn bulk alloys [20]. Our results show a diminution of the transformation temperatures. The main difference is for the alloy Ni50Mn40Sn5Co5. In the literature it was found that the martensitic transformation in Heusler Ni-Mn-Sn melt-spun ribbons occurs at lower temperatures than in bulk alloys [15]. Moreover, a change of the martensitic crystalline structure from 14M to 4O takes place with the decrease of the martensitic transition temperature. It is proposed that internal stress is induced by the highly oriented microstructure, which leads to a decrease of the transition temperature because of refined martensite plates and the formation of dense martensitic variants with different orientations. These results were supported by high-resolution transmission electron microscopy
(HRTEM). Furthermore, it was also found that the partial substitution of Ni by Co shifts the martensitic transformation to lower temperatures in Ni-Mn-Sn-Co bulk alloys [21]. If our alloys follow the same trend as bulk alloys (Figure 9), the occurrence of the magnetic transformation is not clear.

Conclusions Melt-spun ribbons of three alloys of the Ni-Co-Mn-Sn system have been produced: Ni50Mn40Sn5Co5, Ni50Mn37.5Sn7.5Co5 and Ni50Mn35Sn10Co5. The austenite-to-martensite reversible transformation was found in all samples. Transformation temperatures increase as the Mn/Sn ratio increases.

The martensitic structure is four-layered orthorhombic 4O in samples with a higher Mn/Sn ratio and modulated seven-layered monoclinic 14M in samples with a lower Mn/Sn ratio. These results differ from others reported in the bibliography. Probably, these differences are caused by the combination of three factors: constant Co content, Mn/Sn ratio and the highly oriented ribbon microstructure. Furthermore, substituting Mn by Sn favors the decrease of the austenite-martensite reversible transition temperatures.

Figure 2. X-ray diffraction (XRD) pattern, at room temperature, of Ni50Mn40Sn5Co5 ribbon. The indexation corresponds to a four-layered 4O orthorhombic structure.

Figure 3. XRD pattern, at room temperature, of Ni50Mn37.5Sn7.5Co5 ribbon. The indexation of the main phase corresponds to a four-layered 4O orthorhombic structure, whereas peaks marked with * correspond to a modulated monoclinic seven-layered 14M structure.

Figure 4. XRD pattern, at room temperature, of Ni50Mn35Sn10Co5 ribbon. The indexation of the main phase corresponds to a modulated monoclinic seven-layered 14M structure.

Figure 9. Martensitic start temperature versus average valence electron density. Lines correspond to bulk alloys [20] whereas square symbols correspond to our samples.
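The average valence electron density e/a used in Figures 8 and 9 follows directly from the composition and the per-element valence electron counts given in the text (Ni 10, Co 9, Mn 7, Sn 4). The sketch below uses the nominal compositions; the paper itself uses the EDX-measured compositions of Table 3, so the numbers here are only approximate:

```python
VALENCE = {"Ni": 10, "Co": 9, "Mn": 7, "Sn": 4}  # electrons per atom, from the text

def e_per_a(composition):
    """Average valence electron density e/a from an atomic-percent composition."""
    total_at = sum(composition.values())
    return sum(VALENCE[el] * at for el, at in composition.items()) / total_at

# Nominal compositions in at.% (Table 3 lists the measured EDX values instead):
alloys = {
    "Ni50Mn40Sn5Co5":     {"Ni": 50, "Mn": 40,   "Sn": 5,   "Co": 5},
    "Ni50Mn37.5Sn7.5Co5": {"Ni": 50, "Mn": 37.5, "Sn": 7.5, "Co": 5},
    "Ni50Mn35Sn10Co5":    {"Ni": 50, "Mn": 35,   "Sn": 10,  "Co": 5},
}
for name, comp in alloys.items():
    print(f"{name}: e/a = {e_per_a(comp):.3f}")
# prints e/a = 8.450, 8.375 and 8.300 respectively
```

Since each Sn atom (4 electrons) replaces an Mn atom (7 electrons), e/a falls as the Mn/Sn ratio decreases, matching the observed trend of transition temperatures decreasing with Sn content.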
2015-09-18T23:22:04.000Z
2015-04-28T00:00:00.000
{ "year": 2015, "sha1": "3b517d7c8eba699677182c3ed120da8d9ce7d671", "oa_license": "CCBY", "oa_url": "https://www.mdpi.com/2075-4701/5/2/695/pdf?version=1430228654", "oa_status": "GOLD", "pdf_src": "Crawler", "pdf_hash": "3b517d7c8eba699677182c3ed120da8d9ce7d671", "s2fieldsofstudy": [ "Materials Science", "Physics" ], "extfieldsofstudy": [ "Materials Science" ] }
237364987
pes2o/s2orc
v3-fos-license
Beijing genotype of Mycobacterium tuberculosis is associated with extensively drug-resistant tuberculosis: A global analysis We found that the frequency of the Beijing genotype among XDR-TB strains was high. The data in this study would help guide the TB control program; however, we need further investigation to confirm the reliability of the present findings.

Dear Editor; Tuberculosis is one of the most important infectious diseases in human history and is also known as the white plague. Tuberculosis is caused by infection with Mycobacterium tuberculosis and is the second leading cause of death from infectious disease after HIV [1,2]. According to the WHO, about 10 million people fell ill with TB in 2019; furthermore, there were 1.5 million TB deaths in 2019 [3]. Despite more than a century of extensive studies, the control and eradication of tuberculosis have remained a global challenge and one of the medical emergencies considered by the World Health Organization [4,5]. It is not possible to eradicate Mtb because a quarter of the world's population is infected with latent TB. In addition, other factors such as co-infection with infectious agents (HIV, HTLV-1, HCV, and HBV), the lack of an effective vaccine in adults, and increased MDR and XDR strains all contribute to failure in the complete eradication of TB. However, continuous monitoring of patient data and genetic characterization of Mtb strains in different geographical areas can be helpful in setting local programs and global policies to control and reduce TB disease [6][7][8]. Molecular typing of Mtb strains is an important tool in evaluating the transmission and outbreaks of this disease, performed using molecular techniques including IS6110-RFLP, spoligotyping, and mycobacterial interspersed repetitive unit-variable number tandem repeat typing (MIRU-VNTR) [9]. Nowadays, nine superfamilies have been identified for the M.
tuberculosis complex, including Mycobacterium africanum, Mycobacterium bovis, Beijing, EAI, CAS, T, Haarlem, X, and LAM; more than a quarter of TB cases are due to infection with the Beijing family [10]. Interestingly, most reported MDR outbreaks are caused by the Beijing family [11]. Recently, we showed in a comprehensive literature review that the Beijing family is the most dominant resistant genotype in Iran; we also found that the frequency of the Beijing family among Iranian drug-resistant strains is significantly higher than that of the other genotypes [12]. However, the diversity of XDR-TB genotypes has not yet been properly elucidated. This study aimed to evaluate the frequency of common genotypes among XDR-TB strains worldwide. Relevant studies were collected without restriction on publication dates; also, the bibliography sections of the articles were carefully examined so as not to miss potential articles. We considered as eligible those studies published in English, with available full texts, that reported XDR-TB genotypes determined using standard methods, including IS6110-RFLP, spoligotyping, MIRU-VNTR, or whole-genome sequencing, and we excluded articles on non-XDR-TB subjects, studies with repetitive samples, studies with unclear results and insufficient data, and studies published in non-English languages. The literature search and the evaluation of eligible studies were performed by two independent authors (MK and MM). The required data, such as first author, publication year, country, geographic region, frequency of Mtb strains, frequency of XDR-TB strains, distribution of Mtb genotypes, typing method, and references, are summarized in Table 1. The frequency of each XDR-TB genotype was reported as an event rate with corresponding 95% confidence intervals (CIs); moreover, the odds ratio with 95% CI was used to measure the relationship between XDR-TB and each of the genotypes.
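The odds ratio with its 95% CI, as used above to relate each genotype to XDR-TB, can be computed from a 2x2 table on the log-odds scale (the Woolf method). This is a generic sketch, not the software's internal algorithm, and the example counts are invented for illustration:

```python
import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    """Odds ratio and Woolf-type 95% CI from a 2x2 table:
              genotype+  genotype-
    XDR-TB       a          b
    non-XDR      c          d
    The CI is built on the log-OR scale and exponentiated back."""
    or_ = (a * d) / (b * c)
    se = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)  # SE of log(OR)
    lo = math.exp(math.log(or_) - z * se)
    hi = math.exp(math.log(or_) + z * se)
    return or_, lo, hi

# Hypothetical counts (Beijing vs non-Beijing, XDR vs non-XDR), for illustration:
or_, lo, hi = odds_ratio_ci(a=60, b=40, c=140, d=260)
print(f"OR = {or_:.2f} (95% CI {lo:.2f} to {hi:.2f})")
```

An OR whose CI excludes 1 (as for the Beijing genotype in this analysis) indicates a statistically significant association in this framework.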
Heterogeneity was measured using the I 2 index and the Cochrane Q-test. Egger's p-value and Begg's p-value were used to evaluate publication bias. All the statistical analyses were performed using the Comprehensive Meta-Analysis software (Biostat, Englewood, NJ). After evaluating the potential documents, 41 eligible studies were identified. These studies were conducted between 2006-2020 in Europe, Latin America, Asia, and Africa. In these studies, genotyping of Mtb strains was performed using the IS6110-RFLP, spoligotyping, and MIRU-VNTR methods. The data of 24,659 Mtb strains were evaluated in this study. The frequency of XDR-TB strains was estimated to be about 8.3% (95% CI: 5.1-13.1; I 2 : 98.2; Q-value: 2120.6; Egger's p-value: 0.84; Begg's p-value: 0.08); furthermore, according to the subgrouping analysis, the prevalence of XDR-TB in Africa, Latin America, Asia, and Europe was estimated to be 29. We observed a significant relationship between the Beijing genotype and XDR-TB, but there was no significant relationship between the other genotypes and XDR-TB. However, an inverse association was observed in the Latin American population (OR: 0.24; 95% CI: 0.14-0.42; p-value: 0.01). Therefore, the frequency of the Beijing genotype among the XDR-TB strains was significantly higher than that of the Delhi-CAS, EAI, and Haarlem genotypes. Based on the available data, identification of the Beijing genotype, especially in patients with treatment failure, is a reliable index for XDR-TB cases. The Beijing genotype of Mycobacterium tuberculosis was first described by Van Soolingen et al. in 1995 in Beijing (China), and subsequently several outbreaks of the Beijing genotype were reported and identified in Asia, South Africa, Germany, the Canary Islands, Russia, Thailand and the United States [54,55]. According to the available reports, more than a quarter of tuberculosis cases belong to the Beijing genotype [56].
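The I² heterogeneity index reported above follows directly from the Cochrane Q statistic via Higgins' formula I² = max(0, (Q − df)/Q) × 100. A sketch using the Q-value from the text; the degrees of freedom (k − 1 = 40 for 41 studies) is our assumption, so the result only approximates the reported 98.2:

```python
def i_squared(Q, df):
    """Higgins I^2 heterogeneity statistic in percent: max(0, (Q - df)/Q) * 100."""
    return max(0.0, (Q - df) / Q) * 100

# Q-value from the pooled XDR-TB prevalence above; df = 41 studies - 1 = 40
# (the df is an assumption, since it is not stated explicitly in the text).
print(f"I^2 = {i_squared(2120.6, 40):.1f}%")  # prints: I^2 = 98.1%
```

Values above roughly 75% are conventionally read as high heterogeneity, which is why the authors list it among the study's limitations.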
Beijing strains have several remarkable properties: (1) they are mostly associated with active TB, (2) they are associated with treatment failure and multiple drug resistance, (3) they are capable of efficient proliferation in lung macrophages and of spreading in the population, and (4) they are genetically unstable. In particular, mutT gene alleles cause drug resistance and alter bacterial morphology [57][58][59]. Numerous pieces of evidence have been reported regarding the relationship between the Beijing genotype and MDR-TB, such that this genotype can be considered a biomarker for drug-resistant TB [60][61][62]. We showed for the first time in a comprehensive analysis that the Beijing family is the most predominant genotype among XDR-TB strains. Based on the present results, the Beijing genotype can lead to the occurrence of several serious outbreaks in close geographical areas, and therefore, the identification and screening of these patients from an epidemiological point of view is an important strategy in the TB control program. However, our study had several limitations: (1) the sample size was small, (2) heterogeneity was significant, and (3) in some cases, publication bias was significant. We found that the frequency of the Beijing genotype among XDR-TB strains was high. The data in this study would help guide the TB control program; however, we need further investigation to confirm the reliability of the present findings.

Transparency declaration The authors have no conflict of interest.
2021-09-01T05:39:48.006Z
2021-08-01T00:00:00.000
{ "year": 2021, "sha1": "050078294ee4442d054ea557cdaa153f4de32a85", "oa_license": "CCBYNCND", "oa_url": "https://doi.org/10.1016/j.nmni.2021.100921", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "050078294ee4442d054ea557cdaa153f4de32a85", "s2fieldsofstudy": [ "Medicine", "Biology" ], "extfieldsofstudy": [ "Medicine" ] }
235683846
pes2o/s2orc
v3-fos-license
Exploiting the neuroprotective effects of α-klotho to tackle ageing- and neurodegeneration-related cognitive dysfunction Abstract Cognitive dysfunction is a key symptom of ageing and neurodegenerative disorders, such as Alzheimer's disease (AD). Strategies to enhance cognition would impact the quality of life for a significant proportion of the ageing population. The α-klotho protein may protect against cognitive decline through multiple mechanisms, such as promoting optimal synaptic function via activation of N-methyl-d-aspartate (NMDA) receptor signalling; stimulating the antioxidant defence system; reducing inflammation; promoting autophagy and enhancing clearance of amyloid-β. However, the molecular and cellular pathways by which α-klotho mediates these neuroprotective functions have yet to be fully elucidated. Key questions remain unanswered: which form of α-klotho (transmembrane, soluble or secreted) mediates its cognition-enhancing properties; what is the neuronal receptor for α-klotho, and which signalling pathways are activated by α-klotho in the brain to enhance cognition; how does peripherally administered α-klotho mediate neuroprotection; and what is the molecular basis for the beneficial effect of the VS variant of α-klotho? In this review, we summarise the recent research on neuronal α-klotho and discuss how the neuroprotective properties of α-klotho could be exploited to tackle age- and neurodegeneration-associated cognitive dysfunction.

Introduction Ageing is the primary risk factor for cognitive decline and most neurodegenerative disorders. Cognitive dysfunction is the major symptom of Alzheimer's disease (AD), as well as being prominent in other forms of dementia. Thus, strategies to enhance cognition would impact the quality of life for a significant proportion of the ageing population. α-klotho is a key anti-ageing gene: in mice its deficiency results in premature ageing and a short lifespan [1], while its overexpression extends lifespan [2,3].
In humans, a genetic variant of α-klotho is associated with enhanced cognition [3]. In mouse models, α-klotho protected against both age-associated decline in cognitive performance and neurodegenerative disease-associated cognitive dysfunction (reviewed in [4]). These observations have led to α-klotho being considered as a potential neuroprotective and cognition-enhancing agent. However, our understanding of the molecular and cellular mechanisms underpinning these observations is far from complete.

The klotho (KL) family of genes includes α-klotho, β-klotho and γ-klotho [5], which are all translated as single-pass transmembrane proteins. α-klotho is highly expressed in the brain and kidney, and to a lesser extent in other organs [6]. In the periphery, transmembrane α-klotho acts as a co-receptor for FGF23, increasing its binding affinity for fibroblast growth factor (FGF) receptors. β-klotho is predominantly expressed in the liver, with lower levels present in the gut, kidney and spleen, and mediates the activity of other members of the FGF family, mainly FGF-19 and FGF-21 [7,8]. γ-klotho, whose function is ill-defined, is expressed in the kidney and skin [6,7,9]. In this review, we outline the molecular and cellular properties of α-klotho (referred to hereafter as klotho), its neuroprotective functions and the role of the VS variant in enhancing cognitive ability. In addition, we highlight critical gaps in our knowledge of the mechanisms by which klotho confers neuroprotection; gaps which, if filled, may open new therapeutic approaches to mimic klotho activity in age- and neurodegeneration-associated cognitive dysfunction.

Molecular properties of klotho Proteolytic processing of klotho The α-klotho gene is located on chromosome 13 and is translated into a single-pass type 1 integral membrane protein. The klotho protein has a short intracellular domain (11 amino acids), a transmembrane domain (21 amino acids) and a large extracellular domain (980 amino acids; Figure 1A).
The extracellular domain contains two repeat sequences of ∼440 amino acids each, termed the KL1 and KL2 domains. The 135-kDa transmembrane protein can be proteolytically cleaved in the juxtamembrane stalk region to produce a 130-kDa soluble, shed form of klotho (referred to here as soluble klotho but sometimes in the literature as shed klotho; Figure 1A,B). This cleavage in the juxtamembrane stalk, known as the α-cleavage, is carried out by a disintegrin and metalloproteinase domain-containing protein (ADAM) 10 and/or ADAM17 [10]. A second cleavage, known as the β-cleavage, occurs between the KL1 and KL2 domains and is also likely carried out by ADAM10 or ADAM17. The deletion of the α-cleavage site results in reduced α- and β-cleavage products, suggesting that the α-cleavage mainly occurs prior to the β-cleavage [11,12]. It is unclear whether the intact KL1 and KL2 domains in the transmembrane and soluble forms of klotho have different properties to the individual KL1 and KL2 domains produced following β-cleavage. However, it should be noted that the β-cleavage appears to be a minor event (Figure 1B,C) and in most cells and body fluids the 130-kDa form is the predominant form of soluble klotho. The transmembrane klotho is also cleaved by the β-site amyloid precursor protein (APP) cleaving enzyme 1 (BACE1) to generate a soluble form, and the transmembrane and cytosolic stub resulting from either ADAM or BACE1 cleavage is subject to intramembrane proteolysis by the presenilin-containing γ-secretase complex [13]. Such multistep proteolytic processing, involving shedding of the ectodomain by an ADAM protease or BACE1 and then intramembrane proteolysis by the γ-secretase complex, is a common feature of many cell surface transmembrane proteins, including APP and notch [14].
The α-klotho gene encodes another isoform derived through alternative mRNA splicing of exon 3: a secreted form of 70 kDa, which contains the KL1 sequence with an additional unique C-terminal sequence of 15 amino acids [15] (Figure 1A-C). The secreted and soluble forms of klotho are found in the cerebrospinal fluid (CSF), blood and urine [16,17]. As discussed below for klotho, a key issue with proteins that exist in multiple forms due to alternative splicing and proteolytic processing is to assign a particular function to a particular form.

Does klotho have glycosidase activity?

The KL1 and KL2 domains have sequence similarity to glycosidases and have been reported to possess glycosidase activity, cleaving sialic acid from the carbohydrate chains attached to glycoproteins. For example, the glycosidase action of klotho on the calcium channels transient receptor potential cation channel subfamily V members TRPV5 and TRPV6 appears to allow their binding to galectin-1, leading to their clustering and retention on the plasma membrane, with a resultant increase in calcium channel activity [18,19]. However, the recent crystal structure of the extracellular domain of klotho revealed that both the KL1 and KL2 domains lack a key catalytic glutamate and have major conformational differences in the loops surrounding the catalytic pocket as compared with catalytically active glycosidases; differences that are incompatible with an intrinsic glycosidase activity [20]. Similar substitutions of key active-site residues in β-klotho also indicate that this protein cannot function as an active glycosidase [21]. Thus, it is unlikely that klotho itself has glycosidase enzymatic activity; more likely, the KL domains bind sugars on glycoproteins or glycolipids, promoting protein-protein or protein-lipid interactions, respectively.
Klotho in the brain

In the brain, the highest level of klotho expression is in the choroid plexus, although klotho is also expressed in several other brain regions, including the hippocampus, cortex, cerebellum, striatum, substantia nigra, olfactory bulb and medulla [22,23]. Klotho is mainly expressed in neurons and oligodendrocytes. Klotho expression in the brain starts in utero and continues to increase into adulthood [24,25]. However, klotho expression is reduced in the aged brain in monkeys, rats and mice [26] and in the CSF of humans [27]. The importance of klotho in healthy central nervous system (CNS) function was identified through klotho-deficient mice, which have significantly fewer Purkinje cells in the cerebellum [1], diminished axonal transport [28] and cognitive impairment [29]. Klotho is also required for the proliferation and maturation of adult hippocampal neural progenitor cells [30], oligodendrocyte maturation and myelin integrity [31]. The choroid plexus contains epithelial cells with tight junctions and supports the CNS by producing CSF and growth factors, as well as providing a gateway for the entry of immune cells into the CNS (reviewed in [32]). By analogy with the kidney, which produces soluble klotho for the blood circulation, the choroid plexus likely produces soluble and secreted klotho for the CSF [23]. Selective knockout of klotho in the choroid plexus of mice revealed the importance of choroid plexus-produced klotho [33]. Expression of the cytokine response factors intercellular adhesion molecule 1 (ICAM1) and interferon regulatory factor 7 (IRF7) was increased in the choroid plexus of a Flox/Cre klotho knockout mouse model [33]. This suggests that klotho plays a regulatory role in the expression of inflammation-related genes.
In the same model, decreased production of klotho in the choroid plexus caused enhanced macrophage infiltration into the CNS and promoted activation of microglia [33].

Cell surface receptors for klotho

The atomic structure of a 1:1:1 ternary complex of the extracellular domain of klotho, the FGFR1c ligand-binding domain and FGF23 has been determined, revealing that klotho functions as an on-demand scaffold protein that promotes FGF23 signalling [20]. Most of the known roles of the FGF/klotho complex are in the renal tubules of the kidney, where it aids phosphate regulation, vitamin D metabolism and the reabsorption of other ions [34]. The FGF/klotho complex also has a role in mediating cardiovascular homoeostasis via cardiomyocytes [35]. As the expression of klotho is limited to a few cell types, yet the protein affects the function of several non-klotho-expressing systems, it is likely that the soluble and secreted forms act as circulating hormones or ligands. However, the identity of the receptor(s) for the soluble and secreted forms of klotho remains unclear. Recently, klotho has been highlighted as a metabolic coupler between neurons and astrocytes [36]. Insulin acts upon neurons to stimulate the production and secretion of klotho, which in turn stimulates astrocytic aerobic glycolysis and lactate release via FGF receptor 1 (FGFR1) and extracellular signal-regulated kinase 1/2 (Erk1/2) activation [36]. There is also a case for a potential klotho receptor on endothelial cells, as they express FGFRs to which circulating klotho may bind, forming a complex that activates signalling pathways [37]. However, there is little evidence that FGF23 is active in the brain (reviewed in [23]). In HeLa cells and human embryonic kidney (HEK) cells, soluble klotho was reported to bind with a Kd of 3 μM to mono-sialogangliosides, which are highly enriched in the outer leaflet of cholesterol-rich lipid rafts [38].
Binding of soluble klotho to gangliosides modulated lipid raft organisation and inhibited lipid raft-dependent phosphoinositide 3-kinase (PI3K) signalling [38]. This implies that soluble klotho interacts with a raft-based protein or protein complex, possibly mediated by low-affinity interaction between the KL domains on klotho and mono-sialogangliosides on the membrane. However, in the brain, the cell surface receptors for klotho remain to be determined. Are there different receptors for the soluble and secreted forms of klotho? Are the receptors localised to one specific cell type in the brain? Unbiased screening approaches using the soluble and secreted forms of klotho as ligands will help answer these questions regarding the identity of the receptors for klotho in neurons and other cells of the CNS. Identification of the receptor(s) for klotho in the brain will aid in clarifying the potential molecular signalling pathways that the protein is involved in.

Signalling pathways modulated by klotho

Various signalling pathways have been reported to be activated by klotho, including PI3K/Akt, Erk1/2, Ask1/p38 mitogen-activated protein kinase (MAPK), protein kinase R (PKR)-like endoplasmic reticulum kinase (PERK), those linked to the insulin and insulin-like growth factor (IGF)-1 receptors and Wnt1 [23,39,40]. For example, overexpression of the soluble form of klotho has been shown to suppress insulin/IGF-1 signalling in mice [2]. In the periphery, soluble klotho modulated PI3K/Akt signalling, causing changes in calcium homoeostasis in cardiomyocytes [41], and decreased the abundance of the transient receptor potential cation channels TRPC6 and TRPC3 on the surface of podocytes via inhibition of PI3K-dependent exocytosis [42]. Recently, soluble klotho has been shown to down-regulate Orai-mediated store-operated Ca2+ entry via PI3K-dependent signalling [43]. Klotho is also involved in the regulation of cation channels such as Ca2+ and K+ channels [18].
For example, the purported sialidase activity of soluble klotho was observed to increase the abundance of K+ channels at the surface of non-neuronal cells via N-glycan modification [44]. In SH-SY5Y human neuroblastoma cells, expression of klotho blocked the thapsigargin-induced phosphorylation of the ER stress markers PERK and eukaryotic initiation factor 2α (eIF2α) [46]. Recombinant soluble klotho regulated several signalling proteins in rat oligodendrocytes in vivo and in a human oligodendrocytic cell line, including Wnt, nuclear factor κ-light-chain-enhancer of activated B cells (NFκB), p53, Akt and Erk [31,47]. In rat primary hippocampal neurons, soluble klotho increased phosphorylation of PI3K/Akt and Erk [48]. In contrast, up-regulation of klotho, through expression of the transmembrane form, in the brain of senescence-accelerated prone (SAMP) 8 mice, an accelerated ageing model, decreased PI3K/Akt and Forkhead box class O1 (FoxO1) phosphorylation [49]. In transgenic mice, klotho overexpression significantly protected dopaminergic neurons against oxidative stress, in part by modulating the activation of Ask1/p38 MAPK [50]. From these limited published studies it is clear that there are still significant gaps in our understanding of the signalling pathways modulated by klotho in the brain. In addition, several of the above studies are confounded by models in which klotho gene expression is increased, which will raise the levels of the transmembrane, soluble and secreted forms, making it difficult to distinguish which form of klotho is activating which signalling pathway (assuming that the various forms of klotho bind to separate receptors and/or modulate different signalling pathways).
Studies either with recombinant forms of klotho or using viral vectors to selectively express a particular form of klotho, combined with single-cell RNA sequencing, will help to clarify the relative contribution of the different forms of klotho in modulating key signalling pathways in defined target cells in the brain.

Effect of polymorphic variants of klotho on cognition

The most common klotho variant, KL-VS (which stands for klotho with valine and serine substitutions), consists of six single nucleotide polymorphisms (SNPs) that are always found together: three SNPs are in introns and do not alter splicing, the SNP at nucleotide 1155 causes no change in amino acid, while the other two SNPs result in the amino acid substitutions F352V and C370S, which are located in the KL1 domain and therefore occur in all forms of klotho (Figure 1A) [51-53]. In a transient transfection assay in HeLa cells, when incorporated into the secreted form of klotho, the F352V mutation on its own reduced the secretion of klotho 6-fold, whereas the C370S mutation on its own increased the amount of klotho secreted 2.9-fold [52]. The double mutation exhibited an intermediate phenotype (1.6-fold increase in secretion), providing an example of intragenic complementation in cis by human SNPs [52]. The KL-VS variant also increased klotho levels in the sera of humans [3]. When incorporated into the transmembrane form of klotho, the F352V mutation on its own reduced proteolytic shedding in HEK293 cells, whereas the C370S mutation or the VS double mutation did not alter shedding of the protein compared with wildtype [51]. The F352V mutation led to a shorter half-life, but again this was attenuated in the VS variant [51]. When overexpressed, the VS variant was present more as a monomer and less as a dimer, was a better binding partner for FGFR1, and enhanced FGFR heterodimerisation and thus FGF23 signalling [51].
KL-VS homozygosity is associated with a reduced lifespan [52,54] and decreased cognitive function [55]. In contrast, heterozygosity for the KL-VS allele has been shown to protect against age-associated cognitive decline [55,56]. Furthermore, the KL-VS genetic variant of klotho was associated with enhanced cognition in three independent human cohorts and in a meta-analysis [3]. Such observations have prompted investigation into KL-VS allele status and the incidence of neurodegenerative disease. In individuals over 60 years, the KL-VS haplotype was associated with reduced risk of AD in the presence of apolipoprotein (Apo) Eε4 [57]. KL-VS heterozygosity in ApoEε4 individuals reduced the risk of progressing to mild cognitive impairment or AD, alongside increased amyloid-β levels in the CSF and reduced amyloid-β on positron emission tomography scans [57]. The higher levels of amyloid-β in the CSF may be due to enhanced clearance from the brain. In ApoEε4 individuals with the KL-VS variant, the amyloid-β burden did not exceed that of ApoEε4-negative individuals, suggesting heterozygosity of the VS haplotype may protect against ApoEε4-associated AD onset [58]. In a study of over 200 older adults, total tau and phosphorylated tau levels and cognitive deficits were reduced in KL-VS heterozygotes compared with non-carriers [59]. Heterozygosity of the KL-VS allele was correlated with a greater volume in the right dorsolateral prefrontal cortex (rDLPFC) and enhanced executive function [55]. The rDLPFC is vulnerable to pathology and atrophy in AD [60]. In contrast, KL-VS homozygosity was associated with a smaller rDLPFC volume and decreased executive function. Further investigation revealed that higher systemic klotho, via KL-VS heterozygosity, predicted greater connectivity between the rDLPFC and functional networks throughout the brain, including the anterior cingulate cortex and the right middle frontal gyrus [61].
In a separate study, individuals with KL-VS heterozygosity, relative to non-carriers, had slower cognitive decline and greater right frontal lobe volumes, but also smaller white matter volumes and shorter survival [62]. Longitudinal cognitive trajectories indicated that KL-VS heterozygosity confers an advantage in very late life, leading to the suggestion that the genotype-survival advantage of the KL-VS allele is age-dependent and mediated through differential cognition and brain volume [62]. Recently, no association of KL-VS heterozygosity was found with cognition or brain structure in children and adolescents [56]. Other studies have assessed the association of the KL-VS haplotype with cognitive ability in the same individuals at age 11 and again at age 79 [54]. From these various studies, KL-VS heterozygosity appears to be protective in later life against age-related and neurodegeneration-associated cognitive decline, although the underlying molecular and cellular mechanisms by which this double mutation in klotho brings about these beneficial properties have yet to be understood. The klotho SNP G395A is located in the promoter region and confers a higher affinity for transcription factors compared with wildtype, so was hypothesised to be a functional variant [63]. The G395A polymorphism is associated with reduced cognitive impairment in people over 90 years of age, as assessed by the mini-mental status examination (MMSE) [64]. The MMSE score indicated no difference in populations between 60 and 79 years with the G395A polymorphism; however, the intelligence quotient level was enhanced [65]. These data suggest that the G395A polymorphism may also be cognitively protective only in older people.
Neuroprotective effects of klotho

Despite the lack of information regarding the cell surface receptors and signalling pathways for klotho in the brain, studies with transgenic mice overexpressing the klotho gene have shed light on the neuroprotective properties of klotho and, when crossed with mouse models of neurodegenerative diseases, have highlighted the potential beneficial effect arising from enhancing klotho expression in such disorders (summarised in Figure 2). Transgenic mice that overexpress klotho throughout the body performed better in multiple tests of learning and memory than control mice [3]. Elevated klotho enhanced long-term potentiation, a form of synaptic plasticity widely studied as a cellular model for learning and memory. This cognition-enhancing effect of klotho was mediated via stimulation of the N-methyl-d-aspartate (NMDA) receptor subunit GluN2B [3]. Klotho-overexpressing mice had increased GluN2B synaptic expression in both the hippocampus and the frontal cortex [3]. Klotho elevation also increased expression of FOS, which is involved in memory consolidation and increased by NMDA receptor activation [3]. In wildtype mice, adenovirus expression of secreted klotho resulted in enhanced learning and memory 6 months after a single adenovirus injection into the CNS [66]. Viral expression of secreted klotho in the cornu ammonis 1 (CA1) region of the hippocampus improved performance on the object recognition test and enhanced hippocampal synaptic transmission [67]. When klotho-overexpressing transgenic mice were crossed with human APP transgenic mice, a model that displays AD-like pathology and behavioural deficits, the increased klotho levels ameliorated the cognitive deficits seen in the human APP transgenic mice, independently of amyloid-β accumulation [68]. In the klotho/human APP transgenic mice, GluN2B was enriched in post-synaptic densities and NMDA receptor-dependent synaptic plasticity in the hippocampus was enhanced [68].
Oxidative stress has long been implicated in ageing-related cognitive impairment in both old experimental animals and aged humans [69]. For example, oxidative damage to the synapse in the cerebral cortex and hippocampus during ageing contributes to the deficit of cognitive functions [70] and increased oxidative stress was associated with cognitive decline in a healthy population [71]. Oxidative stress contributed to the ageing-associated cognitive impairment in klotho mutant mice [29] and klotho knockout mice had a generalised increase in the global burden of oxidative stress in the CNS [2], indicating that klotho exerts antioxidant effects in the brain. Furthermore, lentivirus-mediated up-regulation of the transmembrane form of klotho improved ageing-related memory deficits and reduced oxidative stress in senescence-accelerated mice [49]. Rat primary hippocampal neurons treated with soluble klotho were protected against glutamate-induced and amyloid-β-induced oxidative damage, in part through regulation of the redox system via Akt-dependent induction of the thioredoxin/peroxiredoxin system [48]. Recently, recombinant soluble klotho was found to protect SH-SY5Y human neuroblastoma cells against amyloid-β toxicity through decreasing reactive oxygen species and increasing superoxide dismutase activity [45]. In addition, klotho reduced multiple inflammatory markers, NFκB, interleukin-1β and tumour necrosis factor-α (TNF-α), in cells exposed to amyloid-β [45]. In APP/PS1 mice intracerebral overexpression of full-length klotho cDNA by lentivirus injection ameliorated amyloid-β burden, neuronal and synaptic loss, and the cognitive deficits observed in this model of AD [72]. 
The klotho treatment significantly inhibited the NACHT (neuronal apoptosis inhibitory protein, MHC class II transcription activator, incompatibility locus protein, and telomerase-associated protein), LRR (leucine-rich repeat) and PYD domain-containing protein 3 (NLRP3) inflammasome, with subsequent transformation of microglia to the M2 type that may enhance microglia-mediated amyloid-β clearance [72]. In addition, klotho knockdown in primary human choroid plexus epithelial cells impaired their ability to transport amyloid-β [72]. (Figure 2 summarises the neuroprotective effects reported for full-length, secreted and soluble klotho, in mice and in cell culture.) Also, in APP/PS1 mice, intracerebroventricular injection of a lentiviral vector encoding klotho ameliorated the cognitive deficit and AD-like pathology in mice 3 months later [73]. Klotho induced autophagy activation and protein kinase B/mammalian target of rapamycin inhibition, suggesting that up-regulation of klotho in the brain promotes the autophagic clearance of amyloid-β and protects against cognitive deficits [73].
From these various studies, klotho appears to convey neuroprotection against cognitive decline through multiple mechanisms: (i) promoting optimal synaptic function via activation of NMDA receptor signalling; (ii) stimulating the antioxidant defence system; (iii) reducing inflammation; (iv) promoting autophagy and (v) enhancing amyloid clearance. However, further work is required to validate many of these findings and to determine which are the key mechanisms responsible for the cognitive-enhancing effects of klotho in vivo.

Approaches to increase klotho levels in the brain

Notwithstanding the limited knowledge of how klotho mediates neuroprotection and cognitive enhancement, the above observations (summarised in Figure 2) have led to the klotho pathway being considered as a potential therapeutic target for enhancing cognitive function [74]. As discussed above, overexpression of klotho using genetic approaches has provided convincing evidence that increasing klotho in the brain can enhance cognition and potentially reverse the cognitive decline associated with ageing and AD. Several of these studies [3,49,68] have increased klotho expression throughout the body, so it is not clear whether the effects observed are due to increasing klotho in the CNS or the periphery. Targeted viral vector administration of klotho to discrete regions in the CNS has shown that klotho has direct beneficial actions on cells in the brain [66,67,73]. However, such genetic approaches to increasing klotho would be problematic in humans. An alternative approach is to administer recombinant forms of klotho. This is exemplified in the study where soluble klotho administered peripherally induced cognitive enhancement and neural resilience in young, aged and transgenic α-synuclein mice [75]. This occurred through activation of the NMDA receptor subunit GluN2B, with resultant enhancement of NMDA receptor-dependent synaptic plasticity [75].
Selective blockade of GluN2B subunits with the highly specific antagonist Ro 25-6981 abolished this acute effect of soluble klotho [75]. An intriguing aspect of this study was the ability of the peripherally administered klotho to cause an effect in the brain without seeming to cross the blood-brain barrier (BBB) [75]. This raises the possibility that the peripherally administered klotho may be acting in the cerebrovasculature, possibly directly on endothelial cells or other components of the neurovascular unit (pericytes, astrocytes), which then signal to the nearby neurons. Clearly, further work is required to validate the ability of peripherally administered klotho to activate NMDA receptors in the brain without crossing the BBB. Another approach is to use small molecules to pharmacologically increase the expression of all forms of klotho or to selectively increase the soluble or secreted forms. In SAMP8 mice, the compound ligustilide elevated levels of klotho in the serum and choroid plexus, and reduced memory deficits and neuron loss [76]. Ligustilide inhibited the IGF1 pathway and induced FoxO1 activation, in addition to up-regulating klotho expression, in HEK293T cells [76]. Similarly, tetrahydroxystilbene glucoside was identified through studies on SAMP8 mice as increasing lifespan and increasing the level of neural klotho [77]. Using the klotho promoter to drive expression of luciferase, a high-throughput screen identified small molecules that promote klotho transcription [78]. FGF23 signalling assays and phosphorylation of Erk were assessed to determine that the increased klotho expression resulted in a functional change [78]. Whether any of the hits identified through this screen have progressed into in vivo studies has yet to be reported.
In addition, it remains to be determined whether the expression-enhancing effects of ligustilide, tetrahydroxystilbene glucoside or other compounds identified through such genetic screens act solely via klotho or through activation of multiple genes. Pharmacologically promoting the proteolytic shedding of transmembrane proteins can have beneficial effects; for example, promoting the shedding of the prion protein through activation of ADAM10 with carbachol or acitretin reduces the binding and cytotoxicity of amyloid-β oligomers [79]. The proteolytic shedding of klotho can be stimulated with insulin [11] or the muscarinic agonist carbachol acting via activation of ADAM10 (Figure 3), indicating that it is feasible to increase the level of soluble klotho through pharmacologically enhancing ADAM10 and/or ADAM17 activity. As activation of ADAM10 also promotes the shedding of APP and increases the level of the neuroprotective soluble APPα fragment [80], in addition to reducing the toxicity of amyloid-β oligomers through promoting the shedding of the prion protein [79], this approach would lead to neuroprotection through multiple routes. However, as both ADAM10 and ADAM17 have numerous other substrates, including some involved in tumourigenesis, activation of these proteases as a therapeutic approach has been questioned [81]. Finally, non-invasive and non-pharmacological approaches have been reported to increase klotho. A recent study showed that blood klotho concentrations were increased after 2 weeks of moderate-intensity training in men [82] and even after a single bout of high-intensity exercise [83,84]. The effect of diet on the expression of klotho has also been investigated. A low-calorie, high-protein diet significantly increased klotho expression in the brain of old rats and enhanced performance in the object recognition memory test [85].
However, such approaches as changes in exercise and diet will have multiple effects in the body, so linking a beneficial effect on cognition directly to alterations in klotho levels will be challenging.

Concluding remarks

There is a growing body of evidence for klotho having cognitive-enhancing properties, from genetic studies on the KL-VS variant to experiments directly increasing the level of klotho in the brain. However, several important questions on the mechanisms by which klotho acts remain unanswered. For example, which form of klotho (transmembrane, soluble or secreted) mediates its cognitive enhancing properties? Can the different forms substitute for each other? What is the identity of the receptor(s) in the brain for the soluble and secreted forms of klotho, and which signalling pathway(s) is activated by them in the brain to enhance cognition? What is the molecular basis for the beneficial effect of the heterozygous VS variant of klotho? Is it a gain of function, a loss of function or both? Given that KL-VS homozygosity appears to be detrimental, would too much klotho or overstimulation of its signalling pathways have toxic effects or be counterproductive to enhancing cognition? How does peripherally administered klotho mediate neuroprotection? Does it cross the BBB or act on non-neuronal cells in the neurovascular unit? Once more details are uncovered on the molecular and cellular mechanisms of action of the soluble and secreted forms of klotho in the brain, more targeted approaches to mimic the actions of klotho may be realised, which will enable us to exploit its neuroprotective properties to tackle age- and neurodegeneration-associated cognitive dysfunction.
Joint Scheduling and Transmission Power Control in Wireless Ad Hoc Networks

In this paper, we study how to determine concurrent transmissions and the transmission power level of each link to maximize spectrum efficiency and minimize energy consumption in a wireless ad hoc network. The optimal joint transmission packet scheduling and power control strategy is determined when the node density goes to infinity and the network area is unbounded. Based on the asymptotic analysis, we determine the fundamental capacity limits of a wireless network, subject to an energy consumption constraint. We propose a scheduling and transmission power control mechanism to approach the optimal solution to maximize spectrum and energy efficiencies in a practical network. The distributed implementation of the proposed scheduling and transmission power control scheme is presented based on our MAC framework proposed in [1]. Simulation results demonstrate that the proposed scheme achieves 40% higher throughput than existing schemes. Also, the energy consumption using the proposed scheme is about 20% of the energy consumed using existing power saving MAC protocols.

I. INTRODUCTION

The increasing number of mobile devices and volume of mobile Internet traffic necessitate dense deployment of Internet access points (APs) in an ad hoc manner to increase network capacity via shorter communication links [2]. Also, diverse peer-to-peer communications [3], [4] are emerging to increase spectrum and energy efficiencies via shorter communication links and to interconnect several billion physical objects and integrate them into the existing networks. Such a dense and dynamic network of mobile nodes and APs and diverse peer-to-peer communications requires the establishment of an effective ad hoc network to efficiently utilize radio spectrum and to minimize energy consumption.
In a wireless network, the data rate and energy consumption of a link depend on the transmission power level of the source node, the distance between the source and destination, and the amount of interference at the destination node. The amount of interference at a destination depends on the distance to interfering source nodes and their transmission power levels. Thus, the achievable data rates and energy consumption of transmitting links are interrelated. The set of concurrent transmissions and the transmission power level of each source should be properly determined to efficiently utilize radio spectrum and reduce energy consumption. In addition, a radio interface consumes a significant amount of energy in the idle mode, in which it is not transmitting/receiving a packet. The energy consumption can be reduced by putting the radio interface in a sleep mode; however, the awake and active times of the radio interface should be properly scheduled to avoid missing incoming packets [1], [5]- [8]. Although increasing spatial reuse allows more concurrent transmissions, it also decreases the signal-to-noise-plus-interference ratio (SINR) at the receivers. Therefore, the data rate of each transmission decreases as a result of a lower SINR. The trade-off between the increased spatial reuse and the decreased data rate has been studied in [9], [10] when using a CSMA/CA MAC. It is shown that the network capacity depends only on the ratio of the transmission power level to the carrier sensing threshold (i.e., the carrier sensing range). It is proposed that all nodes use the same carrier sensing threshold and that each source node adjusts its transmission power level based on its distance from the destination. However, when only carrier sensing is used, the transmission rates must be adjusted for the worst-case interference to ensure successful reception of packets at the receiver.
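The spatial-reuse/SINR trade-off can be made concrete with a toy interference model. This is an illustrative sketch only: the power-law path loss, the path-loss exponent and all numeric values below are assumptions chosen for demonstration, not parameters from [9], [10].

```python
import math

def sinr(p_tx, d_link, interferers, noise=1e-13, alpha=4.0):
    """SINR at a receiver: received signal power over noise plus summed interference.

    p_tx: transmit power (W); d_link: source-destination distance (m);
    interferers: list of (power, distance) pairs for concurrent transmitters;
    alpha: path-loss exponent. Received power is modelled as p / d**alpha.
    """
    signal = p_tx / d_link ** alpha
    interference = sum(p / d ** alpha for p, d in interferers)
    return signal / (noise + interference)

# Shannon spectral efficiency (bit/s/Hz) of one link, alone vs. with two
# concurrent transmitters sharing the channel: spatial reuse adds links but
# lowers the per-link rate.
rate_alone = math.log2(1 + sinr(0.1, 50, []))
rate_shared = math.log2(1 + sinr(0.1, 50, [(0.1, 200), (0.1, 300)]))
assert rate_shared < rate_alone
```

The closer or more powerful the interferers, the larger the rate penalty, which is exactly the quantity the scheduler must trade against the number of simultaneous links.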
As a result, transmission power control schemes in which nodes independently choose their transmission power levels cannot fully utilize the network capacity. Also, the CSMA-based MAC protocols provide poor spatial spectrum reuse due to the hidden and exposed node problems [1], [11]. Centralized scheduling and transmission power control for wireless ad hoc networks are proposed in [12], [13]. The optimal scheduling and transmission power control to maximize total throughput in a two-cell, two-link wireless network have been studied in [14]. In a network with two links, maximizing total throughput leads to binary power control; that is, each link should transmit at either the maximum power level or the minimum power level [14]. Motivated by this optimality, binary power control is also proposed for multi-cell networks with more than two links in [15]. The effect of the transmission power level on total energy consumption depends on the energy consumption pattern of the radio interface [16]-[19]. The energy consumption has two components: the energy consumed in the radio interface circuit, and the energy consumed in the amplifier. When the energy consumption in the amplifier dominates the energy consumed in the radio interface circuit, the energy consumption per transmitted data bit in a two-link network can be reduced by decreasing the transmission power level [16]. However, when the energy consumption in the radio interface circuit is much larger than the energy consumption in the amplifier, minimizing the energy consumption per transmitted data bit in a two-link network is equivalent to maximizing network throughput [16]. Generally, the transmission power level for minimal energy consumption depends on the energy consumption pattern of the radio interface and the network condition. Thus, transmission at the minimum power level (as in [20]-[23]) does not always reduce the energy consumption.
In [1], we present a novel MAC scheme for a wireless ad hoc network. All node transmissions are dynamically scheduled by a set of coordinator nodes that are distributed over the network coverage area. A coordinator node monitors source nodes' transmission requests in its proximity, actively exchanges scheduling information with its adjacent coordinators, and periodically determines contention-free transmission/reception times for nodes in its vicinity. For each scheduled transmission, an appropriate spatial area around the receiver node is reserved to guarantee the required link SINR and enhance spatial spectrum reuse. Moreover, the deterministic data transmission times allow nodes to stay awake only when they are transmitting/receiving a packet, minimizing idle-listening energy consumption. In this paper, we study efficient joint transmission scheduling and power control in a wireless ad hoc network. We show that the asymptotically optimal scheduling and transmission power control can be determined when the node density in the network goes to infinity and the network area is unbounded. By analyzing the asymptotic optimal solution, we determine the fundamental limits of the maximum spectrum and energy efficiencies in a wireless network. To approach the maximum spectrum and energy efficiencies in a practical network, we assign a transmission power level and a target interference power level to each link, determined based on the asymptotic optimal values. The concurrent transmissions at each time slot are scheduled such that the actual power of interference at the scheduled destination nodes is close to the target interference levels, for efficient spectrum and energy utilization. We present a distributed implementation of the proposed scheduling and transmission power control scheme based on our MAC framework proposed in [1].
The main contributions of this paper can be summarized as follows: 1) We analyze asymptotic joint optimal scheduling and transmission power control, and determine the fundamental limits of network capacity, subject to an energy efficiency constraint; 2) Based on the asymptotic optimal solution, we propose a novel scheduling and transmission power control framework to approach the maximum spectrum and energy efficiencies in a practical network. Also, we present a distributed implementation of the proposed scheme using only local network information; 3) The throughput and energy consumption of our proposed scheduling and transmission power control framework are evaluated in comparison with existing schemes. A new scheduling efficiency metric is introduced to compare the efficiency of different schemes against the asymptotic optimal solution. The rest of this paper is organized as follows: The system model is presented in Section II. In Section III, we analyze asymptotic joint optimal scheduling and transmission power control and determine the maximum spectrum and energy efficiencies in the wireless network. We propose a scheduling and transmission power control framework to approach the optimal solution in a practical network in Section IV. Simulation results are presented in Section V. Finally, Section VI concludes this paper. II. SYSTEM MODEL Consider a wireless ad hoc network where all network nodes use a shared radio channel for transmissions. We focus on single-hop transmissions as, at the MAC layer, each node communicates with one or more of its one-hop neighboring nodes. Nodes are randomly distributed in the network area, and the destination of each source node is randomly selected from the remaining nodes within the maximum data transmission distance $d_{max}$. Let L denote the number of links and $l \in \{1, 2, ..., L\}$ denote a link; the source and destination nodes of link l are denoted by $S_l$ and $D_l$, respectively.
Network links are considered directional (i.e., transmission from a source to a destination node). Bidirectional communications (such as a TCP connection) between two nodes are handled by scheduling two different directional links. We denote the distance from the source node of link l to the destination node of link k by $d_{lk}$, and the associated channel gain is $h_{lk} = c\,d_{lk}^{-\alpha}$, where c is a constant and $\alpha$ is the path-loss exponent. Time is partitioned into slots of constant duration. Consider a scheduling interval of T slots, and let $t \in \{1, 2, ..., T\}$ denote the time slot index. We assume that $d_{lk}$, with $l, k \in \{1, 2, ..., L\}$, is constant over the T time slots. Let $\bar{\gamma} = [\gamma_{lt}]_{L\times T}$ denote the transmission power matrix, where $\gamma_{lt}$ denotes the transmission power level of the source node of link l at time slot t. Let $\bar{u} = [u_{lt}]_{L\times T}$ denote the scheduling matrix, where $u_{lt} = 1$ if link l is scheduled for transmission at time slot t and $u_{lt} = 0$ otherwise. A scheduled link transmits a data packet during each time slot in which it is scheduled. The SINR at the destination of link l at slot t is given by $\eta_{lt} = \frac{u_{lt}\gamma_{lt}h_{ll}}{N_0 + \sum_{k\neq l} u_{kt}\gamma_{kt}h_{kl}}$, where $N_0$ is the background noise power and $\sum_{k\neq l} u_{kt}\gamma_{kt}h_{kl} \triangleq I_{lt}$ is the power of interference at the destination. The achievable channel rate in bit/s/Hz over link l at slot t, using the Shannon formula, is $R_{lt} = \log_2(1 + \eta_{lt})$, and the average data rate of link l can be written as $R_l = \frac{1}{T}\sum_{t=1}^{T} R_{lt}$. A radio interface can be in transmit, receive, idle and sleep modes. The power consumption of a radio interface transmitting at power level $\gamma$ is $\Gamma_c + g_a\gamma$, where $\Gamma_c$ is the circuit power consumption and $g_a > 1$ is the inverse of the power efficiency of the radio interface amplifier. The power consumption in the receive and idle modes is $\Gamma_c$, and in the sleep mode it is $\Gamma_0$. Each node puts its radio interface in the sleep mode when it is not transmitting/receiving data to save energy.
Thus, the sum of the power consumption (in Joule/s) at the source and destination nodes of link l at slot t is $P_{lt} = u_{lt}(2\Gamma_c + g_a\gamma_{lt})$, and the average power consumption of link l is $P_l = \frac{1}{T}\sum_{t=1}^{T} P_{lt}$. The average energy consumed per transmitted bit (in Joule/(bit/Hz)) on link l can be written as $E_l = P_l/R_l$. Joint optimal scheduling and transmission power control is to find a scheduling matrix and a transmission power matrix that maximize the network objective function (1), where $w_l \in [0, \infty)$ is the weighting factor of the data rate of link l, $\hat{R}_l$ denotes the maximum required data rate of link l, and $\hat{E}_l$ denotes the maximum energy-consumption-per-bit constraint of link l. To find an optimal solution of (1), we need to solve a non-convex mixed-integer non-linear problem, which is known to be NP-hard [24], [25]. III. ASYMPTOTIC JOINT OPTIMAL SCHEDULING AND TRANSMISSION POWER CONTROL In this section, we study scheduling and transmission power control in the wireless network as the node density goes to infinity and the network area is unbounded. Consider a symmetric link scheduling in an unbounded network area as illustrated in Figure 1. The network area is partitioned into equal-size hexagonal cells and one link is scheduled inside each cell. The source-destination distance is the same for all links, and the position of every scheduled link with respect to all other scheduled links is identical. Due to the symmetry of the scheduled links, the optimal transmission power should be the same for every scheduled link. Thus, the asymptotic optimal joint scheduling and transmission power control is to find a cell size and a transmission power level that maximize the network objective function. In the following, we analyze the spectrum and energy efficiencies in the network as the cell size and transmission power level vary, in order to determine the optimal scheduling and transmission power control.
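As a concrete illustration of the quantities defined above, the sketch below (with made-up sizes and parameter values, not taken from the paper) computes the per-slot SINR, the average link rates, and the energy per transmitted bit for random scheduling and power matrices:

```python
import numpy as np

rng = np.random.default_rng(0)
L, T = 4, 3                        # links, time slots (illustrative)
alpha, c, N0 = 3.5, 1.0, 1e-12     # path-loss exponent, gain constant, noise (W)
Gamma_c, g_a = 0.1, 2.0            # circuit power (W), inverse amplifier efficiency

d = rng.uniform(10.0, 100.0, size=(L, L))           # d[l, k]: S_l -> D_k distance (m)
h = c * d ** (-alpha)                               # gains h_lk = c * d_lk^-alpha
u = rng.integers(0, 2, size=(L, T)).astype(float)   # scheduling matrix u_lt in {0,1}
gamma = rng.uniform(0.01, 0.1, size=(L, T))         # transmission powers gamma_lt (W)

sig = u * gamma * np.diag(h)[:, None]     # received signal power u_lt*gamma_lt*h_ll
rx = ((u * gamma).T @ h).T                # rx[l, t] = sum_k u_kt*gamma_kt*h_kl
eta = sig / (N0 + rx - sig)               # SINR: interference excludes own signal
R = np.log2(1.0 + eta)                    # Shannon rate per slot (bit/s/Hz)
R_avg = R.mean(axis=1)                    # R_l = (1/T) * sum_t R_lt

P = u * (2 * Gamma_c + g_a * gamma)       # P_lt = u_lt * (2*Gamma_c + g_a*gamma_lt)
E = P.mean(axis=1) / np.maximum(R_avg, 1e-12)   # E_l = P_l / R_l, J/(bit/Hz)
```

Setting $u_{lt} = 0$ zeroes both the rate and the power terms of a link in that slot, consistent with the definitions of $R_{lt}$ and $P_{lt}$ above.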
Let d denote the distance between the source and destination of a link, $r_g$ the distance between the center and a vertex of a cell, and $\gamma$ the transmission power of every scheduled source node. The signal power at a destination node is $\gamma^{(r)} = c\gamma d^{-\alpha}$. Let $d_{i0}$, $i \in \{1, 2, ...\}$, denote the distance from the source node of an interfering link to the destination node of a target link. Using unit vectors v and w, these distances can be expressed as in (2), where $\|\cdot\|$ denotes the Euclidean distance. By changing coordinates in (2), the interference power at a destination node can be calculated as in (3). With the assumption that $I \gg N_0$ (an interference-dominated network), the SINR at a destination node is given by (4). Also, with frequency reuse, the network space occupied by each scheduled link is given by (5). Using (4) and (5), the total data rate (bit/s/Hz) per unit network area can be written as in (6). According to (6), the total data rate depends only on the ratio $r_g/d$ and can be maximized by choosing $r_g$ to maximize the term $\log_2(1+F(r_g))$ normalized by the cell area, as plotted in Figure 2 for different path-loss exponent values. Also, the maximum achievable data rate is inversely proportional to the square of d. On the other hand, the energy consumption per transmitted data bit (Joule/(bit/Hz)) is given by (7), where $2\Gamma_c + g_a\gamma$ denotes the sum of the power consumption of the source and destination nodes of a scheduled link (with the assumption that the power consumption of a source node is $\Gamma_c + g_a\gamma$, of a destination node is $\Gamma_c$, and of a non-scheduled node is negligible). According to (7), the energy consumption per transmitted data bit decreases as the distance between scheduled links increases (i.e., as $r_g$ increases). We set the objective of joint scheduling and transmission power control to maximize the total data rate per unit network area, while keeping the energy consumed per transmitted data bit below a threshold, $\hat{E}$, as an energy efficiency constraint. That is, we solve (8), where $\eta_{min}$ is the minimum required SINR at a destination node for successful signal detection.
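The lattice interference sum behind (3)-(4) can be checked numerically. In the sketch below (illustrative constants; interferer-to-destination distances approximated by the spacing between cell centers), the aggregate interference dominates the noise power $N_0$, and the SINR grows as $r_g$ increases relative to d, consistent with the discussion above:

```python
import numpy as np

def interference(r_g, gamma, alpha=3.5, c=1.0, rings=30):
    """Sum c*gamma*d^-alpha over triangular-lattice interferers (spacing sqrt(3)*r_g)."""
    s = np.sqrt(3.0) * r_g
    m, n = np.meshgrid(np.arange(-rings, rings + 1), np.arange(-rings, rings + 1))
    x = s * (m + 0.5 * n)
    y = s * (np.sqrt(3.0) / 2.0) * n
    d2 = (x * x + y * y).ravel()
    d2 = d2[d2 > 0]                      # exclude the target link's own cell
    return float(np.sum(c * gamma * d2 ** (-alpha / 2.0)))

d, gamma, N0 = 20.0, 0.05, 1e-12         # link distance (m), tx power (W), noise (W)
sinrs = {}
for r_g in (30.0, 60.0, 120.0):
    I = interference(r_g, gamma)
    sinrs[r_g] = gamma * d ** (-3.5) / (N0 + I)   # interference-dominated SINR
```

Doubling the cell size raises the SINR of every link but quarters the number of concurrent links per unit area, which is exactly the trade-off that choosing $r_g$ resolves.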
The objective function in (8) is consistent with (1), in which $w_l = 1$, $\hat{R}_l = \infty$, and $\hat{E}_l = \hat{E}$ for every link l. We numerically solve (8) using a brute-force search over discrete values of $\gamma$ and $r_g$. Also, an alternative way to solve (8) based on the Lagrange multiplier method and the Karush-Kuhn-Tucker (KKT) conditions is discussed in the Appendix. Figure 3 shows the spectrum efficiency and energy consumption per bit with optimized transmission power and cell size, as the energy consumption constraint $\hat{E}$ varies. IV. SCHEDULING AND TRANSMISSION POWER CONTROL In a practical wireless network, scheduled links likely cannot be placed in a symmetric manner, because the node density is finite and the link distances are not identical. Also, scheduling and transmission power control should be adaptive, as node locations and traffic load vary over time. As discussed in Section II, the optimal scheduling and transmission power control are in general solutions of an NP-hard problem. Thus, we develop a heuristic scheduling and transmission power control framework based on the asymptotic optimal solution. The data rate and energy consumption of a link depend on the transmission power of the source and the power of interference at the destination node. We schedule links for transmissions such that the transmission powers of the source nodes and the powers of interference at the destination nodes follow the asymptotic optimal values. For this purpose, we assign a transmission power level to the source and a target interference power level to the destination of each link, following the values that maximize the asymptotic spectrum efficiency while satisfying the energy consumption per bit constraint of the link. Then, we schedule concurrent links for transmissions such that the actual power of interference at the destination of each scheduled link is not larger than, but as close as possible to, the determined target interference power of the link.
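The brute-force search over discrete $(\gamma, r_g)$ values mentioned above can be sketched as a plain grid search; all constants, grid ranges, and the lattice interference model below are our own illustrative assumptions, not values from the paper:

```python
import numpy as np

alpha, c, N0 = 3.5, 1.0, 1e-12
d, Gamma_c, g_a = 20.0, 0.1, 2.0       # link distance (m), circuit power (W), 1/amp-eff
E_hat, eta_min = 0.2, 1.0              # energy cap J/(bit/Hz), minimum SINR

def lattice_I(r_g, gamma, rings=15):
    """Aggregate interference from co-scheduled links on a hexagonal lattice."""
    s = np.sqrt(3.0) * r_g
    m, n = np.meshgrid(np.arange(-rings, rings + 1), np.arange(-rings, rings + 1))
    x, y = s * (m + 0.5 * n), s * (np.sqrt(3.0) / 2.0) * n
    d2 = (x * x + y * y).ravel()
    return float(np.sum(c * gamma * d2[d2 > 0] ** (-alpha / 2.0)))

best = None                            # (rate density, gamma, r_g)
for gamma in np.geomspace(1e-3, 1.0, 12):
    for r_g in np.linspace(25.0, 200.0, 16):
        eta = c * gamma * d ** (-alpha) / (N0 + lattice_I(r_g, gamma))
        rate = float(np.log2(1.0 + eta))
        cell_area = 1.5 * np.sqrt(3.0) * r_g ** 2   # hexagon with vertex radius r_g
        E = (2 * Gamma_c + g_a * gamma) / rate      # energy per bit, J/(bit/Hz)
        if eta >= eta_min and E <= E_hat and (best is None or rate / cell_area > best[0]):
            best = (rate / cell_area, gamma, r_g)
```

The search keeps the densest feasible operating point; tightening `E_hat` pushes the optimum toward larger cells and lower power, mirroring Figure 3's trend.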
If the actual interference at a destination node is more than the target interference power, the data will not be successfully decoded at the receiver (because the actual SINR at the destination node will be lower than the target SINR value used to adjust the transmission data rate at the source node). However, it is desired to schedule links such that the actual interference at the destinations is close to the target interference of the scheduled links, in order to allow more concurrent transmissions. A. Transmission power and target interference power We determine the transmission power and target interference power for a link based on the levels that maximize the asymptotic spectrum efficiency (data rate per unit area) while maintaining the energy consumption per bit of the link below a threshold. Using (4), we obtain (9). By substituting (9) into (6) and (7), for transmission between a pair of source and destination nodes with distance $d_{ll}$, setting the transmission power to $\gamma_l$ and the target interference power to $\tilde{I}_l$ yields the asymptotic spectrum efficiency (10) and the energy consumption per transmitted bit (11). According to (10), the asymptotic spectrum efficiency is inversely proportional to the square of the link distance, $d_{ll}^2$, and depends on the ratio of transmission power to target interference power, $\gamma_l/\tilde{I}_l$. Also, the optimal ratio $\gamma_l/\tilde{I}_l$ depends on the link distance $d_{ll}$. In a practical wireless network, the distances between the source and destination nodes of different links are in general different. Thus, the desired ratios of transmission power to interference power for links with different distances are different. Given these different desired ratios, the transmission power and target interference power values should be prudently chosen such that links can be scheduled with actual interference power close to the target interference level at every scheduled link, for efficient spatial spectrum reuse.
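The trade-off captured by (10) and (11) can be explored with a small grid search over $(\gamma_l, \tilde{I}_l)$. The footprint model below, area $\propto (\gamma_l/\tilde{I}_l)^{2/\alpha}$, is our own simplification of the occupied-space term rather than the paper's exact expression, so the numbers are only qualitative:

```python
import numpy as np

alpha, c = 3.5, 1.0
d_ll, Gamma_c, g_a = 20.0, 0.1, 2.0    # link distance (m), circuit power (W), 1/amp-eff
E_hat = 0.5                            # energy-per-bit cap, J/(bit/Hz)
h_ll = c * d_ll ** (-alpha)            # direct-link channel gain

best = None                            # (efficiency proxy, gamma, target interference)
for gamma in np.geomspace(1e-4, 1.0, 60):
    for I_t in np.geomspace(1e-11, 1e-6, 60):
        rate = float(np.log2(1.0 + gamma * h_ll / I_t))   # SINR approx. gamma*h/I
        E = (2 * Gamma_c + g_a * gamma) / rate            # energy per bit, as in (11)
        eff = rate / (gamma / I_t) ** (2.0 / alpha)       # rate per (model) footprint
        if E <= E_hat and (best is None or eff > best[0]):
            best = (eff, gamma, I_t)
```

The efficiency proxy depends on $\gamma$ and $\tilde{I}$ only through their ratio, which is why the paper can fix the product $\gamma\tilde{I}$ across links without losing the per-link optimum.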
The transmission power of a link determines the minimum distance between its source node and the destinations of the other scheduled links, while its target interference level determines the minimum distance between its destination node and the source nodes of the other scheduled links. To illustrate, consider the two-link network depicted in Figure 4. As the transmission power of $S_1$ increases, the amount of interference imposed by $S_1$ on $D_2$ also increases. Thus, to maintain a target interference level, $d_{12}$ must be increased. Similarly, decreasing the target interference at $D_1$ requires a larger distance $d_{21}$ to reduce the interference imposed by $S_2$ on $D_1$. Therefore, for efficient spatial spectrum reuse, it is desired that a link with higher transmission power also have a lower target interference level. To study how to choose the transmission power and target interference values of different links, we consider the two-link network illustrated in Figure 4. We assume that $\beta_1$ and $\beta_2$ are independent and uniformly distributed in $[0, 2\pi]$. We also assume that the distances between the source and destination of the links, $d_{11}$ and $d_{22}$, in different two-link network realizations, are independent and identically distributed. Let $E(d_{11}) = E(d_{22}) = m_1$ and $E(d_{11}^2) = E(d_{22}^2) = m_2$. We consider the distance between the two source nodes (r in Figure 4) as a measure of the space occupied by the two scheduled links. Thus, it is desired to minimize the expected distance r (over random realizations of $\beta_1$, $d_{11}$, $\beta_2$ and $d_{22}$) to minimize the average space occupied by the scheduled links and, as a result, maximize spatial spectrum reuse. Both links can be scheduled concurrently only if the actual interference power at each link is not greater than its target level.
That is, the conditions in (12) must hold. According to Figure 4, we have (13). By substituting (13) into (12), the required conditions to schedule both links concurrently can be written as (14). Taking the expectation (with respect to $\beta_j$ and $d_{jj}$, $j \in \{1, 2\}$) of both sides of (14), we obtain (15). According to (15), the expected square of the distance, $E(r^2)$, increases as the transmission power levels increase and the target interference power levels decrease. Also, $E(r^2)$ can be decreased by the choice in (16). Thus, the average space occupied by the scheduled links is decreased (i.e., the actual interference power levels are close to the target interference power levels at both links) when the product of transmission power and target interference power is identical for every link. This constraint ensures that a link with a greater ratio of transmission power to target interference level is assigned both a higher transmission power and a lower target interference, for efficient spatial spectrum reuse. Motivated by the analysis for the two-link network, we maintain the product of transmission power and target interference power at a fixed value for all links in the network. Therefore, we determine the transmission power $\gamma^*_l$ and target interference power $\tilde{I}^*_l$ for link l such that the asymptotic spectrum efficiency (10) is maximized subject to the energy consumption per bit (11) being smaller than a threshold, while maintaining the product of transmission power and target interference power at a fixed value. Thus, the transmission power $\gamma^*_l$ and target interference power $\tilde{I}^*_l$ are calculated by solving (17), where $\hat{E}_l$ is the maximum energy consumption per bit threshold of link l, and the constant $\lambda$ should be chosen based on the feasible ranges of the transmission power and interference bounds of the links. We numerically solve (17) using a brute-force search over discrete values of $\gamma_l$ and $\tilde{I}_l$. B.
Link scheduling Given the transmission powers and target interference levels of the different links, the concurrent links for transmission should be properly determined such that the actual powers of interference at the destinations of the scheduled links are close to their target interference power levels. For instance, consider the scheduling scenario illustrated in Figure 5. The first column shows six links to be scheduled. For simplicity of illustration, we use circular areas to show the link areas based on their transmission power and target interference power levels. Any two links can be scheduled simultaneously only if their circular areas do not overlap. The scheduled links are indicated by shaded circular areas in the second and third columns. The second column shows a weak scheduling plan in which only two links can be scheduled. A better scheduling plan is represented in the third column, in which three links are scheduled by properly selecting the set of concurrently scheduled links. The better scheduling plan, which schedules more concurrent links, corresponds to the situation where the actual interference power levels are closer to the target interference power levels at the scheduled links, in comparison to the weak scheduling plan. We consider a sequential link scheduling scheme to avoid high complexity. At each step, one link is scheduled for transmission at one time slot; the link and slot are opportunistically determined to have the interference power as close as possible to the target interference level at the scheduled destination. Let $\bar{u}^i = [u^i_{lt}]_{L\times T}$ denote the scheduling matrix after step i, with $\bar{u}^0 = [0]_{L\times T}$. The data rate of link l up to sequential scheduling step i is $R^i_l = \frac{1}{T}\sum_{t=1}^{T} \log_2\left(1 + u^i_{lt}\gamma^*_l h_{ll}/\tilde{I}^*_l\right)$. Let $\hat{\gamma}^i_{lt}$ denote the maximum transmission power of the source node of link l at slot t that does not increase the interference power at any link already scheduled before step i to more than its target interference power level.
That maximum power is given by (18). Similarly, let $\hat{I}^i_{lt}$ denote the minimum possible target interference power for link l at slot t in the presence of the links already scheduled before step i, given by (19). Thus, at step i, link l can be scheduled at time slot t if $\hat{\gamma}^i_{lt} \ge \gamma^*_l$ and $\hat{I}^i_{lt} \le \tilde{I}^*_l$. The ratio $\hat{I}^i_{lt}/\tilde{I}^*_l$ indicates how close the target interference power and the actual interference power are at link l at slot t in the i-th step, while $\gamma^*_l/\hat{\gamma}^i_{lt}$ is the corresponding indication for the links closest to link l after scheduling link l at slot t in the i-th step. Thus, at step i, we schedule link $l_i$ at time slot $t_i$ with the highest product $(\hat{I}^i_{lt}/\tilde{I}^*_l) \times (\gamma^*_l/\hat{\gamma}^i_{lt})$, as given by (20). Scheduling is performed in several rounds, and in each round a link is scheduled at most once. The sequential scheduling steps in each round continue until every link is either scheduled once or cannot be scheduled. The scheduling rounds continue until no new link can be scheduled. Figure 6 illustrates the operation of the proposed link scheduling scheme. C. Distributed scheduling It is desired to have a distributed implementation of the proposed scheduling and transmission power control scheme, based on only local network information. According to (17), the transmission power and target interference power can be determined independently at each link. In Subsection IV-B, links are scheduled sequentially based on the information of the already scheduled links using (20). However, the information of locally scheduled links is the most relevant information for scheduling transmissions, because the power of interference decays rapidly with distance (as $d^{-\alpha}$). The power of interference at the destination node of link l at time slot t caused by source nodes of the scheduled links at distances farther than $d_0$ (> 0) is bounded as in (21), where $c_0$ is a constant and $\gamma_{max}$ denotes the maximum transmission power level.
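A toy version of the sequential rule culminating in (20) can be sketched as follows. The topology, per-link powers $\gamma^*$, and targets $\tilde{I}^*$ are random illustrative values, and the margin-based computation of $\hat{\gamma}$ is our reading of (18), not the paper's exact expression:

```python
import numpy as np

rng = np.random.default_rng(1)
L, T, alpha, c = 8, 3, 3.5, 1.0

src = rng.uniform(0.0, 300.0, size=(L, 2))          # source positions (m)
dst = src + rng.uniform(-30.0, 30.0, size=(L, 2))   # destination positions (m)
dist = np.linalg.norm(src[:, None, :] - dst[None, :, :], axis=2)
h = c * np.maximum(dist, 1.0) ** (-alpha)           # h[l, k]: gain S_l -> D_k

g_star = rng.uniform(0.01, 0.1, size=L)             # per-link transmission power
I_star = rng.uniform(1e-9, 1e-8, size=L)            # per-link target interference

sched = [[] for _ in range(T)]                      # links scheduled in each slot
assigned = set()
while True:
    best = None
    for l in range(L):
        if l in assigned:
            continue
        for t in range(T):
            others = sched[t]
            # Interference l would see from the links already in slot t.
            I_hat = sum(g_star[k] * h[k, l] for k in others)
            if others:
                # Remaining interference margin at each scheduled destination.
                margins = [I_star[k] - sum(g_star[j] * h[j, k] for j in others if j != k)
                           for k in others]
                g_hat = min(m / h[l, k] for m, k in zip(margins, others))
            else:
                g_hat = np.inf
            if g_hat >= g_star[l] and I_hat <= I_star[l]:
                score = (I_hat / I_star[l]) * (g_star[l] / g_hat)
                if best is None or score > best[0]:
                    best = (score, l, t)
    if best is None:
        break                                       # no feasible (link, slot) left
    _, l, t = best
    sched[t].append(l)
    assigned.add(l)
```

Each pass schedules the feasible (link, slot) pair with the largest product, so the actual interference at every scheduled destination stays below, but close to, its target.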
Thus, using only the information of locally scheduled links within distance $d_0$, together with the bound $I_0$, we can estimate the power of interference at a link to calculate (18) and (19), which are required for the link scheduling scheme in (20). As an example, consider the scheduling of link l at time slot t when two other links are already scheduled at time slot t within distance $d_0$, with transmission power and target interference levels $\gamma^*_1, \tilde{I}^*_1$ and $\gamma^*_2, \tilde{I}^*_2$, respectively. Then $\hat{\gamma}^i_{lt}$ is the minimum over the two scheduled links of the power levels that keep their interference below target, as in (18), and $\hat{I}^i_{lt}$ follows from (19), with $I_0$ accounting for the interference from links farther than $d_0$. To coordinate distributed link scheduling, we employ a set of coordinator nodes distributed over the network area to collect and exchange local network information and to periodically schedule links in a distributed manner. In the following, we describe the proposed MAC framework (based on our scheme proposed in [1]) to coordinate link scheduling based on source node transmission requests. The network coverage area is partitioned into hexagonal cells as shown in Figure 7. The distance $r_g$ between the center and a vertex of a cell is chosen such that $r_g \ge d_{max}$. Therefore, the destination node of each source node is either in the same cell or in an adjacent cell. A coordinator node is placed at the center of each cell to coordinate all transmissions for the nodes inside the cell. Figure 8 shows the frame structure. Each frame consists of three types of time slots: 1) Contention slots: During contention slots, the source nodes that want to initiate a transmission contend with each other using a truncated CSMA MAC scheme to send a request packet to the cell coordinators. If the number of contention slots is too small, nodes may not have enough time to transmit requests to initiate transmissions. On the other hand, assigning a large number of slots as contention slots decreases the number of data transmission slots, which reduces network throughput.
We have presented a mathematical model in [1] to determine the number of contention slots at the coordinators based on the traffic load condition; 2) Scheduling slots: Each coordinator node has a scheduling time slot in every frame, in which it broadcasts a scheduling packet to coordinate all transmissions in its vicinity; 3) Data slots: Data packet transmissions are performed during contention-free data slots as scheduled by the coordinators. A link transmits one data packet during each data slot in which it is scheduled for transmission. A coordinator node maintains the following information about each link in its vicinity: 1) the source and destination locations; 2) the transmission power and target interference level; 3) the set of future data slots in which it is scheduled; 4) the amount of data that it has for transmission. A coordinator receives transmission requests from source nodes during contention slots. Also, a coordinator receives the information of the links scheduled for future data slots by overhearing the scheduling packets of adjacent coordinators during scheduling slots. The scheduling packet of a coordinator contains the information of all future scheduled data transmissions for every node within distance $r_a$ ($\ge r_g$) from the coordinator. Figure 7 shows the area centered at a coordinator in which the coordinator obtains the information of scheduled transmissions by overhearing the scheduling packets of adjacent coordinators. According to Figure 7, a coordinator node acquires the information of scheduled transmissions within distance $r_n = 1.5 r_g + \sqrt{r_a^2 - 0.75 r_g^2}$, and for each link, depending on the destination node's location in the cell, we have $d_0 \in [r_n - r_g, r_n]$. Based on the source node requests for transmission and the information of already scheduled links, each coordinator periodically schedules data transmissions for every link with the destination inside its cell, in the future data slots before its own subsequent scheduling slot.
A coordinator node schedules links for transmission according to the proposed link scheduling scheme in Subsection IV-B (taking into account the links already scheduled by adjacent coordinators) and broadcasts a scheduling packet in its scheduling slot to announce the scheduling information to the nodes inside its cell and to its adjacent coordinators. The scheduled links perform data transmissions during the data time slots as scheduled by the cell coordinators and announced during the scheduling slots. Every node puts its radio interface in the sleep mode when it is not transmitting/receiving a scheduling, data or request packet, to save energy. V. SIMULATION RESULTS Consider a network area as illustrated in Figure 7, with N nodes randomly distributed over the area. The destination node of each link is randomly selected from the nodes within distance $d_{max}$ from the source node. The ranges of the feasible transmission power level and target interference power are set according to Table I, based on the IEEE 802.11 standard [26]. We set the energy consumption per bit constraint $\hat{E}_l = \theta \times \min E_l$ for every link l, where $\theta \ge 1$. Thus, $\theta = 1$ corresponds to setting the transmission power and target interference power for the lowest energy consumption per bit on each link, while as $\theta$ increases, the energy consumption constraint is relaxed and the transmission power and target interference of a link are determined by the values that provide the highest asymptotic spectrum efficiency. Figure 9 shows the optimal transmission power, target interference level and SINR of a link versus $\theta$ as the link distance $d_{ll}$ varies. The corresponding asymptotic spectrum efficiency and energy consumption per bit are depicted in Figures 10(a) and 10(b), respectively. Figure 9(b) shows that the calculated optimal target interference level is always much larger than the thermal noise power level, which conforms with the interference-dominated assumption used in Section III.
According to Figure 9(c), the SINR is set to the highest value for a link when the objective is to minimize the energy consumption per bit (i.e., $\theta = 1$). However, when the energy consumption constraint is relaxed, the optimal SINR value that maximizes the asymptotic spectrum efficiency is always about 8 dB, independent of the link distance. We evaluate the performance of our proposed scheduling and transmission power control scheme via simulation. The following metrics are used as performance measures to compare the different schemes: 1) Throughput: Throughput is defined as the summation of all transmitted data bits per second, weighted by the transmission distance [27]; 2) Energy consumption: Energy consumption is defined as the ratio of the total energy consumed in the nodes to the total number of transmitted data bits. Similar metrics are also used in [1], [5]-[8]; 3) Scheduling efficiency: According to (6), the spectrum efficiency for transmission distance d is bounded by $\bar{R} = \max G(\cdot)/d^2$. Thus, the summation of all transmitted data bits per second, weighted by the square of the transmission distance, satisfies $\sum_l R_l d_{ll}^2 \le \max G(\cdot) \times A$, where A denotes the area size and equality holds under the asymptotic optimal scheduling and transmission power control. Therefore, we define the scheduling efficiency as the ratio $\sum_l R_l d_{ll}^2 / (\max G(\cdot) \times A)$. The performance metrics are evaluated based on the transmitted data and energy consumption of the nodes in an inner region of the network area, to eliminate edge effects. Links with source nodes located inside the 7 central hexagonal cells (of the 19 hexagonal cells in Figure 7) and all coordinator nodes inside this area are considered in evaluating the performance metrics. We compare the performance of our proposed scheme with the IEEE 802.11 DCF MAC, with and without power saving, with optimized transmission power levels and carrier sensing threshold based on the analysis provided in [9], [10].
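The scheduling-efficiency metric defined above is straightforward to compute from per-link rates and distances; the values below, including $\max G(\cdot)$, are placeholders rather than numbers from the paper:

```python
import numpy as np

max_G = 0.05                              # assumed peak of the rate-density bound
A = 600.0 * 600.0                         # network area (m^2), illustrative
R = np.array([1.2, 0.8, 2.0, 1.5])        # average link rates (bit/s/Hz), made up
d_ll = np.array([25.0, 40.0, 15.0, 30.0]) # link distances (m), made up

# Scheduling efficiency: sum_l R_l * d_ll^2, normalized by max G(.) * A.
efficiency = float(np.sum(R * d_ll ** 2) / (max_G * A))
```

By construction the metric lies in (0, 1], with 1 attained only under the asymptotic optimal scheduling and power control.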
Also, we examine the effectiveness of each strategy that we use for determining the transmission power and target interference power levels and for link scheduling, by evaluating the throughput without that strategy. The compared schemes are as follows: 1) The proposed scheme, denoted by "Proposed"; 2) "P-γmax", "P-Imin" and "P-arb. γ, I", representing the proposed scheme when the product of transmission power and target interference power is not maintained at a fixed value, but instead the transmission power is set to the maximum value, the target interference level is set to the minimum value, or the transmission power and target interference level are chosen arbitrarily, respectively; 3) "P-ran. sch.", representing the proposed scheme when the link scheduling by the coordinators at each scheduling step does not follow the link scheduling algorithm described by (20); instead, a link and a data slot are randomly selected from the set of links and slots that can be scheduled; 4) "best-DCF" and "best-PSM", representing the DCF MAC of IEEE 802.11 in ad hoc mode without and with the power saving mode, respectively, with transmission power levels and carrier sensing threshold optimized based on the analysis provided in [9] and with an optimized ATIM window size. (Performance is measured over the inner region of the network because nodes at the edge of the simulated area experience less interference from their neighbors, as the simulated area is bounded; the nodes in the inner part therefore better reflect the actual network performance.) In each scheme, all control and signaling packets are transmitted at the signaling rate $R_s$, which requires a minimum SINR $\eta_{min}$ during the entire packet transmission time for successful reception at the destination.
Data packets are transmitted at a variable bit rate that is optimized for each link, based on the statistics of the SINR at the destination during past transmitted packets, to obtain the highest average link data rate. A data packet is successfully received if the SINR at the destination node during the entire packet transmission time is not less than the SINR required for the data transmission rate in use. In every scheme, the data packet duration is 1 ms and the data packet header and ACK packet overheads are neglected. Data packets are generated according to a Poisson process in each source node. The network load is defined as the aggregate bit generation rate over all nodes in the entire network area and is equally distributed among the nodes. Nodes are randomly distributed over the network area, and the destination of each node is randomly selected from the neighboring nodes within distance d_m. We evaluate the performance of the different schemes using simulations we developed in MATLAB, for the following scenarios: 1) Static network - nodes do not move over the simulation time; the simulations are performed for five seconds of channel time, and the performance metrics are averaged over five different random realizations of the network; 2) Mobile network - nodes move and the network topology varies over the simulation time; node i ∈ {1, 2, ..., N} moves with speed v_i ∈ [0, 2] m/s along direction φ_i ∈ [−π, π], both randomly selected for each node with uniform distributions. When the distance between a source and its destination grows larger than d_m, the source node randomly chooses another destination node. Each node periodically (every second) reports its current location to the coordinator by transmitting a control packet during contention slots. The simulations for the mobile scenario are performed for 50 seconds of channel time. Other simulation parameters are given in Table I. Figures 11-15 show the performance of the different schemes in a static network.
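The mobile scenario above can be reproduced in a few lines. The sketch keeps the stated parameter ranges (speed in [0, 2] m/s, heading in [−π, π], link distance bound d_m); everything else (the area side length, the time step, and wrap-around at the boundary) is an assumption of this sketch, not the paper's setup:

```python
import math
import random

rng = random.Random(42)
N, side, d_m, dt = 20, 100.0, 30.0, 1.0   # nodes, area side (m), max link dist (m), step (s)

# Uniform random placement; fixed per-node speed and heading, as in the text.
pos = [(rng.uniform(0, side), rng.uniform(0, side)) for _ in range(N)]
speed = [rng.uniform(0.0, 2.0) for _ in range(N)]           # v_i in [0, 2] m/s
phi = [rng.uniform(-math.pi, math.pi) for _ in range(N)]    # phi_i in [-pi, pi]

def step(pos):
    """Advance every node one time step (positions wrap around the area)."""
    return [((x + speed[i] * dt * math.cos(phi[i])) % side,
             (y + speed[i] * dt * math.sin(phi[i])) % side)
            for i, (x, y) in enumerate(pos)]

def neighbors(i, pos):
    """Candidate destinations for node i: other nodes within distance d_m."""
    xi, yi = pos[i]
    return [j for j, (x, y) in enumerate(pos)
            if j != i and math.hypot(x - xi, y - yi) <= d_m]

for _ in range(50):   # 50 seconds of simulated time, 1 s resolution
    pos = step(pos)
```

A source whose destination drops out of `neighbors(i, pos)` would then re-draw a destination at random from that list, as described in the text.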
Figure 11 shows the throughput versus energy consumption of the proposed scheme as the energy-consumption-per-bit constraint varies. The energy consumption counting only the energy consumed during data slots (i.e., excluding energy consumed during scheduling and contention slots) is also plotted in the figure. According to Figure 11, as the constraint varies from no constraint to the minimum energy consumption per bit on every link, the network throughput decreases by 38% and the energy consumption is reduced by 18%, while the energy consumption for data transmissions/receptions only is reduced by 37%. Figure 12 shows the throughput of the different schemes as the network traffic load and number of nodes change. The proposed scheme provides about 40% higher throughput than best-DCF and best-PSM. Figure 12(a) also shows the effectiveness of the strategies used in our proposed scheme for choosing the transmission power and target interference power of the links and for link scheduling. Figure 13 compares the data transmission rates of the nodes under the different schemes; in each scheme, nodes are sorted by data transmission rate, and the horizontal axis shows the node index. It is observed that the proposed scheme provides better fairness than best-DCF and best-PSM, as the link scheduling algorithm in the proposed scheme is designed to maintain fairness while efficiently choosing concurrent transmissions in each data slot. Figure 14 shows the energy consumption of the different schemes as the network traffic load and number of nodes change. The energy consumption of the proposed scheme is less than 10% of that of best-DCF and about 20% of that of best-PSM. Figure 15 compares the scheduling efficiency of the different schemes; that of the proposed scheme is about 35% higher than best-DCF and best-PSM.
Indeed, the scheduling efficiency of our proposed scheme is about 70% of that of the asymptotic optimal scheduling and transmission power control. Within data slots alone, the achieved scheduling efficiency is about 78%, as 90% of the slots are data slots and the rest are scheduling and contention slots in the proposed scheme. The performance of the different schemes in a mobile scenario is evaluated in Figure 16. The proposed scheme provides about 30% higher throughput compared to best-DCF and best-PSM. The energy consumption per transmitted data bit using the proposed scheme is less than 20% of that of the existing schemes. Also, the scheduling efficiency of the proposed scheme is about 30% higher than that of the existing schemes.
VI. CONCLUSION
In this paper, we study joint scheduling and transmission power control for spectrum- and energy-efficient communication in a wireless ad hoc network. We analyze the asymptotic optimal joint scheduling and transmission power control, and determine the maximum spectrum efficiency subject to an energy efficiency constraint. Based on the asymptotic analysis, we propose a scheduling and transmission power control scheme to maximize spectrum and energy efficiency in a practical network. A transmission power level and a target interference power level are determined for each link based on the asymptotic optimal values. Concurrent links are scheduled for transmission such that the actual level of interference at each destination node is close to its target interference level. We present a distributed MAC framework to implement the proposed scheme based on local network information. Simulation results show that the proposed scheme provides about 40% higher throughput than existing schemes, that its energy consumption is less than 20% of that of existing schemes, and that its scheduling efficiency is about 70% of the asymptotic optimal solution, which is about 35% higher than existing schemes.
APPENDIX
In this appendix, we discuss solving (8) using the method of Lagrange multipliers. The Lagrangian of (8) can be written as L(r_g, γ, μ_1, μ_2) = (1/d²) × log₂(1 + F(r_g/d) …), where μ_1 and μ_2 are the multipliers associated with the constraints of (8); the KKT conditions then follow in the standard way. The partial derivative ∂F(r_g/d)/∂r_g in (23) can be calculated using (4). Finally, the optimal solution can be obtained by examining the stationary points of (23).
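The stationary-point step can be illustrated numerically. Since F(·) is not reproduced in this excerpt, the sketch below uses a simple stand-in trade-off f(γ) = log₂(1 + γ)/(P₀ + γ), with an assumed fixed overhead P₀, and locates its maximizer by grid search; for P₀ = 1 the analytic stationary point is γ = e − 1 (since d/du [ln u / u] = 0 at u = 1 + γ = e):

```python
import math

P0 = 1.0                                     # assumed overhead (normalized); not the paper's F(.)
f = lambda g: math.log2(1.0 + g) / (P0 + g)  # toy spectrum-per-energy trade-off

# Brute-force grid search over candidate SINR values (linear scale).
gammas = [0.001 * k for k in range(1, 100000)]
g_opt = max(gammas, key=f)
g_opt_dB = 10.0 * math.log10(g_opt)          # analytic optimum: e - 1, about 2.35 dB
```

In the paper's model the same procedure would be applied to the actual objective built from F(·), with the multipliers handled via the KKT conditions rather than a plain grid search.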
The Perceptions of Children and Adolescents with Cancer Regarding Nurses' Communication Behaviors during Needle Procedures Background: Communicating with children and adolescents with cancer during a needle procedure can prove challenging for healthcare professionals. Objective: Our aim was to explore the perceptions of children and adolescents with cancer regarding communication with nurses during needle procedures. Method: This was a qualitative phenomenological study. Data were gathered through seven in-depth interviews with a convenience sample of children and adolescents with cancer. Data were analyzed using a grounded theory approach to identify themes in the participants' narratives. Results: The analysis revealed three themes describing participants' experience: (1) nurses need to explain clearly what they are going to do while also allowing children to express their emotions without feeling coerced; (2) nurses need to be honest and approachable and relate to children as active participants in the treatment process; and (3) it is distressing to hear other children who are undergoing a needle procedure cry out in pain. Further application of the constant comparison method yielded a core theme: (4) the pressures faced by oncology nurses lead them to focus on the technical side of procedures at the expense of their young patients' communication needs. Conclusions: We suggest that hospital managers need to ensure that oncology nurses have sufficient training in communication skills and are confident in their ability to respect and respond to the communication preferences and needs of patients. Introduction Children and adolescents with cancer have to deal with a life-threatening diagnosis, intensive treatment, and invasive procedures [1]. The latter are often painful and frightening [2], and they are regarded by many young patients as the most stressful aspect of their illness [3].
Effective communication between children and nurses is therefore important to improve satisfaction with care and to facilitate the best possible outcomes [4]. However, given that these patients will be at different developmental stages, and that decision-making power lies ultimately with their parents, communication can be particularly challenging for professionals [5], especially when performing invasive procedures [6]. The treatment of childhood cancer is a long process that invariably involves many painful procedures which can impact patients on a daily basis [7], producing significant suffering [8]. Venipuncture, for instance, is a common source of pain among hospitalized children [9] and is associated with considerable distress [10]. Children are generally less capable than adults of understanding the reason for these procedures, how long they might take, or how much discomfort they may experience [11], although their memory for stressful events of this kind tends to be fairly accurate [12]. However, a lack of understanding about an invasive procedure [13] and distress while it is being performed may exaggerate negative memories in children, leading to even greater distress when they have to undergo similar procedures in the future [12,14]. Communication difficulties, the stress associated with invasive procedures, and being cut off from family and social life due to long periods of hospitalization are all factors that can impact children's wellbeing [14], with studies showing that the negative effects may persist for years after cancer treatment ends [15]. Post-traumatic symptoms such as nightmares, flashbacks, constricted affect, anger, and an exaggerated response have been reported among adult survivors of childhood cancer [16], and they are related to the experience of invasive procedures in particular [17].
Being able to talk about these experiences and incorporate them into a coherent personal story can help patients cope with the emotional impact of their experiences [18], and hence it is important that health professionals are able to facilitate this at the same time as managing pain and distress through appropriate techniques [7]. Good pain management is especially important in the pediatric setting [9] and requires adequate knowledge and training among nurses [19] including as regards effective communication skills [20]. Various studies have examined communication in pediatric oncology from the perspective of parents or health professionals [21][22][23][24][25], although few have gathered the views of children themselves [26,27], especially in relation to their experience of needle procedures. This is an important gap, since a better understanding of children's communication preferences could help to ensure they receive the kind of support they need [28], which in turn could improve their psychological wellbeing [14]. The impact of health professionals' communication behaviors on patients has been identified as a priority target for research [29] insofar as there is a need to develop effective interventions for managing treatment-related stress, especially among children [30]. Accordingly, and given that the hospital care and treatment of children and adolescents with cancer is primarily administered by nurses, it is important to gather the views of these young patients regarding communication with nurses during potentially distressing procedures. Aims The aim of this study was to explore the perceptions of children and adolescents with cancer regarding communication with nurses during needle procedures. Study Design Given that our goal was to describe and understand the lived experience of participants and to identify key concepts in their personal narratives, we used an inductive, phenomenological approach [31] for this qualitative study. 
Participants We recruited a convenience sample of children and adolescents with cancer whose treatment required a needle procedure. With the aim of achieving maximum variation in terms of participants and contexts [32], we selected both male and female patients of different ages and sought to include different types of cancer (see Table 1). Recruitment was carried out in the pediatric oncology service of a reference center for the treatment of childhood cancer in Spain. Each year, the unit provides treatment to around 2000 children and adolescents with various types of cancer. Recruitment of participants continued until theoretical saturation was reached and interviews yielded no new information. Data Gathering Data were gathered through eight in-depth individual interviews. Following the recommendations of Seidman [32], these involved the sequential exploration of three topics so as to facilitate expression of the participant's lived experience and to situate it in a socio-historical context (see Table 2). Specifically, we began by asking participants to talk generally about their experience to date of needle procedures, before exploring in more detail their current experience as an in-patient, including as regards communication with nurses. We then asked them to talk about what it meant to them that they had to undergo routine needle procedures as part of their treatment. Interviews lasted between 30 and 50 min. Potential participants were identified through a member of the research team who worked as a clinical nurse in a pediatric oncology unit. Initial contact was made by telephone, informing the children's parents about the nature of the study and requesting their participation. Those who agreed were then invited for an interview at a time and place of their choosing, with informed assent (children) and informed consent (parents) being obtained prior to any data collection. 
All interviews were conducted by a member of the research team who was a nurse specialist in pediatrics and had no previous contact with participants and who had specific training in the psychological care of children and adolescents. In accordance with the personal preference of all of the children and adolescents who took part in the study, a parent (in all cases, the mother) was also present during the interview. Data Analysis All interviews were audiotaped and transcribed verbatim. The transcripts were then analyzed using the constant comparative method, an approach informed by grounded theory, in order to extract as much information as possible from the interviews. As recommended by Strauss and Corbin [33], we began with the open (substantive) coding of data, linking fragments of the participants' discourse to coding labels. Having identified the most frequently occurring codes (focused coding), we then drew up a provisional set of categories and described their properties and dimensions (axial coding). By applying this coding paradigm throughout the analytic process, we were ultimately able, as part of the process of selective coding, to establish relationships between the categories. This was then complemented by a literature review in order to contextualize our findings and to develop the theoretical narrative that is presented in the Results Section below. As further support for this narrative and the analytic process, we also generated a series of memos (code, theoretical, operational, and bibliographic) [34]. The data were analyzed independently by two members of the research team; data analysis was performed using ATLAS.ti 7.1 (Scientific Software Development GmbH, Berlin, Germany). Rigor We employed a number of strategies to ensure validity and reliability. The use of work standards and the ATLAS.ti software helped to ensure a systematic approach throughout the analytic process, which was also subject to both internal and external audit.
As for the credibility of our findings, these are illustrated throughout with verbatim quotations from the interview transcripts. Regarding transferability, we describe the sampling context and acknowledge that the results may not be generalizable to other populations or settings. Finally, all of the researchers involved in analyzing the data kept a reflexive journal in order to ensure the confirmability of results. Ethical Considerations The study was conducted in accordance with the principles of the Declaration of Helsinki and was approved by the Clinical Research Ethics Committee of Fundació Sant Joan de Déu (PIC-63-17). As already noted, informed assent and informed consent were obtained in all cases prior to data collection. A data protection protocol was established in accordance with current legislation and requirements in our country. Results The analysis of interviews revealed three themes in participants' accounts of communication with nurses during a needle procedure: (1) nurses need to explain clearly what they are going to do, while also allowing children to express their emotions without feeling coerced; (2) nurses need to be honest and approachable and relate to children as active participants in the treatment process; and (3) it is distressing to hear other children who are undergoing a needle procedure cry out in pain. Further application of the constant comparative method revealed a core theme that underpins interpersonal communication between nurses and pediatric oncology patients: (4) the demands of the working environment mean that nurses are focused on the technical side of procedures, and thus they may fail to consider the communication needs of young patients.
Nurses Need to Explain Clearly What They Are Going to Do, While Also Allowing Children to Express Their Emotions without Feeling Coerced
The young patients we interviewed considered that nurses needed to improve their communication when performing a needle procedure, explaining things clearly and, above all, in a positive tone [28]. Having the opportunity to express their feelings related to the pain, stress and fear associated with needle procedures was also important to the young patients we interviewed.
Well, because they shouted at you, they shouted. Some people just don't understand the pain a child can feel (Participant 1)
They say to you: "We're going to take some blood, and don't cry because it'll be quick"
This suggests that what is lacking here is an encounter characterized by assertive communication, whereby nurses are able to be clear and direct while simultaneously respecting the rights and needs of children and adolescents to express what they think and feel [35]. As a style of communication, assertiveness also underpins the establishment of a therapeutic relationship [36].
Nurses Need to Be Honest and Approachable and Relate to Children as Active Participants in the Treatment Process
In the view of our interviewees, nurses need to be honest and approachable (Smith et al. [28]), which also means engaging with patients during treatment so as to build a relationship of trust.
You trust them more than if they say, oh, it's not going to hurt, or whatever (Participant 4)
Someone like that, you trust them more than someone who is cold or distant and who tells you it won't hurt (Participant 4)
I like them to be straight with me. For example, in this hospital they tell it like it is. They don't say one thing to your mother and something different to you (Participant 4)
For example, you ask if it's going to hurt and they say no, but then it really hurts.
If it's going to hurt, then say so, and then the next time try to do it better and I won't be so nervous (Participant 7)
I was crying. I said I couldn't take it any more. Don't you remember [child turns to his mother] that they said they were going to give me a prize, but they didn't (Participant 7)
All the nurses are lovely. If it's the first time we've met, then they'll say "My name's so and so, and I'll be here for a few days". I've got to know almost all the nurses on the ward! (Participant 6)
I like them to tell me what they're going to do (Participant 5)
It depends on the nurse. Sometimes they do talk to you when they're doing something to you. But normally they're talking among themselves. Sometimes they ask me if the injection hurt (Participant 4)
A study by Ruhe et al. [37] likewise found that pediatric oncology patients felt dissatisfied when they were not treated as participants in the care process, and that this made it more difficult for them to understand their illness. Various studies have found that children are often marginalized in the hospital environment, insofar as professionals communicate primarily with parents [38]. Communication with children about their care is obviously a challenge, given that nurses need to consider the child's age, cognitive ability, behaviors, and physical and psychological condition, as well as the illness stage and response to treatment [39]. This is further complicated in the oncology setting, as nurses may inhibit communication with patients if they find it hard to deal with emotionally charged topics such as death and dying [40]. Other reasons for ineffective communication include feelings of despair, difficulty coping with stress, lack of skills, and a high workload [40].
It Is Distressing to Hear Other Children Who Are Undergoing a Needle Procedure Cry Out in Pain
Hearing other children cry as they undergo a needle procedure was something that our participants found distressing.
A well-designed hospital environment, which includes provision for privacy, can contribute to the psychological wellbeing of patients [41]. Privacy is regarded by Altman [42] as a process of selective control over social interaction, and for Shepley [43] it is more important than the interaction itself. For parents and children, this also implies the possibility of auditory privacy [43], which means that communal areas of hospitals, such as those where invasive procedures are performed, need to be designed in such a way that one cannot hear other people's conversations with health professionals [42]. Our interviewees found it both upsetting and annoying to hear other children cry, and it also reminded them of their own pain during needle procedures. The latter may have its basis in the mirror-neuron system, which is activated not only during individual action but also when observing a similar action in others [44]. This system of neural circuits enables us to perceive the emotions of others [44], as if we ourselves were experiencing them. This is necessary for our survival as it allows us to show empathy, which is fundamental to social interaction [45]. However, it also means that when perceiving pain or distress in others, we may also experience the associated physiological activation [46].
The Demands of the Working Environment Mean That Nurses Are Focused on the Technical Side of Procedures, and Thus They May Fail to Consider the Communication Needs of Young Patients
The children and adolescents we interviewed felt that nurses needed to consider their need for communication, and not focus solely on the technical side of procedures.
I'd like them to devote more time to me. It'd be nice if they didn't just talk to each other but also to me (Participant 1)
But I also understand, because I realize they have lots to do. So I guess they talk to you like that because they've got more important things to do.
They're in a hurry and all that.
This weariness or boredom that our participants referred to may suggest the presence of compassion fatigue among the nurses with whom they had contact. Compassion fatigue is not uncommon among nurses who regularly care for patients with a life-threatening illness [46], and it can cause a pervasive decline in their desire, ability and energy to care for others [47]. The literature indicates that nurses working in oncology are at greater risk of suffering compassion fatigue [48], although a high workload and a lack of support from colleagues and managers are also determining factors [49].
Discussion
In this study, we explored the perceptions of children and adolescents with cancer regarding communication with nurses, specifically when undergoing the needle procedures that form a routine part of their treatment. The analysis of interviews revealed a series of factors that appear to make communication difficult. The first has to do with a high workload, leading nurses to focus on the technical side of procedures at the expense of the young person's need to communicate. The second factor relates to a possible lack of communication skills among nurses, insofar as the children and adolescents we interviewed wanted nurses to speak more clearly and honestly while also allowing them to express their own thoughts and feelings. Finally, fatigue among nurses may also make it difficult for them to pay due attention to the communication needs and preferences of pediatric oncology patients. Regarding a heavy workload, this is known to be among the factors that, in the view of nurses, can undermine their ability to provide adequate patient care [50].
Poor staff allocation and ward management, along with a high patient-nurse ratio, can create an unhealthy working environment for nurses, leading to occupational stress, an increased likelihood of errors, and job dissatisfaction [51]. One of the ways in which nurses manage the pressures of their workload is by adopting a cold attitude and maintaining a professional distance, something which our interviewees interpreted as a lack of understanding, honesty, and positivity, all of which can undermine patient care [52]. This professional distance would also explain why our participants felt there was a greater emphasis on technique than on the person being treated. The automatization and standardization of care, which is often linked to high workload, can also lead to the depersonalization of care [53], which again is reflected in the experiences of our interviewees. Together with the published research, our findings highlight the need for nurse managers to put in place systems that enable workload to be closely monitored in these types of services, where patients' needs, and therefore the required staffing ratio, are constantly changing. Notably, the interviews in this study were carried out before the COVID-19 pandemic (in 2018-2019); the subsequent strain on the health system may have increased the overload of nursing work even further, something we believe should be investigated. Our analysis also suggested the need for improved communication skills among nurses. As other authors have found [28], the children and adolescents we interviewed wanted nurses to be clear and honest, while also respecting their right and need to express their own feelings. It seems, therefore, that what is lacking is the ability among some nurses to engage in assertive communication.
The interpersonal relationship and effective communication with patients are key factors underpinning the quality of nursing care [54][55][56][57], and they are especially important in the pediatric setting since children are likely to find it harder than adults to cope with being hospitalized [58]. Communicating with oncology patients brings further challenges, and it is therefore important to provide nurses with adequate training, not least because confidence in one's ability to communicate is vital to job satisfaction and stress management, and because ineffective communication leaves patients feeling dissatisfied with care [59]. Hospitals with haemato-oncology units therefore need to ensure that staff have the opportunity to develop the communication skills required to care for young patients with a potentially life-threatening illness. Finally, the distress that nurses themselves experience when regularly caring for children with a severe and potentially life-threatening illness may, over time, lead to compassion fatigue [50], and this could further inhibit their ability to consider the communication needs of young patients during needle procedures. An exacerbating factor here is that nurses usually have little time to reflect on their practice [60], which is an important gap given the stress produced by a heavy workload and the responsibility of caring for these patients [50]. Continuous occupational stress of this kind [61] can lead to burnout in the form of emotional exhaustion, depersonalization and a sense of poor personal accomplishment [62]. Research suggests that as many as one in four nurses may experience burnout [63], and those working in pediatric oncology units are likely to be more exposed to specific risk factors such as direct contact with death and dying, the suffering of patients and families [64], and an excessive workload [50].
Emotional exhaustion and depersonalization among health professionals may be key factors contributing to the presence of structural violence or cruelty in hospital settings [65], which in the experience of our interviewees was characterized by communication that was insufficiently clear and honest, a lack of auditory privacy and a failure on many occasions to respect and allow them to express their feelings. Despite conducting an extensive literature search, we found few studies that address the issue of structural violence in hospitals, especially in relation to children with life-threatening illnesses. This highlights the need for further research, and particularly participant observation studies, so as to determine the extent to which these behaviors are present within complex care settings. The present study contributes to this goal, insofar as the interviews with children and adolescents suggest that one of the ways in which oncology nurses cope with the pressures and responsibilities of their working environment is by providing dehumanized care, which paradoxically can end up diminishing their sense of personal accomplishment and increasing their likelihood of burnout. Conclusions Our analysis of interviews with children and adolescents with cancer regarding their experience of communication with nurses when undergoing routine needle procedures indicates that nurses often focus on the technical side of these procedures, at the expense of the young person's communication needs. Being given the opportunity to express their own thoughts and feelings (without feeling coerced) was important for these young patients, and they wanted nurses to be clear and honest, relating to them as active participants in the treatment process. Privacy in the hospital setting is also important, since hearing other children cry as they undergo a needle procedure can be distressing. 
The primary limitation of our study is that the research team was composed solely of female professional nurses, and any preconceived ideas they share could therefore have influenced the analysis and interpretation of data. To limit this potential bias, all researchers involved in analyzing the data were required to keep a reflexive journal, recording their decisions and the reasons for them, as well as reflecting on their personal values and interests. We also conducted an extensive literature review so as to compare our interpretation with previous findings and to support the theoretical narrative presented in the Results Section. Institutional Review Board Statement: The study was conducted in accordance with the Declaration of Helsinki, and approved by the Ethics Committee of Fundació Sant Joan de Déu (PIC-63-17). Informed Consent Statement: Informed consent was obtained from all subjects involved in the study. Data Availability Statement: Not applicable. Conflicts of Interest: The authors declare no conflict of interest.
Preoperative frailty and chronic pain after cardiac surgery: a prospective observational study

Background: Chronic pain after cardiac surgery, whether or not related to the operation, is common and has a negative impact on health-related quality of life (HRQL). Frailty is a risk factor for adverse surgical outcomes, but its relationship with chronic pain after cardiac surgery is unknown. This study aimed to address the association between frailty and chronic pain following cardiac surgery.

Methods: This sub-study of the Anesthesia Geriatric Evaluation study included 518 patients ≥ 70 years undergoing elective cardiac surgery. Pain was evaluated with the Short-Form 36 questionnaire prior to and one year after surgery. Associations between chronic postoperative pain and frailty domains, including medication use, nutritional status, mobility, physical functioning, cognition, HRQL, living situation and educational level, were investigated with multivariable regression analysis.

Results: Chronic pain one year after cardiac surgery was reported in 182 patients (35%). Medication use, living situation, mobility, gait speed, Nagi's physical functioning and preoperative HRQL were frailty domains associated with chronic pain after surgery. For patients with chronic pain, physical HRQL after one year was worse compared to patients without chronic pain (β = −10.37, 99% CI −12.57 to −8.17).

Conclusions: Preoperative polypharmacy, living alone, physical frailty and lower mental HRQL are associated with chronic pain following cardiac surgery. Chronic postoperative pain is related to worse physical HRQL one year after cardiac surgery. These findings may guide future preoperative interventions to reduce chronic pain and poor HRQL after cardiac surgery in older patients.

Trial Registration: This trial has been registered before initiation under number NCT02535728 at clinicaltrials.gov.
Supplementary Information: The online version contains supplementary material available at 10.1186/s12871-022-01746-x.

Background

Chronic pain is a well-known complication after cardiac surgery and is reported by 18 to 35% of cardiac surgery patients in the Netherlands [1][2][3]. Especially in older patients, chronic pain, whether or not related to surgery, has a major impact on postoperative functional outcome, including health-related quality of life (HRQL) [1][2][3][4][5]. Physical inactivity and reduced self-reliance due to chronic pain have been associated with a greater vulnerability to stressors, social isolation, anxiety and depression [5][6][7][8][9][10][11]. Although preoperative anesthesiological assessment routinely includes risk stratification for cardiac or pulmonary complications, standardized screening for the risk of developing chronic pain is less common. Given the negative effects, it is essential that risk factors for chronic pain after surgery are identified in order to initiate preventive strategies. Frailty is characterized by a limited resilience to surgical stress and has been associated with poor postoperative outcomes [12][13][14]. In community-dwelling elderly, chronic pain has been related to frailty [13]. Frail patients have more pain, poorer daily functioning and less physical activity [13]. In the surgical population, frailty has been associated with chronic pain after major elective non-cardiac surgery [15]. Although frailty is considered an important risk factor for poor surgical outcomes, evidence of a relationship between frailty and chronic pain following cardiac surgery is lacking. Identification of a relation between specific preoperative frailty characteristics (domains) and postoperative chronic pain may guide interventions to improve surgical outcome. With an ageing population, the number of cardiac surgery procedures in older patients will rise in the upcoming years [4][5][6].
Optimizing preoperative circumstances in these patients is therefore essential to target analgesic interventions and preserve postoperative quality of life. We hypothesized that preoperative frailty domains are associated with chronic pain and worse HRQL one year after cardiac surgery in older patients. This study therefore aimed to address whether specific frailty domains are associated with chronic pain following cardiac surgery in an older population. Additionally, the relationship of chronic pain to HRQL was evaluated.

Study design and population

This sub-study of the Anesthesia Geriatric Evaluation and quality of life after cardiac surgery (AGE) study analyzed patients included at St. Antonius Hospital, The Netherlands [16,17]. The AGE study was a prospective observational cohort study in patients aged 70 years and older that focused on the association of preoperative frailty with HRQL and disability after one year in elective cardiac surgery patients (i.e. coronary, valve, rhythm, aortic, or any combination of these procedures). The medical ethics committee approved the study protocol before patient recruitment (Medical Ethics Research Committees United (www.mec-u.nl), number R15.039). The study was first registered at clinicaltrials.gov under NCT02535728 on 31/08/2015. This manuscript adheres to the applicable STROBE guidelines. All participants provided written informed consent. Details on the design and analyses of the AGE study have been previously reported [16].

Clinical characteristics and data collection

After routine preoperative screening, an additional geriatric assessment was performed to assess physical, mental and social frailty in eleven domains.
Physical frailty included the following domains: medication use, nutritional status using the Mini Nutritional Assessment [18] (MNA), mobility and gait speed using the Timed Get Up & Go test [19] (TGUG) and five-meter gait speed test [20] (5-MWT), daily physical functioning using Nagi's scale [20] and a handgrip strength test [21]. Screening for mental frailty included cognition using the Minimal Mental State Examination [22] (MMSE) and self-rated mental and physical health with the Short-Form 36 questionnaire (SF-36) [23,24]. To assess social frailty, we evaluated the living situation and educational level. Based on the multidimensionality of the frailty syndrome, a patient was considered 'overall frail' if a positive test for physical, mental and social frailty was present. An elaborate description of the frailty tests and chosen cut-off values is provided in additional file Table A1. Demographics and medical history were derived from the electronic health record, including health status, comorbidities, previous surgical procedures and/or laboratory tests. Data from the SF-36 were used to identify the presence of preoperative pain (see 'Outcomes' section below). Information on preoperative use of analgesics was retrospectively collected from electronic patient files and included acetaminophen, non-steroid anti-inflammatory drugs (NSAIDs), opioids and antidepressants. Opioids included intravenous and subcutaneous administered morphine, oxycodone hydrochloride controlled-release (Oxycontin), oxycodone hydrochloride immediate-release (Oxynorm) and tramadol. Antidepressants were selective serotonin reuptake inhibitors (SSRIs), tricyclic antidepressants (TCAs), pregabalin and amitriptyline. Polypharmacy and excessive polypharmacy were defined as ≥ 5 and < 10 prescriptions and ≥ 10 prescriptions, respectively.

Perioperative analgesia

Perioperative care was routinely performed according to local standard operating procedures.
For intraoperative analgesia a continuous infusion of remifentanil was initiated directly after induction of anesthesia, and intermittent fentanyl doses were used at predetermined times (i.e. prior to incision of the skin, sternotomy, aorta cannulation and opening of the pericardium). The dose of remifentanil and fentanyl was determined at the discretion of the attending anesthesiologist, depending on patient characteristics and intraoperative vital parameters. All patients received a loading dose of 10 mg intravenous morphine 30 min before the anticipated end of surgery. Postoperative pain management at the intensive care unit (ICU) consisted of intravenous paracetamol (1 g every six hours) and a continuous infusion of morphine (1-2 mg/h) according to protocol. After ICU discharge a standardized postoperative pain protocol was started, including Oxycontin 10 mg twice daily, Oxynorm 5 mg as needed (maximum 6 times a day) and paracetamol 1 g four times a day during the first and second day at the ward. On the third day at the ward opioids were reduced and Oxynorm 5 mg as needed was prescribed, together with paracetamol 1 g four times a day. From the fourth day onwards patients received paracetamol 1 g four times a day. Insufficient pain control was managed by consultation of the hospital acute pain service, which advised on an individualized pain treatment plan. Patients who suffered chronic pain preoperatively continued their pain therapy, with the exception of NSAID use. Preoperative opioid use was taken into account when defining the postoperative opioid dose.

Outcomes

One year after cardiac surgery, study patients were invited by letter to complete and return the SF-36 questionnaire. One phone call was used to remind non-responders. The primary outcome was chronic pain following cardiac surgery after 12 months.
Data from the SF-36 questionnaires prior to and one year after surgery were used to determine chronic pain by the following question: 'How much bodily pain did you have during the past 4 weeks?' Answers were graded 1 to 6 and represented: 'None' (1), 'Very Mild' (2), 'Mild' (3), 'Moderate' (4), 'Severe' (5) and 'Very Severe' (6) [23,24]. For this study, chronic pain was divided into three groups: 'No pain' (grade 1), 'Mild pain' (grade 2-3) and 'Moderate to severe pain' (grade 4-6). Chronic pain was defined as a reclassification into a higher grade of pain or no improvement of preexistent moderate to severe pain one year after cardiac surgery. The source or location of pain symptoms was not registered. Our secondary outcome was HRQL according to the SF-36 [23,24]. HRQL was measured before, and at three and twelve months after surgery. Change in HRQL was expressed by a delta score between the preoperative measurement and one year after surgery, consisting of eight sub scores (i.e. physical functioning, role functioning, role emotional, social functioning, bodily pain, mental health, vitality and general health). Sub scores ranged from 0 to 100 and were summarized into a mental HRQL and a physical HRQL score, with positive values representing improvement. Death was scored as 0 points [16].

Statistical analysis

Data are presented as frequencies and percentages (%) for categorical data and as median with interquartile range (IQR) or mean with standard deviation (SD) for continuous data, as appropriate. Normal distribution of the variables was assessed with visual inspection of the histograms and Q-Q plots. Differences between patients with and without chronic pain one year after surgery were compared using the Chi-square test for dichotomous or categorical variables or the Mann-Whitney U test or Student's t-test for continuous variables, as appropriate.
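The outcome definition above amounts to a small decision rule. The sketch below is a hypothetical illustration (the function names, and the interpretation of "higher grade" as movement between the three pain groups, are our assumptions, not the authors' code):

```python
def pain_group(grade: int) -> str:
    """Map an SF-36 bodily-pain grade (1-6) to the study's three groups."""
    if grade == 1:
        return "no pain"
    if grade in (2, 3):
        return "mild pain"
    return "moderate to severe pain"  # grades 4-6

def has_chronic_pain(grade_before: int, grade_after: int) -> bool:
    """Chronic pain = reclassification into a higher pain group one year
    after surgery, or preexistent moderate-to-severe pain that did not
    improve (assumed reading of the study's definition)."""
    order = {"no pain": 0, "mild pain": 1, "moderate to severe pain": 2}
    before = order[pain_group(grade_before)]
    after = order[pain_group(grade_after)]
    if after > before:
        return True  # reclassified into a higher grade of pain
    return before == 2 and after == 2  # unimproved moderate/severe pain
```

Under this reading, a patient moving from 'Mild' (3) to 'Moderate' (4) counts as chronic pain, while a patient improving from 'Severe' (5) to 'Mild' (3) does not.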
To investigate the association between each frailty domain and chronic pain one year after cardiac surgery, multivariable log-binomial regression analysis was performed to present effect estimates as risk ratios (RR) with accompanying 99% confidence intervals (99% CI). To take multiple testing into account, we tested against a p-value of 0.01 and used a CI of 99%. Bonferroni adjustment was deemed inappropriate and too conservative, as the different frailty domains are highly dependent on each other [17]. As chronic pain one year after cardiac surgery was relatively common, the rare disease assumption would not hold. This means that an odds ratio would not approach the corresponding risk ratio, hampering the interpretation of our results for clinical practice [25]. All associations were adjusted for EuroSCORE II to take age, sex, comorbidities and weight of the procedure into account. Additionally, the association was adjusted for intraoperative use of remifentanil, preexisting chronic pain and use of the internal mammary artery [1,2,[26][27][28][29][30]. These confounders were a priori selected based on literature [1,2,[26][27][28][29][30]. Next, the change of HRQL in all eight sub scores prior to and one year after surgery was compared between patients with and without chronic pain using a Wilcoxon signed-rank test; for this univariate analysis p-values ≤ 0.01 were considered statistically significant. To investigate the association of chronic pain with HRQL after one year, multivariable linear regression models were conducted, where physical and mental HRQL measured at 12 months were used as the outcome. All associations were adjusted for EuroSCORE II, preexisting chronic pain, overall frailty and physical or mental HRQL measured prior to surgery. Estimates are expressed as linear regression coefficients (β) with accompanying 99% CI. To assess the robustness of our findings, sensitivity analyses were performed using the same analytical approaches.
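As a hedged illustration of the effect measure being reported, a crude (unadjusted) risk ratio with a Wald 99% confidence interval can be computed from a 2x2 table using the standard log-RR variance formula. This is not the study's adjusted log-binomial model, only a sketch of the same scale:

```python
import math

def risk_ratio_ci(a, b, c, d, conf_z=2.576):
    """Crude risk ratio for a 2x2 table with a Wald CI on the log scale.

    a: exposed with outcome      b: exposed without outcome
    c: unexposed with outcome    d: unexposed without outcome
    conf_z defaults to 2.576, the normal quantile for a 99% interval.
    """
    risk_exposed = a / (a + b)
    risk_unexposed = c / (c + d)
    rr = risk_exposed / risk_unexposed
    # Standard error of log(RR) for independent binomial counts
    se = math.sqrt(1 / a - 1 / (a + b) + 1 / c - 1 / (c + d))
    lower = math.exp(math.log(rr) - conf_z * se)
    upper = math.exp(math.log(rr) + conf_z * se)
    return rr, lower, upper
```

For example, 30/100 events among the exposed versus 20/100 among the unexposed gives RR = 1.5; with only 200 observations the 99% interval is wide and crosses 1, which illustrates why the authors' choice of 99% intervals is conservative.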
The first post-hoc analysis excluded all patients who died within 12 months of follow-up. In the second post-hoc analysis, only patients with new or worse pain one year after surgery (i.e. reclassified into a higher grade of pain) were scored as having chronic pain, and patients with preexistent moderate to severe pain were excluded from this definition. As SF-36 data were missing for 11% of cases, which could lead to potential bias, multiple imputation was conducted using the 'mice' library in R [31,32]. Twenty datasets were created and the estimates and variances for each of the imputed datasets were pooled into an overall estimate using Rubin's rule. The imputed dataset was used for the final analyses. In order to obtain chronic pain categories after imputation, the mean frequencies of the specific answers to the SF-36 questionnaire at baseline and one year after surgery across the 20 imputation datasets were rounded to the nearest integer. An a priori sample size calculation was not performed, as the sample size was based on the available data of the AGE study [16]. Data analysis was performed using R statistics (version 3.6.3, 2020).

Study population

Overall, 518 patients were included in the analysis. Reasons for exclusion were withdrawal (n = 9) or cancellation of surgery (n = 17). Fifty-seven patients (11%) had one or more missing values (see additional file Table A2 for characteristics of patients with and without missing data). Prior to surgery, 91 patients (18%) were considered frail and chronic pain was reported by 331 patients (64%), of whom 77 (23%) were frail. Of all patients with preexisting chronic pain, 13% (44/331) used one analgesic and 7% (22/331) used two or more analgesics. The most common analgesics were acetaminophen (28/331, 9%), NSAIDs (18/331, 5%) and opioids (16/331, 5%). Patients with chronic pain prior to surgery more often used an antidepressant, 26/331 versus 4/187 (8% versus 2%), compared to patients without chronic pain prior to surgery (p = 0.01).
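Rubin's rule, used above to pool the 20 imputed datasets, combines the per-dataset estimates by averaging them and adds the between-imputation variance, inflated by (1 + 1/m), to the mean within-imputation variance. A minimal stdlib sketch (illustrative only; the study used the R 'mice' package, not this code):

```python
import math
from statistics import mean, variance

def rubins_rule(estimates, variances):
    """Pool point estimates and their variances from m imputed datasets.

    Returns (pooled_estimate, pooled_standard_error)."""
    m = len(estimates)
    q_bar = mean(estimates)               # pooled point estimate
    u_bar = mean(variances)               # within-imputation variance
    b = variance(estimates)               # between-imputation variance (sample)
    total_var = u_bar + (1 + 1 / m) * b   # Rubin's total variance
    return q_bar, math.sqrt(total_var)
```

The (1 + 1/m) inflation means the pooled standard error is always at least as large as the average within-imputation one, reflecting the extra uncertainty from the missing data.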
Additional file Table A3 demonstrates the baseline characteristics for patients with and without chronic pain prior to surgery.

Frailty and chronic pain after cardiac surgery

One hundred forty patients (27%) reported improvement of pain, 243 (47%) had no or unchanged pain and 135 patients (26%) reported new or worse chronic pain one year after surgery (Fig. 1). According to our definition, chronic pain was present in 182 patients (35%), which included 47 patients with pre-existent moderate to severe pain that had not improved. Baseline characteristics according to chronic pain after cardiac surgery are presented in Table 1. Patients with chronic pain had a higher EuroSCORE II at baseline, more often used opioids and had lower test results in the physical frailty domains. Patients preoperatively considered frail had a higher risk of developing postoperative chronic pain (aRR 1.58, 99% CI 1.08–2.30). Figure 2 demonstrates the association between each frailty domain and chronic pain one year after surgery. Medication use, living situation, mobility, gait speed, Nagi's physical functioning and preoperative HRQL were associated with chronic pain after surgery. Patients with preoperative excessive polypharmacy, patients who were living alone and patients with lower mental HRQL had increased risks of developing chronic pain (aRR 2.03, 99% CI 1.32–3.12; aRR 1.54, 99% CI 1.11–2.13; and aRR 1.02, 99% CI 1.01–1.03 per point decrease on mental HRQL, respectively). Also, preoperative impaired physical functioning was associated with postoperative chronic pain (aRR 1.11, 99% CI 1.04–1.18 per second increase on the 5-MWT; aRR 1.06, 99% CI 1.02–1.10 per second increase on the TGUG; aRR 1.32, 99% CI 1.19–1.46 per point increase on Nagi's scale; and aRR 1.03, 99% CI 1.01–1.05 per point decrease on physical HRQL).
When patients with preexistent moderate to severe pain were excluded from the chronic pain definition in the post-hoc analysis, mobility and preoperative HRQL were no longer significantly associated (see figure in additional file Figure A1). Exclusion of patients who died within 12 months of follow-up did not change the associations (see figure in additional file Figure A2). Figure 3 demonstrates the mean change in HRQL in all eight sub scores prior to and one year after surgery in patients with and without chronic pain. Patients without chronic pain significantly improved in each sub score, whereas patients with chronic pain worsened. Multivariable linear regression analysis demonstrated that patients with chronic pain reported worse physical HRQL one year after surgery compared to patients without chronic pain (β = −10.37, 99% CI −12.57 to −8.17). Chronic pain was not associated with mental HRQL after one year (β = −0.83, 99% CI −3.26 to 1.60). Results were similar after excluding patients who were deceased within one year after surgery, and also after the exclusion of patients with preexistent moderate to severe pain from the chronic pain definition.

Discussion

This study addressed the association between frailty domains and chronic pain following cardiac surgery in older patients. Additionally, the impact of chronic pain on HRQL in older patients was evaluated. One out of three elderly patients reported chronic pain after one year, and frail patients had a higher risk of chronic pain following cardiac surgery. Frailty domains that were associated with chronic pain following surgery were medication use, living situation, mobility, gait speed, Nagi's physical functioning and preoperative HRQL. In addition, we found that postoperative chronic pain was associated with worse physical HRQL one year after surgery. This study confirmed that chronic pain is common in elderly cardiac surgery patients, with a similar incidence reported in prior studies [1][2][3].
However, in our study the number of patients with postoperative chronic pain (n = 182) was lower than the number of patients with pain prior to surgery (n = 331), and 27% of patients (140/518) reported an improvement in pain. This might be explained by improved functional capacity, decreased ischemic chest pain and lower levels of anxiety following cardiac surgery. Nevertheless, increased pain symptoms were common. Considering that chronic pain had a profound effect on HRQL, identification of risk factors for the development of chronic pain, especially in the frail and vulnerable population, is important and might help to initiate preventive strategies. Several risk factors, including younger age, psychological impairment, preexisting pain, internal mammary artery harvest, use of remifentanil and emergency surgery, have been described to increase the risk of chronic pain following cardiac surgery [1,2,[26][27][28][29][30]. In contrast to prior studies, these risk factors were not associated with higher risks of the development of chronic pain in our study. This is likely explained by differences between surgical cohorts. The AGE study consisted of a frailty-prone population undergoing a wide range of elective cardiac surgery procedures, and the mean age was higher than in other reports. Frail patients had a higher risk of developing postoperative chronic pain following cardiac surgery. This association might be explained by impaired physical exertion, as higher levels of activity have been described to reduce pain sensitivity through decreased pain facilitation and increased pain inhibition [33]. Conversely, preexistent pain is well known to have an impact on physical activity [5,6]. Furthermore, preexistent pain is a known risk factor for acute and chronic pain following surgery [1,26,27].
The question arises whether preexistent pain or impaired physical functioning (possibly due to pain or frailty) in these patients is the most relevant risk factor for the development of chronic pain. In our study, preexistent pain was not significantly associated with postoperative chronic pain. Also, in a post-hoc analysis in which patients with preexistent moderate to severe pain were excluded from the chronic pain definition, our results did not change. Consistent with prior research, this underlines the association between impaired physical functioning and the development of chronic postoperative pain [33]. Besides impaired physical functioning, medication use, living situation and preoperative mental HRQL were associated with chronic postoperative pain. Polypharmacy is common in older patients and might impede pain management for several reasons. Apart from age- and disease-related changes in physiology, disease-drug and drug-drug interactions might lead to heterogeneity in response to medications and increased adverse drug effects. Frailty further increases this heterogeneity, and thus frail elderly patients with polypharmacy may be more susceptible to adverse events [11]. Next to this, polypharmacy contributes to medication non-adherence, resulting in a suboptimal effect of prescribed analgesic therapy [34]. In our study, patients with excessive polypharmacy had a twofold risk of developing chronic pain. Finally, patients living alone are prone to social isolation, which contributes to feelings of depression or anxiety and a more intense experience of pain [11,26,35]. Gender has been described to interact with multiple preoperative factors as well as cardiac surgery outcomes [36]. Female gender has been positively associated with preoperative frailty, psychological disease and dementia in cardiac surgery patients [36]. The results of our study confirm the well-investigated relationship between female gender and chronic pain [26,27].
When defining interventions to improve outcome following cardiac surgery based on preoperative risk stratification, gender-related disparities should be taken into consideration. Our study confirmed the existing relationship between chronic pain and HRQL. In general, polypharmacy, physical inactivity, reduced self-reliance and social isolation lead to an increase in health consumption, pain and poor HRQL [11,27,28,37,38]. In addition, several studies found that pain adversely affects recovery and HRQL, and that the impact correlated with the severity of pain [27,28,37]. In patients with chronic pain in our study, mental and physical HRQL were lower prior to surgery and physical HRQL was worse one year after surgery compared to patients without chronic pain (p < 0.001). Understanding factors that are related to HRQL in older people can be used to preoperatively accommodate patients' needs and preserve quality of life. Risk stratification should lead to individualized evaluation and preparation for surgery. However, evidence for pre-habilitation is limited for cardiac surgery patients. Nevertheless, preoperative exercise has been demonstrated to improve functional recovery [39], and optimization of treatment expectations through a simple psychological intervention has been shown to improve disability [40]. Currently, trials on pre-habilitation are being performed in cardiac surgery patients, but the results have to be awaited [41,42]. Comprehensive evaluation of pharmacotherapy should be part of each preoperative assessment, but deserves additional attention of, for example, a pharmacist or geriatrician in patients with polypharmacy. Patients suffering from chronic pain preoperatively should receive an individualized perioperative pain management plan, depending on their preoperative situation.
Within this plan, additional pharmacotherapy, locoregional anesthesia and/or non-pharmacological interventions may be considered to treat acute postoperative pain and prevent the increase of chronic pain symptoms following cardiac surgery. The following limitations should be considered. First, pain was determined by a health survey that was not specifically designed to assess pain or pain interference. This study population reported pain within the last 4 weeks at 12 months of follow-up after surgery, and we defined this as chronic pain [43]. Unfortunately, differentiation between thoracic pain, wound pain, chest pain, pain due to the surgical procedure or other pain, and type of pain (i.e., neuropathic, musculoskeletal, inflammatory, or mechanical pain) was not possible [43]. Second, a single point estimate was used for the incidence of chronic pain, which may have resulted in an underestimation. Besides, the ageing process may account for differences in pain signaling and perception, causing inconsistency and variability in pain measurements. More specifically, with ageing a loss of structure and function of peripheral nerves occurs [10,44]. Due to a decrease in the spread and magnitude of brain activation in response to pain in the elderly, pain thresholds might be higher [10,44]. On the other hand, endogenous pain modulation in the elderly shows age-related impairment [10,44]. In particular, inhibitory systems were reported to be affected, resulting in lower capacities to modulate pain. This inadequacy to modulate pain leads to an increased risk of chronic pain. Finally, we did not register age-dependent conditions such as arthrosis and neurological conditions, which may be related to frailty as well as to chronic pain. Further analysis of the reason for frailty may improve the prediction of chronic postoperative pain. Future research to explore these findings should determine pain using patient diaries with validated pain assessments.
Conclusions

Postsurgical chronic pain is common in elderly cardiac surgery patients. Preoperative polypharmacy, living alone, physical frailty and lower mental HRQL were positively associated with chronic pain following cardiac surgery. Secondly, chronic pain was associated with worse physical HRQL following cardiac surgery. The results of our study advocate that early identification of these factors may be used to identify older patients at risk for chronic pain after cardiac surgery.
Dermoscopic findings in Tinea Capitis among under-18 children in dermatology polyclinic patients: a hospital-based cross-sectional study

Background: Tinea capitis is a fungal infection that affects the scalp. It is caused by a group of fungi known as dermatophytes, which thrive in warm and moist environments. In Somalia, there is a data shortage regarding dermatological conditions, especially in Mogadishu, the most populous city in the country. Tinea capitis has gone unreported despite its high prevalence in Somali dermatology clinics and the Somali diaspora in Western countries. The absence of up-to-date information hampers the capability to diagnose, treat, and prevent Tinea capitis. Therefore, the study aims to evaluate dermoscopic signs in relation to isolated organisms and potassium hydroxide (KOH) examination.

Method: A hospital-based cross-sectional study was implemented between January and April 2023 in Mogadishu, Somalia. All eligible Tinea capitis-infected children were included in the study. Microscopically, analysis was conducted by adding 10% KOH to fungal elements. Data were analyzed using descriptive statistics and the χ2 test at a P value of less than 0.05.

Results: A total of 76 tinea capitis-infected children participated in the study; 56% were in the age group between 5-9 years old, 68.4% were male, and 92.1% showed KOH positivity. Trichophyton violaceum (65.8%) and Trichophyton sudanense (14.5%) were the most common fungal organisms detected in the culture. Comma hairs (93.10%), scales (40.80%), and corkscrews (32.90%) were the most common dermoscopic signs of tinea capitis. The demographical characteristics and dermoscopic signs of tinea capitis significantly associated with the positivity of KOH examination were age, sex, comma hairs, corkscrew hairs, broken hair, scales, and zigzag hair.

Conclusion: Children in Mogadishu, Somalia, bear a significant burden of Tinea Capitis infections.
Trichophyton violaceum and Trichophyton sudanense were the predominant causative agents identified in the cultures. The most common dermoscopic signs of tinea capitis observed in this study were comma hairs, scales, and corkscrew patterns. Hence, early diagnosis of Tinea Capitis infections and timely, effective treatments with contact tracing are highly needed.

Background

Tinea capitis, also known as ringworm of the scalp, is a fungal infection that affects the scalp. It is caused by a group of fungi known as dermatophytes, which thrive in warm and moist environments [1]. This condition is most commonly found in children and does exhibit a strong gender bias, although it can affect individuals of all ages [2]. Clinical presentations of tinea capitis can range from mild symptoms such as scalp flaking and discoloration to more severe manifestations, which include an itchy rash, hair loss, and the development of small, black dots on the scalp [3]. This condition is contagious and can spread from person to person through direct contact with an infected individual or via contact with contaminated objects like combs, brushes, hats, or pillows [4]. Fortunately, tinea capitis can be effectively treated with either oral or topical antifungal medications [5].

HIGHLIGHTS

• A total of 76 tinea capitis-infected children participated in the study; 56% were in the age group between 5-9 years old, 68.4% were male, and 92.1% showed potassium hydroxide positivity.
• Trichophyton violaceum (65.8%) and Trichophyton sudanense (14.5%) were the most common fungal organisms detected in the culture. Comma hairs (93.10%), scales (40.80%), and corkscrews (32.90%) were the most common dermoscopic signs of tinea capitis.
• The demographical characteristics and dermoscopic signs of tinea capitis significantly associated with the positivity of potassium hydroxide examination were age, sex, comma hairs, corkscrew hairs, broken hair, scales, and zigzag hair.
Fungi from the Trichophyton and Microsporum genera can both lead to tinea capitis. However, the primary causal agent varies across different geographic regions and experiences temporal fluctuations [6,7]. Various laboratory methods can be employed to confirm the diagnosis. Notably, some dermatophytes like M. audouinii and M. canis can be distinguished through their unique fluorescence under Wood's ultraviolet light. Regrettably, Trichophyton tonsurans does not exhibit this fluorescence, rendering this tool ineffective [8]. In most cases, the responsible organism can be identified through a fungal culture on Sabouraud dextrose agar or Mycocel® agar, following a potassium hydroxide preparation of hair from the affected area. Samples for culture can be obtained by scraping with a scalpel or, more conveniently, by employing a cytobrush or a moistened cotton swab [9]. Dermoscopy is a noninvasive, swift, and cost-effective procedure with a well-established history of effectiveness as an ancillary technique for evaluating hair and scalp conditions. It has been recognized as a supportive technique for diagnosing tinea capitis [5]. In Somalia, there is a data shortage regarding dermatological conditions, particularly fungal infections, especially in Mogadishu, the most populous city in the country. Tinea capitis has gone largely unreported, despite its high prevalence in Somali dermatology clinics and the Somali diaspora in Western countries [4,[10][11][12]. It is crucial to gain a comprehensive understanding of the current state of knowledge, specific disease patterns, dermoscopic indicators, and common causative factors in Somalia, as this might enhance treatment and preventive strategies. The absence of up-to-date information severely hampers the capability to diagnose, treat, and prevent Tinea capitis in Somalia. Therefore, the study aims to evaluate dermoscopic signs in relation to isolated organisms and potassium hydroxide (KOH) examination.
Study design and setting

A hospital-based cross-sectional study was conducted between January and April 2023. All patients with tinea capitis infection who underwent mycological testing and dermoscopic and clinical examination were included; scalp lesions were scraped at the edge with a blunt scalpel, and hair stumps were collected for direct microscopic examination and culture. Individuals older than 18 years and those whose legal guardians refused to sign the consent form and participate in the study were excluded.

Laboratory procedure

To identify the fungal elements, samples were placed on a glass slide, treated with 10% KOH, and then microscopically analyzed. To examine the affected areas of the scalp, we used a portable dermoscope (DermLite II hybrid, San Juan Capistrano, California) and captured images directly through the dermoscope using an iPhone 12, following the acquisition of informed consent from caregivers or legal guardians of the children. The reliability of the photos in Figure 2 taken with the iPhone 12 is attributed to their consistent camera settings and the precise geotagging information present in the metadata. Cultures were incubated at a temperature of 25°C on Sabouraud dextrose agar supplemented with antibiotics, including 0.5 mg/ml of cycloheximide. These cultures were maintained for 4 weeks and were periodically examined to monitor potential microbial proliferation. Cultures were considered negative if no discernible growth had occurred. Identification of the microorganism was achieved using Lactophenol cotton blue stain, employing an analysis of colony morphology as well as microscopic observations of the culture mounts [13].
Sampling technique and data collection procedure

This study included all children under 18 years of age with tinea capitis infection who were either admitted or referred to the study hospital. Those who declined to participate were excluded. Instead of conducting a sample size calculation, we employed a post hoc power analysis as recommended by Kim et al. [14]. This power analysis aimed for a statistical power of 80%, with an alpha level set at 0.05 [15]. The research included 76 patients initially suspected of having tinea capitis, but following the administration of a KOH test, 7.9% tested negative for the condition. Dermatology residents trained in dermoscopy and in the recognition of dermoscopic indicators of tinea capitis carried out the dermoscopic assessment. The images were examined, and the results were reported by a dermatologist with experience in hair diseases and training in the recognition of tinea capitis-specific signs (hence referred to as the expert).

Data analysis

The collected data were cleaned, coded, entered into a spreadsheet, and imported into SPSS version 20 for analysis. Data were analyzed with descriptive statistics and presented as frequencies with percentages for categorical variables and as means with minimum, maximum, and standard deviation (SD) for continuous variables. The χ² test was used when expected cell frequencies were adequate, and Fisher's exact test was used when more than 20% of cells had expected frequencies below 5, at a P value of less than 0.05. Data were presented in tables and figures, and this report aligns with the STROCSS criteria [16].
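The test-selection rule described above (χ² when expected cell counts are adequate, Fisher's exact test otherwise) can be sketched in Python with SciPy. The counts below are illustrative only, not the study's actual data.

```python
# Hedged sketch: choosing between the chi-squared test and Fisher's exact
# test for a 2x2 contingency table. Counts are hypothetical.
from scipy.stats import chi2_contingency, fisher_exact

# rows: dermoscopic sign present / absent; cols: KOH positive / negative
table = [[40, 2],
         [30, 4]]

chi2, p, dof, expected = chi2_contingency(table)

# If any expected cell frequency is below 5, fall back to Fisher's exact test
if (expected < 5).any():
    odds_ratio, p = fisher_exact(table)

print(f"p-value: {p:.4f}")
```

SPSS applies essentially the same rule when it footnotes "expected count less than 5" in its crosstabs output, which is why the paper reports both test types.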
Baseline characteristics

A total of 76 tinea capitis-infected children participated in the study; 56% were in the 5-9 year age group, 68.4% were male, and 92.1% showed KOH positivity. In addition, 90.8% of the study participants had a positive fungal culture, and Trichophyton violaceum (65.8%) was the most common fungal organism detected in the culture, followed by Trichophyton sudanense (14.5%), Trichophyton tonsurans (7.9%), and Microsporum audouinii (2.6%) (Table 1).

Dermoscopic signs of tinea capitis

This study showed that the most common dermoscopic sign of tinea capitis was comma hairs (93.10%), followed by scales (40.80%), corkscrew hairs (32.90%), and broken hair (15.80%), while follicular keratosis, black dots, and zigzag hair each accounted for 7.9%; all other signs were not detected (Figs. 1 and 2).

Relationship between the KOH examination and demographic characteristics and dermoscopic signs of tinea capitis

This study showed that the age (P value = 0.033) and sex (P value = 0.001) of the child have a significant association with the positivity of the KOH examination. Moreover, comma hairs (P value <0.001), corkscrew hairs (P value = 0.001), broken hair (P value <0.001), scales (P value = 0.034), and zigzag hair (P value <0.001) were the dermoscopic signs of tinea capitis significantly associated with the positivity of the KOH examination, while follicular keratosis and black dots were not significantly associated (Table 2).
Discussion

This study enrolled 76 individuals diagnosed with tinea capitis, and a remarkable 92.1% of them tested positive for fungal elements during the KOH examination, while only 7.9% yielded negative results. KOH preparations are a widely used method for diagnosing fungal infections, including tinea capitis. A KOH preparation is deemed positive for tinea capitis when microscopic examination reveals the presence of hyphae. When a sample of infected scalp hair is subjected to KOH treatment, the KOH dissolves keratin, a protein found in hair and nails, while leaving the fungal cells intact. These fungal cells can then be observed under a microscope, appearing as branching, filamentous structures known as hyphae. This finding conclusively confirms the presence of a dermatophyte infection, which is crucial for effective treatment. Similar studies have consistently reported an 88% KOH positivity rate [13].

Furthermore, previous research has documented KOH false-negative rates ranging from 5 to 40%. This can be attributed to the skill level of the diagnosing medical professional and to limited medical equipment, especially in sub-Saharan African countries [17-19]. Therefore, it is strongly recommended that the diagnosis of tinea capitis rely not only on KOH testing but also on the clinical observation of experienced professionals.
The study revealed that 90.8% of participants tested positive on fungal culture, with Trichophyton violaceum being the most frequently isolated fungal organism, followed by Trichophyton sudanense, Trichophyton tonsurans, and Microsporum audouinii. Numerous studies have also reported Trichophyton violaceum as the most common cause of tinea infections [19-21]. In addition, various other fungal organisms, such as Trichophyton verrucosum, Trichophyton tonsurans, and Trichophyton sudanense, have been identified in different cohorts. American studies have highlighted Trichophyton tonsurans as the most prevalent causative agent, while European studies have reported a high incidence of both Trichophyton tonsurans and Trichophyton violaceum [2,22].

These variations can be attributed to geographical differences, where a specific fungal species may become more prevalent in certain areas. This phenomenon suggests the possibility of a transmission circle, as these fungi are highly contagious and can be easily spread from person to person through direct contact or contact with contaminated objects, particularly in warm and humid regions of the body where they thrive. Furthermore, poor hygiene practices, sharing personal items, and residing in crowded or humid environments are common contributing factors in the study area and across the entire African continent, exacerbating the situation.
The dermoscopic examination offers a simple and noninvasive procedure that serves as a valuable diagnostic tool for tinea capitis. This study investigated various dermoscopic indicators, with the most prevalent being "comma hairs," accounting for an overwhelming 93.10% of the study participants. Comma hairs are distinguishable as twisted, coiled strands resembling a comma or question mark. They result from the fungal infection disrupting the natural hair growth pattern and are frequently encountered during the initial stages of tinea capitis, although they may be absent in certain cases [23]. Furthermore, other research studies have identified scaling, either diffuse or perifollicular, as another common sign, which aligns with the second most prevalent dermoscopic sign recognized in this study [24]. In addition, our study reveals that corkscrew and broken hair signs rank as the third and fourth most common dermoscopic indicators, respectively. Several studies have reported that broken hair exhibits no significant correlation with the causative agent or type of invasion [2,22,25,26]. It is essential to note that scaling and broken hairs are not pathognomonic signs of tinea, as they may appear in various scalp fungal or other dermatological conditions [27]. In this study area, there is a noticeable prevalence of reinfection with dermatophytosis, particularly among young individuals, which is frequently observed in dermatology clinics. We hypothesize that this trend is the result of a confluence of numerous risk factors, including incomplete treatment, exposure to contaminated surfaces or objects, the sharing of personal items, suboptimal hygiene practices, and underlying health conditions that can heighten the risk of recurrence, particularly in individuals with compromised immune systems. It is worth noting that dermatophytes are not typically categorized as opportunistic fungal infections directly associated with immune compromise, as with Cryptococcus neoformans and
Pneumocystis jirovecii pneumonia. Nevertheless, there remains a possibility that individuals living with compromised immune systems and chronic conditions might become more susceptible to various infections, including fungal ones [28-33].

This study identifies the age and sex of the child as significant demographic factors associated with a positive result on KOH examination for fungal elements. Although prior studies have not directly addressed this association, it is important to note that the prevalence of these infections can fluctuate based on age and various other factors. In general, tinea infections are more frequently observed in children than in adults, although they can affect both age groups. Furthermore, a positive KOH test does not necessarily imply an active infection, as fungal elements can persist in the skin or hair even after treatment. Therefore, in this study area, the diagnosis is not solely reliant on KOH positivity; it integrates clinical features into the interpretation of KOH test results and facilitates a more precise differential diagnosis. Notably, dermoscopic signs such as comma hairs, corkscrew hairs, broken hair, scales, and zigzag hair were found to be significantly associated with KOH examination positivity in this study, whereas follicular keratosis and black dots showed no such association.
Conclusion

Children residing in Mogadishu, Somalia, bear a significant burden of tinea capitis infections, with a notable prevalence among males aged 5-9 years. Trichophyton violaceum and Trichophyton sudanense emerged as the predominant causative agents identified in the cultures. Moreover, distinct features such as comma hairs, scales, and corkscrew patterns were commonly observed in the dermoscopic examination of tinea capitis cases within the studied population. Hence, there is a compelling need for the early diagnosis of tinea capitis infections, along with the timely implementation of effective treatments and a broader epidemiological investigation incorporating contact tracing. These measures are vital for curtailing disease transmission and enhancing both prevention strategies and overall quality of life. Furthermore, it is highly advisable to delve into the contributing factors behind the elevated prevalence of tinea capitis and to work towards enhancing healthcare accessibility for this demographic.

Strengths and limitations

This is the first such study in Somalia focusing on tinea capitis, a common paediatric dermatophyte infection that has been largely unreported in the region. The study has a cross-sectional design, which can establish associations between variables but cannot determine cause-and-effect relationships or assess treatment outcomes. Despite these limitations, the study provides valuable information that can enhance treatment, interventions, and preventive programs for tinea capitis in Somalia and the wider region.

Figure 2. (A) Comma hair in the black circles; (B) scales; (C) multiple corkscrew signs inside and outside the circles.

Table 2. Comparison of KOH examination and demographic characteristics and dermoscopic signs of tinea capitis (n = 76). *Significant level at α = 0.05. a Fisher's exact test. b χ² test.
A new approach for computer-aided detection of coronavirus (COVID-19) from CT and X-ray images using machine learning methods

The COVID-19 outbreak has been causing a global health crisis since December 2019. Because this virus was declared a pandemic by the World Health Organization, the health authorities of many countries are constantly trying to reduce the spread rate of the virus by emphasizing mask, social distancing, and hygiene rules. COVID-19 is highly contagious and spreads rapidly across the globe, so early detection is of paramount importance. Any technological tool that can provide rapid detection of COVID-19 infection with high accuracy can be very useful to medical professionals. The disease findings on COVID-19 images, such as computed tomography (CT) and X-rays, are similar to those of other lung infections, making it difficult for medical professionals to distinguish COVID-19. Therefore, computer-aided diagnostic solutions are being developed to facilitate the identification of positive COVID-19 cases. The method currently used as the gold standard for detecting the virus is the Reverse Transcription Polymerase Chain Reaction (RT-PCR) test. Due to the high false-negative rate of this test and the delays in obtaining test results, alternative solutions are sought. This study was conducted to investigate the contribution of machine learning and image processing to the rapid and accurate detection of COVID-19 from two of the most widely used medical imaging modes, chest X-ray and CT images. The main purpose of this study is to support early diagnosis and treatment so as to end the coronavirus epidemic as soon as possible. Another primary aim of the study is to provide support, through smart learning methods and image classification models, to the medical professionals who are worn out and working under intense stress during COVID-19.
The proposed approach was applied to three different public COVID-19 data sets and consists of five basic steps: data set acquisition, pre-processing, feature extraction, dimension reduction, and classification. Each stage has its own sub-operations. The proposed model achieves considerable COVID-19 detection performance for dataset-1 (CT), dataset-2 (X-ray), and dataset-3 (CT), with accuracies of 89.41%, 99.02%, and 98.11%, respectively. On the other hand, in the X-ray data set, an accuracy of 85.96% was obtained for the three-class problem of COVID-19 (+), COVID-19 (-), and pneumonia not caused by COVID-19. As a result of the study, it has been shown that COVID-19 can be detected with a high success rate in less than one minute with image processing and classical learning methods. In light of these findings, it is possible to say that the proposed system will help radiologists in their decisions, will be useful in the early diagnosis of the virus, and can distinguish pneumonia caused by the COVID-19 virus from pneumonia caused by other diseases.

Introduction

Coronavirus has spread very rapidly around the world since December 2019. The World Health Organization (WHO) stated on January 30, 2020, that COVID-19 had caused a pandemic. According to WHO (as of 17 December 2020), 72,196,732 COVID-19 cases were detected worldwide; 1,630,521 of these cases resulted in death. The vast majority of cases occurred in the Americas (30,925,241 cases) as of December 17, 2020, followed by Europe (22,603,335) and Southeast Asia (11,468,106) [1]. The most prominent symptoms of COVID-19 are fever and cough. Shortness of breath, muscle or body aches, headache, loss of taste or smell, sore throat, and diarrhea are other common symptoms [2]. As can be seen, this virus can manifest itself with many symptoms, and the symptoms it shows are among the common effects that can occur in daily life due to other diseases.
This situation makes it difficult to distinguish coronavirus from other diseases. A medical professional takes the following steps to diagnose COVID-19:

1. It is questioned whether the patient has been in contact with infected individuals and whether they comply with the mask, distance, and hygiene rules.
2. The patient's symptoms are analyzed (fever, cough, shortness of breath, diarrhea, etc.).
3. An RT-PCR test is applied.
4. Radiological imaging is applied. In the early stages, chest radiography is used. In cases where chest radiography is insufficient, CT and ultrasonography are applied.
5. Blood analysis is done and the findings are evaluated.

As a result of all these procedures, whether the patient has COVID-19 is determined by the medical specialist. The real-time reverse transcriptase-polymerase chain reaction (RT-PCR) test, performed on a throat swab, detects SARS-CoV-2, the virus that causes COVID-19, and is believed to be highly specific. However, this test has low sensitivity, especially in the early stages of the disease. The sensitivity rate of this test is around 60%-70% according to the studies [3-5]. This means that only 60 of 100 COVID-19-positive patients who have a PCR test will produce true-positive results. For this reason, the diagnosis needs to be supported by different methods such as blood analysis and medical imaging. At this stage, using radiological imaging methods together with computer-aided systems will greatly benefit medical professionals. Computed tomography (CT) and X-ray imaging techniques play a vital role in the early diagnosis and treatment of this disease [6-9]. Even if false-negative results are obtained due to the low RT-PCR sensitivity of 60%-70%, the disease can be detected from the radiological images of the patients [10,11]. In some studies, it has been stated that CT is a sensitive method for detecting COVID-19 pneumonia and can be considered an auxiliary screening tool alongside RT-PCR [12]. It should be made clear here that radiological imaging is not a COVID-19 test.
It would be more correct to consider radiological imaging as an auxiliary element of the testing process. In other words, PCR test results with low accuracy are supported by radiological imaging. Blood analysis similarly supports the PCR test results with the findings it gives. CT findings are observed for a long time after the onset of symptoms, and patients usually have a normal CT in the first 0-2 days [6]. In a study on lung CT, it was stated that the most important lung disease symptoms of patients with COVID-19 pneumonia were observed ten days after the onset of the disease [13]. One of the most important findings of the disease is pneumonia in the lungs. For these reasons, chest radiography, computed tomography, or ultrasonography are requested for patients. The first preferred method is chest radiography. However, the sensitivity of chest radiography is low (30%-60%) [14]. Fig. 1 shows the image of a COVID-19 (+) patient included in the X-ray data set used in our study. One of the most obvious signs of COVID-19 is ground-glass opacities (GGO); in Fig. 1, the image of GGO is seen. Chest radiography can be misleading in the early stages of COVID-19 [15]. Fig. 2 shows a comparison of a chest radiograph and a CT image: GGO in the right lower lobe, indicated by red arrows on CT, is not visible on the chest radiograph taken one hour before the CT study [15]. Another CT finding is irregular consolidation [16]. This finding is more common in CT scans taken after the onset of symptoms. One of the most common findings in COVID-19 cases on CT images is the crazy-paving pattern [13]. Fig. 3 shows the percentages of the findings of GGO, consolidation, and crazy-paving pattern on CT images between days 0-14. It can be seen from Fig. 3 that GGO occurs at the highest rate in the early stages of the disease and consolidation is the most prominent finding between days 9-14 of the disease. Fig.
4 shows the CT images of a 47-year-old female patient diagnosed with COVID-19 on different days between days 0-14. It can be seen from Fig. 4 that GGO is intense in the early stages of the disease, the crazy-paving pattern and consolidation increase in the following days, and the symptoms gradually disappear. Looking at Fig. 4, it is possible to see that the COVID-19 virus spread rapidly to most of the lungs within a few days. The only solution to this situation is early diagnosis and treatment. This study aims to create a computer-aided method that can support the early diagnosis and treatment of the COVID-19 virus. The use of computer-aided systems in the detection and follow-up of this disease will accelerate diagnosis and treatment. In the last 10 years, machine learning methods based on computer algorithms have been used frequently in automatic diagnosis processes in the medical field. Computer-aided automatic diagnosis systems help clinicians in their decisions and shorten the diagnosis time. In particular, machine learning methods are used in many areas such as breast cancer detection [17,18], brain tumor detection [19], and heart disease [20]. Since the spread of the COVID-19 virus is very fast, the diagnosis process should be completed very quickly. However, the number of experts on the subject is limited, and the rate of spread of the disease has caused the health system to collapse in certain countries [21,22]. Therefore, machine learning and image processing methods can contribute to the solution of these problems: the disease can be diagnosed in the shortest time with high accuracy rates, and this can be done in less than a minute. The rapid rise of the COVID-19 outbreak, the limited number of radiologists, the difficulty of providing specialist clinicians to each hospital, the insufficient number of available RT-PCR test kits, test costs, and the waiting time for test results show how important it is to use AI approaches. Fig.
5 shows how regions around the world are affected by COVID-19. When the figure is examined, it is seen that there is a continuous increase in the number of cases over 12 months and that the disease is most intense in the Americas. From the graph, it is possible to see that the number of cases, which was 400,000 in October in the American region, reached 600,000 in November. This situation shows how high the spread rate of the virus is and reveals the importance of our work. Table 1 shows the countries with the highest numbers of cases and deaths since December 2019. With more than 16 million cases and 298,594 deaths, the United States is seen as the place most affected by COVID-19. Another remarkable point in the table is that there are approximately 9.9 million cases in India, while in Brazil this number is around 6.9 million. Although India has 3 million more cases than Brazil, its death rate is lower. This stands out as an issue that needs to be addressed. From here, it can be deduced that India follows a more effective way in diagnosis and treatment, or that Brazil is not able to manage the process well. As can be seen from both Fig. 5 and Table 1, the spread rate of the virus is very high, and the death rate is around 2% of the total number of cases. This is also a considerable rate. For this reason, it is deemed necessary to implement all processes that allow early diagnosis and treatment. The main motivation of this study is to produce an effective solution with high accuracy for different imaging techniques. It is possible to summarize the contributions of the study to the literature as follows:

1. A new approach has been proposed that can enable early diagnosis and treatment of COVID-19 with machine learning and image processing methods.
2. Successful results have been obtained with the same approach in both CT and X-ray imaging modes.
3. One of the novel aspects of the study is that it is applied to different data sets.
The results obtained for three data sets with different characteristics show that the study is generalizable. In other words, the proposed approach is not data set dependent and can also be applied to a different data set.
4. A decision support system is created that can support medical professionals in their work and help them in their decisions.
5. The most dangerous aspect of the COVID-19 virus is that it can be transmitted from person to person very quickly and, after it is transmitted, it can affect the body very quickly (see Fig. 4). Therefore, a study that supports early diagnosis with the help of CT and X-ray images will contribute significantly to the solution of the problem. With this study, it has been shown that classical learning methods can produce results as successful as those of deep learning methods.
6. While most of the studies in the literature use very small data sets, three different data sets with different sizes and qualities are used in this study. Also, not only binary classification but also multi-class classification was carried out.

The rest of this paper is organized as follows. Section 2 introduces the related works, while Section 3 describes the materials and methods used for the proposed system. Section 4 presents the experimental results of the study, and Section 5 presents the discussion. Finally, the conclusion and suggestions for future work are presented in Section 6.

Related works

Since the COVID-19 virus began affecting the world in December 2019, academic studies have started. In the past year, many studies have been carried out, especially on the detection of COVID-19 with computer-aided systems. Most of the studies have been carried out using deep learning approaches that have become popular in the last few years. The first of these is the study of Hemdan et al. [23], who created a deep learning model called COVIDX-Net using X-ray images to diagnose COVID-19.
In that study, they comparatively analyzed different deep learning models such as VGG19, DenseNet201, ResNetV2, InceptionV3, Inception-ResNetV2, Xception, and MobileNetV2. The results obtained showed that the VGG19 and DenseNet201 models achieved the best performance with 90% success. In the study of Barstugan et al., where classical learning methods are preferred, a new approach was proposed for the classification of COVID-19 [24]. CT images were used in the study, and features were extracted with the help of patches of different sizes. They obtained an accuracy rate of 98.77% by classifying the obtained features with an SVM classifier. Wang and Wong designed a custom deep learning-based framework called COVID-Net [25]. They applied a deep learning method based on 1 × 1 convolutions to data sets consisting of chest X-ray images and achieved an accuracy of 83.5%. In another study using convolutional neural networks, Maghdid et al. proposed a system for the automatic diagnosis of COVID-19 pneumonia; they achieved an accuracy rate of 94.00% by making changes to existing architectures [26]. Ghoshal and Tucker conducted another study using convolutional neural networks to detect COVID-19 from X-ray images [27]. They resized the X-ray images used in their work to 512 × 512, and the approach they recommended achieved 92.86% accuracy. Hall et al. [28] proposed a model with a 10-fold cross-validation strategy on X-ray images using the VGG16 deep learning model. In their study, they resized the images of data set [29] to 224 × 224. The method they recommended achieved an accuracy rate of 96.1%. Farooq and Hafeez [30] used a pretrained ResNet-50 architecture for the diagnosis of COVID-19 pneumonia. To increase generalization performance, they used the data set after preprocessing it with vertical flipping, random rotation, and different data augmentation methods.
In that study, the success of COVID-19 classification was 96.23%. In the study conducted by Abbas et al. [31], COVID-19 X-ray images were classified as positive or negative. The deep transfer learning method was used, and the principal component analysis (PCA) method was used to reduce the high number of features. The success of the system was measured with the accuracy, sensitivity, and specificity metrics, and success rates of 95.12%, 97.91%, and 91.87%, respectively, were obtained. In their study, Singh et al. [32] classified patients as infected with COVID-19 or not, using CT images and a CNN based on multi-objective differential evolution (MODE). The sensitivity value of the system is around 95%. Also, the sensitivity value obtained with the system was compared with the CNN, ANFIS, and ANN methods, and better results were obtained. Kassani et al. used both X-ray and CT images in their study, in which modeling was carried out with many deep learning methods. According to the results, a 99% success rate was achieved by extracting features with the DenseNet121 architecture and training a Bagging Tree classifier. In addition to the studies above, whose details are given, Table 2 lists the data of further studies. When the table is examined, it is possible to see the methods used in the studies, the imaging formats, and the sizes and types of the data sets. Of course, studies on COVID-19 are not limited to these; apart from the studies included here, there are many other successful studies. When Table 2 is examined, it is seen that most of the studies were carried out with deep learning methods. Similarly, it is seen that the studies in the table work on a single data set. From this point of view, it would not be wrong to say that our study, by operating on three different data sets, adds novelty to existing studies.
Besides, the successful results obtained in both CT and X-ray imaging modes, despite their different structures, can be considered among the novel aspects of our study.

Materials and methods

Three different data sets were used in our study. The first data set consists of CT images [35]. The data set includes 349 COVID-19 (+) images from 216 patients and 397 COVID-19 (-) images. Age information is available for 169 people and gender information for 137 people. The numbers of positive cases by age group are shown in Fig. 6. While 37% of those with COVID-19 (+) are female, 63% are male. Examples of positive and negative COVID-19 images in the data set are shown in Fig. 7. Another data set consists of X-ray images. This data set includes 125 images that are positive for COVID-19; 43 of these images belong to females and 82 to males. Besides, there are 500 clean lung X-ray images without any findings and 500 images with pneumonia that is not caused by COVID-19. This data set was compiled from [29,49] in study [33]. As stated in study [33], there is no detailed information about the data set. In the data set, 26 of the positive cases have age information, and their average age is 55. Sample images of this data set are shown in Fig. 8. The third and final data set consists of CT images and has considerably more samples than the other two data sets [34]. This data set, created in study [34], is a public COVID-19 CT scan data set that includes 1252 CT scans that are COVID-19 (+) and 1230 CT scans that are COVID-19 (-). The data set was collected from patients in Sao Paulo, Brazil. The images in the data set belong to a total of 60 people, 32 of whom are male and 28 female [34]. While 30 of these people are COVID-19 (+), the remaining 30 are negative. Apart from this, there is no other information about the data set in the study. Sample COVID-19 positive and negative images of the data set are shown in Fig. 9.
Details of the three data sets used in our study are shown in Table 3, including the gender distributions, imaging types, numbers of positive and negative patients, and the countries where the data sets were collected.

Pre-processing

There are three different data sets in our study, and the images in each were obtained from different sources. For example, while the 127 COVID-19 (+) images in dataset-2 were obtained from the study of Cohen JP [29], the 500 healthy images and the 500 images with pneumonia but not COVID-19 were obtained from the data set created by Wang et al. [49]. As a result, the data sets consist of images of different sizes obtained from different sources; some of the images are in jpeg format, while others are in png format. The first step of our preprocessing stage is resizing. The dimensions of the three data sets differ from each other, and the sizes of the images within each data set also differ. For this reason, each data set was evaluated within itself and resized. The process here is to normalize the images according to their height and width. In the normalization process, the available images are first categorized according to their width and height. Then the frequency of each width and height value is obtained. The final values were obtained as the weighted average of the height and width values according to these frequencies. The reason for using the frequency of each size is that, if certain image sizes occur more often, their higher weight pulls the resizing target closer to those dimensions. The Histogram of Oriented Gradients (HOG) and Local Binary Patterns (LBP) methods perform feature extraction by shifting over the image by cell size.
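The frequency-weighted resizing described above can be sketched in a few lines. `weighted_target_size` is a hypothetical helper (the paper's MATLAB code is not given), and the default cell size of 15 matches the HOG/LBP setting reported later in the study.

```python
from collections import Counter

def weighted_target_size(sizes, cell=15):
    """Frequency-weighted mean (height, width) of the images, snapped down
    to a multiple of the feature-extractor cell size; illustrative sketch,
    not the authors' exact implementation."""
    counts = Counter(sizes)
    total = sum(counts.values())
    mean_h = sum(h * n for (h, _), n in counts.items()) / total
    mean_w = sum(w * n for (_, w), n in counts.items()) / total
    # keep only multiples of the cell size so HOG/LBP cells tile exactly
    return (int(mean_h) // cell * cell, int(mean_w) // cell * cell)
```

For example, three 300 x 400 images and one 600 x 800 image average to 375 x 500, which snaps to 375 x 495 with a cell size of 15.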
For this reason, values divisible by the cell size were selected when resizing the images. The height and width values obtained after resizing the data sets are shown in Table 4 below. The next preprocessing step is the gray-level conversion of all images. This step was necessary so that the computer-aided application could run on images in similar formats; it did not reduce success and significantly reduced processing cost and time. In the preprocessing phase, image sharpening was also applied to increase the clarity of the images and to improve success.

Feature extraction and dimension reduction

HOG, GLCM, SIFT, and LBP methods were used in the feature extraction phase of our study. Among these four methods, HOG and LBP provided the most successful results; for this reason, only the results of the HOG and LBP methods are given in the remainder of the study. The Histogram of Oriented Gradients (HOG) uses the gradient values and orientation angles of pixels in the feature extraction process. Local histograms are obtained from the gradient values and orientations, and the image is represented in this way [50]. To determine the HOG features, the edges are first determined (Eq. (1)); the gradient magnitude and gradient orientation angle are then calculated as in Eq. (2): G = sqrt(Gx^2 + Gy^2), α = arctan(Gy / Gx), where Gx and Gy are the edge responses obtained by applying horizontal and vertical Sobel filters, G is the gradient magnitude, and α is the gradient orientation angle. While extracting the HOG features, the best cell size was determined to be 15. This means that the window is divided into 15 × 15 cells in the x and y directions and descriptors are calculated for each cell. In this work, the UoCTTI variant of HOG was used: it computes the four-dimensional texture energy feature as well as directed and undirected gradients, but reduces the result to 31 dimensions [51].
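The Sobel-based gradient computation behind Eqs. (1) and (2) can be illustrated with a minimal NumPy sketch (a naive, unpadded convolution for clarity; the function name is ours, not the paper's):

```python
import numpy as np

def sobel_gradients(img):
    """Gradient magnitude G and orientation angle alpha (in degrees) via
    horizontal and vertical Sobel filters, on the valid interior pixels."""
    kx = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=float)
    ky = kx.T
    h, w = img.shape
    gx = np.zeros((h - 2, w - 2))
    gy = np.zeros((h - 2, w - 2))
    for i in range(h - 2):
        for j in range(w - 2):
            patch = img[i:i + 3, j:j + 3]
            gx[i, j] = (patch * kx).sum()
            gy[i, j] = (patch * ky).sum()
    G = np.hypot(gx, gy)                    # G = sqrt(Gx^2 + Gy^2)
    alpha = np.degrees(np.arctan2(gy, gx))  # orientation angle
    return G, alpha
```

On a vertical step edge the horizontal response dominates and the orientation is 0 degrees, as expected.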
As an example, for 400 × 300 images with cell size 15, the size of the HOG features is 26 × 20 × 31 (a 16120-element feature vector per image). Another feature extraction method used in the study is the Local Binary Pattern (LBP), a nonparametric method originally proposed as a texture pattern analysis technique [52]. The basic operation of this method is to establish binary relationships between the central and neighboring pixels. The method compares and labels the neighboring pixels against the central pixel value in a 3 × 3 frame: a neighboring pixel takes the value 1 if it is greater than or equal to the central pixel, and 0 otherwise. Thus, an 8-bit code is generated for each pixel in the LBP neighborhood, and an identifier with the LBP code is created over each image as shown in Fig. 10. These operations are calculated using Eq. (3): LBP_{P,R}(x_c) = Σ_{p=0}^{P−1} s(x_p − x_c) · 2^p, with s(z) = 1 if z ≥ 0 and 0 otherwise, where x_c is the central pixel, x_p are the neighbors of the central pixel, R is the distance of the neighbors from the central pixel (the radius), and P is the number of neighbors considered. In this study, PCA [53] is applied for feature selection and the 500 eigenvectors with the highest eigenvalues are selected. PCA is a method of expressing the variance structure of p variables as a smaller number of linear components of these variables [54]. In other words, PCA is a statistical technique used to reduce the size of the data by selecting the most important features that provide maximum information about the data set. The PCA method offers advantages in removing the correlation between features, contributing to performance, and reducing overfitting; its disadvantages are the need to standardize the data, the loss of information if the wrong number of principal components is selected, and the reduced interpretability of the resulting components. PCA uses the eigenvalues and eigenvectors of the covariance matrix to find the linear components of the p variables in the data matrix.
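The 3 × 3 LBP coding step can be sketched in pure Python. The clockwise neighbor ordering used here is one common convention (implementations differ on the starting neighbor), and `lbp_code` is a hypothetical helper, not the authors' implementation:

```python
def lbp_code(patch):
    """8-bit LBP code of the central pixel of a 3x3 patch: each neighbor
    contributes 1 if it is >= the centre, 0 otherwise (P = 8, R = 1)."""
    c = patch[1][1]
    # clockwise neighbor order, starting at the top-left corner
    order = [(0, 0), (0, 1), (0, 2), (1, 2), (2, 2), (2, 1), (2, 0), (1, 0)]
    code = 0
    for p, (i, j) in enumerate(order):
        s = 1 if patch[i][j] >= c else 0  # the thresholding function s(z)
        code += s * (2 ** p)
    return code
```

A patch whose top row exceeds the centre yields code 1 + 2 + 4 = 7; a uniform patch yields 255, since every neighbor equals the centre.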
After the feature extraction and selection steps, the classification process was started.

Classification phase

At this stage of the study, k-NN, SVM, Bag of Tree, and Kernel Extreme Learning Machine (K-ELM) methods were used to train on the selected features and perform the classification. These methods are among the classical learning methods and can produce very successful results in both two-class and multi-class learning. With the increasing popularity of deep learning methods in recent years, the rate of use of classical learning methods has decreased. However, this does not mean that classical learning methods produce unsuccessful results; our study shows that they can produce results as successful as deep learning methods, with low complexity and rapid results. The first method we used is the k-NN method, which is used in many different areas and produces simple but highly successful results. The k-NN method is a nonparametric classification method [55] frequently used in signal and image processing applications [56-58]. k-NN is based on classifying objects according to their nearest examples in the feature space [55]. The algorithm is easy to use and apply: the training process consists solely of storing the feature vectors and labels of the training images. In the classification process, the unlabeled query point is simply assigned the label of its k nearest neighbors; typically, the object is classified by majority vote over the labels of its k nearest neighbors, and if k = 1, the object is assigned the class of the object nearest to it. Different metrics are used to determine the distances of the neighbors; the most common are the Manhattan, Euclidean, and Minkowski metrics. Let X = (x_1, x_2, ..., x_n) and Y = (y_1, y_2, ..., y_n) be data points, where n is the dimension of the points. The Minkowski distance between them is calculated as D(X, Y) = (Σ_{i=1}^{n} |x_i − y_i|^p)^{1/p}. In this formula, we can change the distance metric by changing the value of p; for this reason, the Minkowski distance is also called the L_p norm. If p = 1, the metric is the Manhattan distance, and if p = 2, it is the Euclidean distance. The most important disadvantage of the k-NN method is the choice of the initial value of k. Besides, the slowness of the algorithm, its high computational cost, and its high memory requirement are other disadvantages. Its advantages are that it is simple to apply, can be used for both classification and regression problems, and supports not only binary but also multi-class classification. Another classification method used in the study is SVM, one of the most commonly used methods in both image processing and medical image processing [59]. Although the foundations of the SVM method date back to the 1960s, it reached its current state in 1995 [60]. We can define SVM as a vector-based classification method that finds a hyperplane between two classes so that the data of each class is at the maximum distance from the plane [61]. It is effective in high-dimensional spaces, including cases where the number of dimensions is higher than the number of samples. The ability to use different kernel functions and the efficient use of memory are other aspects where it is advantageous; its disadvantages include lower performance on large or noisy data sets and high time complexity. Various studies have been done to overcome these disadvantages of SVM [62,63]. The third classification method we use is the Bag of Tree method, which we can define as an improved decision tree method.
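The Minkowski distance and the k-NN majority-vote rule described above can be sketched as follows (hypothetical helper names, not the paper's code):

```python
from collections import Counter

def minkowski(x, y, p):
    """L_p (Minkowski) distance; p = 1 gives Manhattan, p = 2 Euclidean."""
    return sum(abs(a - b) ** p for a, b in zip(x, y)) ** (1.0 / p)

def knn_predict(X, y, q, k=1, p=2):
    """Assign the query q the majority label of its k nearest neighbors."""
    nearest = sorted(range(len(X)), key=lambda i: minkowski(X[i], q, p))[:k]
    return Counter(y[i] for i in nearest).most_common(1)[0][0]
```

For the points (0, 0) and (3, 4), p = 2 gives the Euclidean distance 5 and p = 1 the Manhattan distance 7, illustrating how the single parameter p selects the metric.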
The Bag of Tree method uses a group of decision trees for classification or regression. A more effective method is created by using weak decision trees together to form an ensemble, i.e. bagging. Each tree in the ensemble is grown on a bootstrap copy drawn independently from the input data; instances not included in a copy are counted as out-of-bag. The Bag of Tree method is closely related to the random forest algorithm [64] and is based on the ensemble learning technique. Because of this, it reduces the overfitting problem and the variance of decision trees, which increases accuracy. It also works well with both categorical and continuous variables and can be used for both classification and regression problems; due to these advantages, it is frequently used in image processing and signal processing [65,66]. Its disadvantages are very high computational complexity and a long training time. The extreme learning machine (ELM) is a learning algorithm for single-hidden-layer feed-forward neural networks (SLFNs) [67,68]. In this method, the input weights and biases of the ELM are selected randomly and the output weights are determined analytically. In 2006, Huang et al. compared the performance of SVM, ELM, and back-propagation-based SLFN methods in terms of training time and accuracy [69]. The weight and bias values in the input layer are determined randomly, independent of the data, and the output weights are then calculated analytically as shown in Eq. (4): β = H†T, where H is the hidden-layer output matrix with entries G(a_j, b_j, x_i) for randomly generated input weights a_j and biases b_j. In our study, the sigmoid activation function was used due to its widespread use in the literature. Sigmoid function: G(a, b, x) = 1 / (1 + exp(−(a · x + b))). Here, from the input layer to the hidden layer, x is an input sample, a is the weight value and b is the bias value; the {a, b} pair is randomly generated. ELM minimizes both the training error and the norm of the output weights [67,69].
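The ELM training procedure, a random hidden layer with output weights solved analytically via the Moore-Penrose pseudo-inverse, can be sketched with NumPy on a toy two-class problem (an illustrative sketch with our own helper names, not the authors' MATLAB code):

```python
import numpy as np

rng = np.random.default_rng(0)

def elm_fit(X, T, hidden=20):
    """ELM sketch: random input weights and biases, sigmoid hidden layer,
    analytic output weights beta = pinv(H) @ T (least-squares solution)."""
    a = rng.normal(size=(X.shape[1], hidden))  # random input weights
    b = rng.normal(size=hidden)                # random biases
    H = 1.0 / (1.0 + np.exp(-(X @ a + b)))     # hidden-layer output matrix
    beta = np.linalg.pinv(H) @ T               # Moore-Penrose solution
    return a, b, beta

def elm_predict(X, a, b, beta):
    H = 1.0 / (1.0 + np.exp(-(X @ a + b)))
    return H @ beta

# tiny two-class toy problem with -1/+1 targets
X = np.array([[0.0], [0.5], [1.0], [5.0], [5.5], [6.0]])
T = np.array([-1.0, -1.0, -1.0, 1.0, 1.0, 1.0])
a, b, beta = elm_fit(X, T)
pred = elm_predict(X, a, b, beta)
```

No iterative training occurs: the only fitted quantity is beta, obtained in one least-squares solve, which is why ELM training is fast.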
Specifically, ELM minimizes ∥Hβ − T∥² and ∥β∥, where H is the hidden-layer output matrix and T is the tag (target) matrix; the minimum-norm least-squares solution β = H†T, with H† the Moore-Penrose generalized inverse of H, is used instead of standard optimization methods in the application phase of ELM [70]. A regularization coefficient C is included in the optimization procedure to increase the robustness and generalization capability of the ELM. Therefore, given a kernel K with kernel matrix Ω (Ω_ij = K(x_i, x_j)), the weight set is learned as β = (I/C + Ω)⁻¹T. The system was modeled with linear, polynomial, and radial basis function (RBF) kernels; since the most successful results were obtained with the radial basis function, the RBF kernel was used in the study. The classification stage is the last stage of our work, and 10-fold cross-validation was used in the classification process. The parameter values of all methods used in the study, together with their purposes, are shown in Table 5. The flow diagram of our work is shown in Fig. 11. When the figure is examined, it is possible to see a structure consisting of 5 basic steps, each with sub-steps. As can be seen from the flow chart, first the images of the three data sets are acquired. Then the pre-processing stage begins, in which resizing of images, image transformations, and image sharpening are performed. The next step after pre-processing is the feature extraction phase. There are many feature extraction methods in the literature; in this study we used the GLCM, HOG, LBP, and SIFT methods and, according to the results obtained, chose the HOG and LBP methods with the highest success. A large amount of data was obtained after the feature extraction step, so a dimension reduction step was carried out using the PCA method. After
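A minimal NumPy sketch of the regularized kernel solution, assuming the standard kernel-ELM closed form β = (I/C + Ω)⁻¹T with an RBF kernel (the paper's own equation is not reproduced in this extraction, so this follows Huang et al.'s usual formulation; helper names are ours):

```python
import numpy as np

def rbf_kernel(A, B, gamma=1.0):
    """Radial basis function kernel K(a, b) = exp(-gamma * ||a - b||^2)."""
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(axis=-1)
    return np.exp(-gamma * d2)

def kelm_fit(X, T, C=100.0, gamma=1.0):
    """Kernel-ELM output weights beta = (I/C + Omega)^-1 T, where Omega is
    the training kernel matrix and C the regularization coefficient."""
    Omega = rbf_kernel(X, X, gamma)
    return np.linalg.solve(np.eye(len(X)) / C + Omega, T)

def kelm_predict(Xq, X, beta, gamma=1.0):
    """Prediction f(x) = [K(x, x_1), ..., K(x, x_N)] beta."""
    return rbf_kernel(Xq, X, gamma) @ beta

# same toy two-class problem with -1/+1 targets
X = np.array([[0.0], [0.5], [1.0], [5.0], [5.5], [6.0]])
T = np.array([-1.0, -1.0, -1.0, 1.0, 1.0, 1.0])
beta = kelm_fit(X, T)
pred = kelm_predict(X, X, beta)
```

Unlike the basic ELM, no random hidden layer is needed: the kernel matrix plays the role of H·Hᵀ, and increasing C moves the solution toward exact interpolation of the targets.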
this stage, the classification stage, which is the last stage of our study, began. Two different classification processes were carried out. The first is the binary classification that distinguishes COVID-19 (+) from COVID-19 (-); the other is the multi-class classification in which the COVID-19 (+), No Findings, and Pneumonia but not COVID-19 classes are distinguished from each other. In the classification stage, the 10-fold cross-validation method was used to model the data, and five different metrics, namely Accuracy, Sensitivity, Specificity, Positive Predictive Value, and Negative Predictive Value, were used to evaluate the results.

Results

In the experimental studies, a computer with an i5 processor, a GT 730 4 GB graphics card, and 16 GB of RAM was used, and the MATLAB platform was used to realize all the stages of the flow diagram shown in Fig. 11. Accuracy, Sensitivity, Specificity, Positive Predictive Value (PPV), and Negative Predictive Value (NPV) metrics were used to evaluate the obtained classification results; the formulas for the calculation of these metrics are shown in Eq. (9). Positive predictive value (PPV) and negative predictive value (NPV) can be considered the clinical significance of a test; the main difference between PPV/NPV and sensitivity/specificity is that they incorporate prevalence [71]. Sensitivity is the percentage of true positives, specificity is the percentage of true negatives, and accuracy is the rate at which all those with and without the disease are correctly detected. As stated earlier, 3 different data sets were used in our study. The classification was carried out by dividing these data sets into training and test sets with the 10-fold cross-validation method. The binary classification process was carried out on all data sets.
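All five evaluation metrics reduce to simple ratios over the confusion-matrix counts (TP, FP, TN, FN). A small sketch of the standard formulas behind Eq. (9), with a helper name of our own choosing:

```python
def diagnostic_metrics(tp, fp, tn, fn):
    """Standard diagnostic metrics computed from confusion-matrix counts:
    accuracy, sensitivity (true-positive rate), specificity (true-negative
    rate), positive and negative predictive values."""
    return {
        "accuracy": (tp + tn) / (tp + tn + fp + fn),
        "sensitivity": tp / (tp + fn),
        "specificity": tn / (tn + fp),
        "ppv": tp / (tp + fp),
        "npv": tn / (tn + fn),
    }
```

Note how PPV and NPV depend on the class balance of the test set (the prevalence), whereas sensitivity and specificity are computed within the positive and negative groups respectively, which is exactly the distinction the text draws.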
Also, a multi-class classification was carried out on data set 2, which consists of X-ray images. Table 6 shows the success percentages obtained for data set 1, which has the fewest images of the three data sets and consists of CT images. In the table, the highest rates obtained for each metric are marked in bold. It can be seen from the table that the most successful results are obtained with LBP feature extraction modeled with a k-NN classifier: the ability to distinguish the patients (Sensitivity) is 86.53%, the Specificity is 91.94%, and the rate of correctly detecting all patients and non-patients (Accuracy) is 89.41%. Data set 2 is the data set consisting of X-ray images and includes three classes: COVID-19 (+), No Findings, and Pneumonia but not COVID-19. The success rates obtained for this three-class data set can be seen in Table 7. Even though there are three classes, it is noteworthy that the accuracy rate is around 85.96%. Another remarkable point is that the Sensitivity (94.40%) and Specificity (100%) values in the three-class X-ray setting are considerably higher than those of the binary classification on data set 1. From the table, it can be said that, considering both healthy and sick patients in general, the SVM method produces more successful results than the other methods in the multi-class classification of X-ray images. Binary classification was also performed on the X-ray images of data set 2: first between the COVID-19 (+) and No Findings classes, then between the Pneumonia but not COVID-19 class and the COVID-19 (+) class, and finally with the No Findings and non-COVID-19 Pneumonia classes combined into a single class against the COVID-19 (+) class.
The results obtained are shown in Tables 8-10, respectively. When Table 8 is examined, it is seen that the highest accuracy rate is obtained when modeling with the HOG feature extraction method and the K-ELM classification method: a high accuracy rate of 98.88% was obtained in classifying COVID-19 (+) patients against those without any findings in X-ray images. Besides, the highest rates obtained for Sensitivity, Specificity, PPV, and NPV were 96%, 100%, 100%, and 99%, respectively. Table 9 shows the results of comparing those with COVID-19 and those with non-COVID-19 pneumonia; a 98.56% success rate was obtained with LBP and SVM modeling. Comparing Tables 8 and 9, the model detected those who showed no findings at a rate of 96%, while non-COVID-19 pneumonia was detected at a rate of 93.60%; the highest accuracy rates obtained for Tables 8 and 9 (98.88% and 98.56%) are close to each other. In addition, it can be seen from Table 9 that, although there are hazy images in both classes, the ability to distinguish pneumonia patients as COVID-19 or non-COVID is quite high, which indicates that the proposed model is very successful. In Table 10, a single class was created by combining the two non-COVID classes of X-ray images (No Findings and non-COVID pneumonia), and binary classification was performed; here, an accuracy rate of more than 99% was achieved. The results of applying the proposed approach to our third and last data set are shown in Table 11. Considering the results obtained on this data set, the highest success rates are obtained with k-NN, a simple but effective method. The number of CT images in this data set is 2482, which indicates the high generalization performance of the proposed approach. The implementation times of the proposed approach on all data sets are shown in Table 12.
When the table is examined, it is seen that feature extraction with the HOG method is faster than with the LBP method, and that, among the classification methods, k-NN produces much faster results than the others. One of the most effective points of our study can be seen in Table 12: the fact that COVID-19 can be detected in less than 1 min can be regarded as an indicator of success on its own. Table 12 also shows that the slowest classification method is the Bag of Tree method. The confusion matrices of the classification results given in the tables above are shown in Table 13, and false-positive and false-negative examples from the confusion matrices are shown in Fig. 12. Fig. 12a shows a CT image that is not actually COVID-19 but was incorrectly diagnosed as COVID-19; the cause of the error is thought to be that the appearance in the left lobe resembles the ground-glass opacity (GGO) seen in COVID-19. In Fig. 12b, a COVID-19 (+) CT image is classified as COVID-19 (-); here, as the CT image was taken at an early stage of the disease, there is no obvious finding in the lungs. There is a small GGO at the bottom of the right lobe, but the system missed it. Similarly, Fig. 12c is thought to be an image taken in the early stages of the disease; it is very similar to the COVID-19 (-) images and was confused for this reason. Fig. 12d, e and f show examples of faulty classifications on X-ray images. Both images e and f show pneumonia, but one is caused by COVID-19 and the other by another disease; the system misdiagnosed these two images. In Fig. 12d, an image with no findings was diagnosed as COVID-19; here, too, the reason appears to be related to the imaging, since structures similar to GGO can be seen in the image. Fig. 13 was created to make the operations performed in our study more understandable. In Fig.
13, the process steps and the results of each step are displayed on a sample CT image from the data sets we use; the details will be clearer when zooming in on the images.

Discussions

The main purpose of this study is to establish a system that can support the diagnosis and treatment process by detecting the COVID-19 virus as soon as possible, and the results obtained show that there is sufficient evidence that this goal has been achieved. Table 14 shows the performance of this study alongside other studies in the literature that use one of the same three data sets. It can be seen clearly from Table 14 that almost all of the results obtained in our study are more successful than those of other studies. The only classification result with lower success is the multi-class study performed on data set 2, where the accuracy value was 85.96% in our study versus 87.02% in study [32]; however, as can be seen from the table, our study is ahead on all other metrics. Another important contribution of the study to the literature is the duration of diagnosis. Our study aims to provide support to radiology specialists by further reducing evaluation times that are prolonged due to these processes. As we have stated in many parts of our study, one of our main goals is to provide mechanisms that can assist medical professionals in the diagnosis. COVID-19 has caused incredible levels of workload in the medical industry all over the world; the health sector of many countries has come to the point of collapse, and tents were built in hospital gardens to cope with the intensity. It is quite natural that there may be issues that radiologists overlook in such a stressful and tiring work tempo, and it is thought that our study will contribute in this sense as well.
Tables 6-11 show the results, in five different metrics, of classifying the features obtained with two different feature extractors using four different classifiers. From these tables alone, it is somewhat difficult to analyze the success of the classifiers in general; for this, the box plot shown in Fig. 14 was created using all the success rates in the tables mentioned. When Fig. 14 is examined, it is seen that the best results according to the Accuracy, Sensitivity, and NPV metrics are obtained with the k-NN and SVM methods. On the other hand, the Bag of Tree method is good only in the Specificity and PPV results and is the worst classifier on the other metrics. K-ELM performs well on all metrics; it would not be wrong to say that it is the most appropriate method in an overall evaluation, even though it does not have the highest success rate in any single one of the five metrics.

Conclusions

In this study, COVID-19 was detected in three different data sets with accuracy rates of 89.41%, 99.02%, and 98.11%. Two of the data sets consist of CT images, while one consists of X-ray images. In addition, an accuracy of 85.96% was achieved on the X-ray data set for the COVID-19 (+), No Findings, and Pneumonia but not COVID-19 classes. Thus, the study has been shown to produce successful results in both CT and X-ray imaging modes. The fact that the study was applied to three data sets with different characteristics also shows its generalizability, which is one of its most important features. As a result of the study, it has been shown that COVID-19 can be detected with a high success rate in less than a minute using image processing and classical learning methods. A model that can classify COVID-19 patients with the help of chest CT and X-ray images is proposed in the study.
In the model implemented, the 10-fold cross-validation method was used to separate the training and test data; the training data were used to create the model, and the test data for the verification process. In the proposed model, different image processing techniques were applied to the CT and X-ray images. Afterward, by applying a large number of classifiers to the preprocessed images, it was shown that the system achieved higher success than the studies in the literature. It was observed that k-NN and SVM, which are among the classical learning methods, can detect COVID-19 (+) very successfully, while in the classification of both (+) and (-) data, the K-ELM method produced relatively better results than the other methods from a holistic perspective. In addition to these successful results, the implementation times of the system are also remarkably good: the model creation and implementation times are under 1 min, which shows that the system is highly applicable. Looking at the experimental results, it is understood that the proposed model performs faster and better than other models. Especially in a period when deep learning models have become widespread, it has been shown that very high success rates and very short implementation times can be achieved when the data sets are processed correctly with classical learning methods. COVID-19 is a highly contagious virus, so the primary way to stop it is to reduce the transmission rate, which can only be achieved with early diagnosis and treatment. Early diagnosis of the diseases caused by the COVID-19 virus is the common goal of both our study and the other studies in the literature. For this reason, a system with both a short runtime and a high success rate is indispensable, and the proposed system is exactly an example of this.
It would not be wrong to say that the most original aspect of this study is that it can achieve success rates of 99% in less than 1 min. Clinical information was not available for the individuals in the data sets used, so only image analysis was performed, assessing whether or not the patients had the COVID-19 virus. However, conditions such as advanced age and underlying diseases can change the course of the disease; therefore, it would not be wrong to consider the lack of clinical data on patients a limitation of our study. In our next study, we will make a more comprehensive evaluation by also including the clinical features and laboratory examination information of the cases. In addition, there is no information on which day of the disease the images in the existing data sets were taken, which is another limitation of the study: if the day of the disease on which each radiological image was taken were known, very different and useful evaluations could be made about the course of the disease. In our future studies, we aim to perform more detailed and varied analyses by including the day-of-illness information together with the radiological images of the patients. A further goal of our study is to create a comparative model that also includes deep learning methods, so that the performance and implementation times of both deep learning methods and classical learning methods can be analyzed.
Effect of agro-industry by-product on soil fertility, tree performances and fruit quality in pear (Pyrus communis L.)

Organic materials from agro-industry processes can be used in agriculture as a way to recycle materials that still maintain a high fertilizing value. The aim of the experiment was to evaluate the value of soil-applied apple juice by-product as a fertilizer for pear trees. A 3-year experiment was carried out in a mature pear orchard (cv Abbé Fétel grafted onto quince MC) in the Po valley (Italy), where the following treatments were compared: 1) unfertilized control; 2) mineral N fertilization (60 kg N ha−1 year−1, split in two spring applications); 3) apple juice by-product (1.3 t ha−1 year−1, equal to 60 kg N ha−1), fully supplied at petal drop; 4) apple juice by-product, at twice the rate of the previous treatment. Apple juice by-product soil decomposition accounted for 12% in the first 6 months. At the end of the 24-month assay, the decomposition accounted for 24% of total dry weight, corresponding to 28% of initial C and 36% of initial N. Soil nitrate-N concentration was increased by the mineral N fertilizer, while the application of apple juice by-product increased microbial carbon. Tree growth, yield and fruit quality were not affected by treatments, while mineral N fertilization raised leaf and fruit N concentration. In conclusion, under our conditions the use of apple juice by-product showed no negative effects on tree performances and fruit quality, with some advantages related to the recycling of organic wastes in agriculture.
Introduction

Soil organic matter (OM) plays an important role in the long-term preservation of soil fertility, due to the improvement of soil physical, chemical and biological properties [1]. However, most of the agricultural soils of the eastern part of the Po Valley of Italy show a low concentration of OM [2], related to the decreased availability of traditional organic fertilizers, the increasing specialization of farms toward fruit tree cultivation [3,4] and the increase of soil mineralization rates due to soil tillage and a warmer climate [5]. The high environmental impact of conventional farming practices has increased interest in strategies that help preserve soil OM, such as crop rotations [6], conservative soil tillage systems [7], green manure crops [8] and organic waste application [9]. The production of organic wastes (e.g. from municipal activities, farming and agro-industrial processes, animal manures, composted residues, etc.) is increasing worldwide; their disposal implies considerable social costs, may be responsible for detrimental environmental impacts, and represents a loss of valuable biomass and a source of nutrients for crops. The recycling of such biomass in agriculture (e.g. as soil amendments or fertilizers) could represent an interesting way to reduce disposal costs, recycle OM and supply mineral nutrients to the soil [10,11]. The use of by-products from orange processing as fertilizer has been shown to improve soil fertility in an orange orchard [12] and in durum wheat [13]. A mixture of exhausted olive-cake and poultry manure produced an organic fertilizer that improved soil fertility and yield in potato [14]. By-products from the apple processing industry are usually used for the production of pectin [15]; however, they can potentially be treated and used as organic fertilizers.
The aims of this experiment were to evaluate the effect of soil-applied by-products from the apple juice industry in a commercial pear orchard on 1) the dynamics of release of carbon (C) and nitrogen (N) under field conditions; 2) soil fertility; 3) tree growth, yield and nutritional status and 4) fruit quality.

Trees were spaced 3.8 m between rows and 0.9 m along rows (2924 trees ha⁻¹) and trained as spindle bush. Irrigation water was supplied daily during the vegetative season by drip irrigation, using 3.8 L h⁻¹ emitters, to return moisture lost through evapotranspiration. Tree rows were sprayed with herbicide, while the alleys were maintained with a grass cover, which was regularly mown 4-5 times per year. The following soil-applied treatments were compared in a randomized complete block design with six replicates: 1) unfertilized control; 2) mineral nitrogen (N) fertilization (60 kg N ha⁻¹ year⁻¹) split in two equal applications (50:50) at petal drop and 40 days after the first application; 3) by-product of apple juice production (AJBP) supplied at a rate of 1.3 t ha⁻¹ year⁻¹ (equal to 60 kg total N), at petal drop; 4) AJBP supplied at a rate of 2.6 t ha⁻¹ year⁻¹ (equal to 120 kg total N) at petal drop. In 2009 and 2010, the rate of N supplied per year was increased to 80 kg ha⁻¹ in treatments 2 and 3 and to 160 kg ha⁻¹ in treatment 4, to satisfy the increased tree N demand following a higher fruit set. Regarding phosphorus (P), potassium (K) and magnesium (Mg), each plot, including the control, was fertilized in order to reach the same amount per hectare and per year. Mineral fertilizer (urea) and AJBP were localized along the tree row and tilled into the top soil (10 cm).
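As a back-of-envelope check of the treatment rates, note that the stated equivalence (1.3 t ha⁻¹ of AJBP = 60 kg N ha⁻¹) implies an N concentration of roughly 4.6% in the by-product; this value is inferred from the rates above, not taken from the AJBP analysis in Table 2. A short sketch:

```python
# Inferred N concentration of AJBP from the stated application rates
# (an inference from the text, not a measured value from Table 2).
def n_supplied_kg_ha(ajbp_rate_t_ha, n_fraction):
    """N supplied (kg ha-1) from an AJBP rate (t ha-1) and an N mass fraction."""
    return ajbp_rate_t_ha * 1000.0 * n_fraction

# 1.3 t ha-1 of AJBP is stated to equal 60 kg N ha-1:
n_fraction = 60.0 / (1.3 * 1000.0)
print(round(n_fraction * 100, 1))             # ~4.6 % N
print(n_supplied_kg_ha(2.6, n_fraction))      # treatment 4: 120 kg N ha-1
```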
The AJBP, provided by ILSA SpA (Arzignano, Vicenza, Italy), was made from sludge generated by the apple juice industry, after screening, drying, grinding and homogenizing the sludge; the chemical characteristics of the AJBP used are summarized in Table 2.

Mineralization assay of AJBP

The study was conducted from 2009 to 2011, using the litter bag technique [16,17] to assess the decay dynamics of AJBP. Portions of non-woven fabric were cut and sewn into 18 cm × 18 cm bags; 4 g of apple juice by-product, corresponding to the amount (in g m⁻²) of AJBP applied in treatment 3, were placed into each bag. Thirty bags in total were prepared and placed on 16 April 2009 along the tree row into the soil, at 10 cm depth. Five bags, randomly chosen along the row, were collected at each sampling time: in 2009 on 16 May (+1 month), 16 July (+3 months) and 16 October (+6 months); in 2010 on 13 April (+12 months) and 14 October (+18 months); and in 2011 on 13 April (+24 months). Decomposing AJBP litter was cleaned, oven dried at 65 °C and milled (0.2 mm mesh) for chemical analysis. Concentrations of C and N were determined by a C/N elemental analyser (Carlo Erba, Milan, Italy).

Soil analysis and microbial C biomass

To evaluate the effect of the treatments on soil fertility, ammonium-N (NH4⁺-N), nitrate-N (NO3⁻-N), pH, soil OM, total N, and humic and fulvic acids (HA+FA) were measured throughout the experiment. From 2008 to 2010, soil cores were collected 4 times per year at 0-40 cm depth to monitor NH4⁺-N and NO3⁻-N concentrations before and 40 days after the first fertilization, in mid-July and in mid-October (3 and 6 months after the first fertilization, respectively). Soil NH4⁺-N and NO3⁻-N were determined by extracting 10 g of soil with a 2 mol L⁻¹ KCl solution (1:10 w/v) [18] and using an auto analyser (Auto Analyzer AA-3; Bran+Luebbe, Norderstadt, Germany). Soil samples collected in October were also analysed for pH, OM, total N [18] and (only in 2010) HA+FA [19].
Microbial C biomass was determined in soil samples taken from a depth of 5-15 cm on the same dates as the mineral-N sampling, using the substrate induced respiration method [20]. This involved sieving (2 mm mesh) 50 g of fresh soil, placing the screened soil in a 500 mL glass jar and allowing it to equilibrate at room temperature for at least 24 hours. The soil was then mixed with 200 mg of glucose and incubated at 22 °C for 3 hours. Carbon dioxide evolution was measured by an infrared gas analyser (EGM-4; PP system; Hitchin, UK) and converted into microbial C [20].

Tree performances and nutritional status

The effect of treatments on tree performances was evaluated by measuring, at the end of the growing season, the trunk cross-sectional area (TCSA) 15-20 cm above the grafting point; tree yield and fruit weight were also determined at commercial harvest. To assess the effect of treatments on tree nutritional status, leaves were sampled in summer from annual shoots, washed, oven dried at 65 °C for 72 h and milled (0.2 mm mesh). At harvest, two slices were taken from each of the 20 fruits of the sample, lyophilized and milled for analyses. Nitrogen was determined by the Kjeldahl method [21], while phosphorus was spectrophotometrically quantified at 700 nm after extraction by acid mineralization [22]. Finally, calcium (Ca), K, Mg, iron (Fe), manganese (Mn), zinc (Zn) and copper (Cu) were determined by atomic absorption spectrometry (SpectrAA-200, Varian, Mulgrave, Australia), after acid digestion by a microwave lab station (Ethos TC-Milestone, Bergamo, Italy) [23].
Fruit quality

At commercial harvest, a representative sample of fruits was collected from each plot and used to determine soluble sugars and organic acids [24] on lyophilized fruit flesh by high performance liquid chromatography (HPLC). Analyses of sugars were performed using a Jasco PU-1580 HPLC (Jasco Inc., Easton, MD, USA) with a Jasco RI-930 refractive index detector (Jasco Inc., Easton, MD, USA), using an Aminex HPX-87-C, 300 × 7.8 mm column (Bio-Rad, Hercules, CA, USA). The analysis was performed maintaining the column at 85 °C, using ultra pure water as the mobile phase at a flux of 0.6 mL min⁻¹, with the running time set at 25 min. Soluble sugars were identified and quantified by comparison of their retention times with those of standard solutions (Sigma-Aldrich Co. LLC., St. Louis, MO, USA) of known concentration. Soluble organic acids were measured by a Jasco PU-1580 HPLC (Jasco Inc., Easton, MD, USA) coupled with a UV-visible detector (MD-1530, Jasco Inc., Easton, MD, USA), using a Phenomenex Rezex ROA-Organic Acid H+ 300 mm × 7.8 mm column (Phenomenex, Torrance, CA, USA). The analysis was performed maintaining the column at 25 °C, using 0.08 M sulphuric acid as the mobile phase at a flux of 0.6 mL min⁻¹, with the running time set at 25 min. Soluble organic acids were identified and quantified by comparison of their retention times with those of standard solutions (Sigma-Aldrich Co. LLC., St. Louis, MO, USA) of known concentration.

Statistical analysis

Data were submitted to analysis of variance as in a complete randomized block design. When analysis of variance showed statistically significant effects of treatments (p ≤ 0.05), means were separated by the Student-Newman-Keuls (SNK) test.
Mineralization assay of AJBP

The loss of mass during AJBP decomposition was most rapid during the first 6 months, when about 12% of the original mass was lost; between October 2009 and April 2010 the mass remained relatively stable and the decrease was negligible (Figure 1). During summer 2010 a small but significant decrease of litter mass was recorded (−3% at the October sampling), while a consistent and significant mass decrease was measured during winter 2010-2011: after October 2010, the mass decreased by 9.3% and at the end of the study (+24 months after placement) was 76% of the initial value (Figure 1). The C and N loss dynamics over the 2 years were similar except in the first month (Figure 1), when the release of N was twice that measured for C (17% and 8% of the original amounts, respectively). Starting from the second sampling (+3 months), the dynamics of C and N release showed a similar trend and the amounts of these nutrients found in the litter gradually decreased until April 2011 (+2 years after bag placement); at the end of the study the C and N still present in the litter were respectively 72% and 64% of the amounts originally present (Figure 1). Regarding N, the loss was statistically significant only at the beginning (first 6 months) and at the end (last 6 months) of the trial (Figure 1). Our findings are partially in agreement with results obtained in a similar study on apple leaves [26]; however, the mass, C and N lost by apple leaves after 2 years were ≥ 50% of the initial values, higher than that observed in the AJBP litter bags of our experiment. Moreover, the N dynamic found in the AJBP litter bags was different from that observed in apple leaf litter, where the N concentration in the litter constantly increased during the first year after deposition and decreased slightly only during the second year [26]. The incorporation of external nitrogen into decomposing litter was observed by other authors and was associated with microbial immobilization of N from external sources
[27][28][29].

Soil parameters

No treatment effects were observed on soil NH4⁺-N (data not shown). Soil NO3⁻-N concentrations showed a similar trend during the three-year study, so that in Table 3 the 3-year averages for each of the four annual sampling dates are reported. In April, before fertilizer application, nitrate-N concentrations were similar (9 to 11 mg NO3⁻-N kg⁻¹ DW) in all treatments (Table 3). The use of mineral N fertilizer significantly increased NO3⁻-N availability in May, July and October compared with the control and AJBP treatments (Table 3). Soil nitrate-N availability was fairly even throughout the year in the absence of fertilizer supply. Nitrate concentrations in spring (April-May) ranged between 9 and 10 mg NO3⁻-N kg⁻¹ DW, while values in summer and autumn were somewhat higher (17-18 mg NO3⁻-N kg⁻¹ DW) (Table 3), due to nutrient release during the decomposition of soil OM. The main parameter for quantifying the risk of nitrate-N leaching is the concentration of soil nitrate-N at the end of the growing season. In our experiment, fertilization with mineral N resulted in high NO3⁻-N levels in the soil (70 mg NO3⁻-N kg⁻¹ DW in the 0-40 cm layer) at the end of the growing season, when the tree is entering the dormancy stage and root uptake is low, so the risk of nitrate leaching is considerable in our conditions [30]; the experimental site, in fact, is located in the Po Valley, a vulnerable zone under the Nitrates Directive. The use of AJBP had no influence on post-harvest soil nitrate-N and hence did not substantially increase the risk of nitrate leaching during winter [4,31]. The lower nitrate-N concentration observed in the AJBP plots compared with the mineral N treatment was a consequence of the high stability in the soil of the by-product (Figure 1) used in this study; moreover, the soil amendment could stimulate the development of soil microbial biomass that absorbed part of the mineral N released by the soil OM mineralization process [4,32].
As a matter of fact, the application of apple juice by-product increased microbial C biomass only in 2010 (Table 4), when the application of AJBP at the highest rate significantly increased microbial C compared to the unfertilized plots (Table 4). In May, microbial C was similar in all treatments (data not shown), but in July and October AJBP application significantly increased microbial C compared to the control and mineral N treatments (Table 4). In 2009 and 2011 the organic fertilization did not modify microbial biomass (data not shown). Our results are partially in agreement with the literature, which reports that the application of compost [4] and other organic materials [10] can increase soil microbial C. In our study, however, the highest microbial biomass was found in summer, in contrast with the results reported by Baldi and co-authors, who found the highest values in spring and autumn [4], because of the different climatic conditions of the two experimental sites.

Soil pH and total N were unaffected by treatments, but after three years soil amended with AJBP showed higher organic matter concentrations than the control (Table 4). Considering the data of OM concentration at the beginning (Table 1) and at the end of the trial, AJBP maintained or even increased soil OM, while OM decreased significantly in the control and mineral N plots (Table 4). Soil humic and fulvic acid (HA+FA, Table 4) concentrations were not affected by AJBP supply, as expected in a short-term experiment, and ranged between 0.46% (control) and 0.52% (AJBP). Our results are in agreement with other reports [4,33] and confirm that the use of organic residues in horticulture is an effective strategy to preserve and eventually ameliorate soil fertility. In this study, soil OM was maintained with relatively small amounts of organic fertilizer (1.6 t ha⁻¹ in 2009 and 2010) and soil NO3⁻-N release was compatible with the N requirements of a pear orchard, which range between 60 and 80 kg N ha⁻¹ year⁻¹
depending on the graft combination [34,35]. The use of AJBP in pear fertilization can also contribute to soil carbon sequestration, in agreement with the literature [4,36,37].

Tree growth, yield and nutritional status

Tree growth and crop yield were not affected by the different fertilization strategies, as previously published [38]. The absence of significant effects of the N rate on tree growth and yield can be explained by considering the tree age and grafting combination. The use of a dwarfing rootstock (i.e. quince MC) and the age of the trees contributed to achieving an optimum balance between tree vigor and production.

Trees fertilized with mineral N showed significantly higher leaf and fruit N concentrations compared with untreated trees, while AJBP treatments showed intermediate values (Tables 5-6). No statistical differences were observed in leaf P, K, Ca and Mg concentrations (Table 5). Regarding fruit mineral composition, trees fertilized with mineral N produced fruits with a significantly higher N concentration (0.21%) than unfertilized trees, while K levels (0.50%) were significantly lower (Table 6). Regardless of the rate, both fruit and leaves of trees treated with AJBP showed a K concentration similar to that of control trees. Leaf and fruit micronutrient concentrations were not affected by the different fertilizer treatments (Tables 5-6). Data of leaf mineral composition were generally in line with the reference values considered optimal for cv Abbé Fétel grown in the same area [39], except for N, which was below the optimum range (2-2.4%) in all treatments, probably due to the grafting combination, which included the very dwarfing quince MC.
Fruit quality

Soluble carbohydrate and organic acid concentrations were not increased by mineral or organic N supply (Table 7). Fructose was the most abundant sugar in the fruit (32% of DW), followed by glucose and sorbitol (15-20% of DW), and sucrose (less than 5%, Table 7). Regarding organic acids, succinic acid was the most representative (2.3% of DW), followed by malic acid (1.7-2% of DW); citric acid was the least representative, with a concentration ten times lower than that of succinic acid (Table 7). The concentrations of fruit soluble sugars and organic acids were in agreement with data reported in the literature [36,40]. The antioxidant activity of fruits, an important aspect of fruit quality given the increasing attention to functional food, was also not modified by treatment in 2008, 2009 and 2010 (Table 8), confirming the low reactivity of fruit crops to soil management in terms of fruit composition and functional activity [36,41].

Conclusions

Our results indicate that the use of by-products from apple juice production in pear fertilization can be an interesting strategy for maintaining or improving soil OM content and soil fertility. In addition, the use of AJBP promoted soil microorganism activity and increased soil N availability for plant uptake during the first months after its addition to the soil, without increasing the risk of nitrate leaching. At the rates supplied in this study, no effects were observed on tree performances and fruit quality.

Pear fertilization can be managed with the use of organic fertilizers made from fruit processing residues, and this study confirms that by-products from the apple juice industry are an effective strategy to preserve soil organic matter and can replace mineral fertilizers as a source of N, without negative effects on tree performances and fruit quality. The recycling of these organic wastes in agriculture can also reduce disposal costs and environmental pollution.

Table 1. Physical and chemical soil properties.
Table 5. Effect of fertilization strategies on leaf nutrient concentration (means of 3 years).
Table 6. Effect of fertilization strategies on fruit nutrient concentration (means of 3 years).
Table 8. Effect of fertilization treatment on fruit antioxidant activity (AA).
Low Scale Inflation at High Energy Colliders and Meson Factories

Inflation occurring at energy densities less than (10¹⁴ GeV)⁴ produces tensor perturbations too small to be measured by cosmological surveys. However, we show that it is possible to probe low scale inflation by measuring the mass of the inflaton at low energy experiments. Detection prospects and cosmological constraints are determined for low scale quartic hilltop models of inflation paired with a curvaton field, which imprints the spectrum of scalar perturbations observed in large scale structure and on the cosmic microwave background. With cosmological constraints applied, low scale quartic inflation at energies GeV-PeV can be mapped to an MeV-TeV mass inflaton resonance, discoverable through a Higgs portal coupling at upcoming collider and meson decay experiments. It is demonstrated that low scale inflatons can have detectably large couplings to Standard Model particles through a Higgs portal, permitting prompt reheating after inflation, without spoiling, through radiative corrections to the inflaton's self-coupling, the necessary flatness of a low scale inflationary potential. A characteristic particle spectrum for a quartic inflaton-curvaton pair is identified: to within an order of magnitude, the mass of the curvaton can be predicted from the mass of the inflaton, and vice-versa. Low scale inflation Higgs portal sensitivity targets are found for experiments like the LHC, SHiP, BEPC, and KEKB.

Introduction

Cosmic inflation describes the initialization of our observable universe with remarkably simple elements [1][2][3][4][5]. A scalar inflaton field rolling down its potential is stalled by Hubble friction, so that a ubiquitous negative pressure drives a rapid e²⁰-e⁶⁰ (20-60 e-fold) increase in the physical distance between spatial points in the universe.
While it rolls down its potential, quantum fluctuations of the inflaton source primordial perturbations, whose amplitude is determined by the energy density during inflation and how fast the inflaton rolls. Inflationary scalar and tensor perturbations give rise to correlated variations in the primordial plasma of our expanding universe, which eventually manifest as large scale inhomogeneities. The amplitudes of the scalar and tensor power spectra of these inhomogeneities are given by the dimensionless quantities A_s and A_t, respectively [6]. Over the preceding decades, measurements of the cosmic microwave background (CMB) and large scale structure have revealed a scalar power spectrum of amplitude A_s* = (2.206 ± 0.076) × 10⁻⁹, with perturbations slightly diminishing over smaller distances, n_s* = 0.968 ± 0.006, where n_s* − 1 ≡ d log A_s*/d log k, Refs. [6,7]. (Quantities with "*" attached are evaluated at an experimentally-determined pivot scale. In this paper the Planck collaboration's pivot scale is used, k* = 0.05 Mpc⁻¹.) On the other hand, the size of primordial tensor perturbations has only been bounded from above, and as this bound tightens, so does the bound on the maximum energy scale at which inflation occurred. This is because the energy scale of slow-roll inflation can be directly inferred from the size of the tensor power spectrum, A_t* ≃ 2V*/3π²M_p⁴, where V* is the energy density during inflation and M_p ≡ (8πG)^(−1/2) is the reduced Planck mass. The energy scale during slow-roll inflation is often expressed as a combination of the scalar and tensor power spectra (r ≡ A_t/A_s),

V*^(1/4) ≃ 1.7 × 10¹⁶ GeV (r*/0.07)^(1/4),    (1)

where this expression has been normalized to the 95% confidence bound on r* reported in [8]. Remarkably, the observation of primordial tensor perturbations could provide some guidance for theories of quantum gravity. A relation known as the Lyth bound indicates that for r* ≳ 10⁻¹, the inflaton traversed a field range greater than M_p [9][10][11][12][13][14].
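As a numerical illustration, the standard slow-roll relation V*^(1/4) = M_p (3π² A_s r / 2)^(1/4) can be evaluated for a few values of r; the sketch below assumes the Planck amplitude A_s* ≈ 2.2 × 10⁻⁹ and the reduced Planck mass M_p = 2.435 × 10¹⁸ GeV:

```python
import math

# Energy scale of slow-roll inflation from the tensor-to-scalar ratio r:
# V*^(1/4) = M_p * (3 pi^2 A_s r / 2)^(1/4)
M_P = 2.435e18   # reduced Planck mass, GeV
A_S = 2.2e-9     # scalar power spectrum amplitude (Planck)

def inflation_scale_GeV(r):
    return M_P * (3 * math.pi**2 * A_S * r / 2) ** 0.25

print(f"{inflation_scale_GeV(0.07):.2e}")   # ~1.7e16 GeV at the r* = 0.07 bound
print(f"{inflation_scale_GeV(1e-3):.2e}")   # ~6e15 GeV, the r* ~ 1e-3 reach quoted below
```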
A super-Planckian inflaton field range (aka large field inflation) indicates that the underlying theory of inflation must, with some symmetry or fixing of parameters, suppress non-renormalizable operators like "φ⁶/Λ²," which otherwise render the theory non-perturbative for all Λ < M_p. On the other hand, models of inflation with a sub-Planckian inflaton field range (small field inflation) have the advantage of being describable with a low energy effective field theory. Another reasonable objection to large field inflation is that many theories predict axions, either as a solution to the strong CP problem [15] or as a facet of extra dimensions [16]. Large field inflation often leads to an overabundance of axion dark matter and observationally-excluded axion isocurvature fluctuations [17][18][19]. Setting aside theoretical considerations, the bound on tensor perturbations given in Eq. (1) has already substantially limited viable models of large field inflation. For example, the simple large field potential, V(φ) = m²φ², already lies well outside the 2σ bound set by BICEP and Keck [8]. However some well-known large field models with non-standard gravitational couplings, most notably Starobinsky and non-minimally coupled Higgs inflation [2,20], could still be found by future astrophysical searches for tensor perturbations. But a future measurement of tensor perturbations is not guaranteed. The only firm constraint on the inflationary energy density V* is that it must exceed the energy density required for big bang nucleosynthesis, V*^(1/4) ≳ 10 MeV [21][22][23]. Thus inflation could have occurred at energies ranging over V*^(1/4) ∼ 0.01 − 10¹⁶ GeV, corresponding to r* ∼ 10⁻⁷⁴ − 10⁻¹. Planned experiments such as PIXIE and LiteBIRD may probe down to r* ∼ 10⁻³, or equivalently V*^(1/4) ∼ 5 × 10¹⁵ GeV [24,25].
But it will be challenging for future cosmological experiments to probe much below this, since the intrinsic B-mode polarisation of the CMB in our universe has size r* ∼ 10⁻⁷, caused by density non-linearities present at recombination, which provide an irreducible background [26][27][28][29][30]. In summary, axion cosmology and an increasingly tight upper bound on the energy scale of inflation point towards low scale inflation. But low scale inflation cannot be confirmed by astrophysical searches for primordial tensor perturbations. Therefore, it is imperative to find non-astrophysical methods to uncover low scale inflation, including terrestrial searches for scalar resonances.

Figure 1 (caption): (1a. and 1b.) While the inflaton field φ slowly rolls to its minimum, the curvaton field is perturbed by de Sitter vacuum fluctuations δσ ∼ H/2π, where H is the Hubble constant during inflation. (2.) The inflaton field rolls to its minimum and decays. (3.) Sometime later, when the curvaton energy density is the predominant energy density in the universe, the curvaton decays. V₀ and V₀σ are the changes in potential energy of the inflaton and curvaton, respectively. In the viable parameter space studied here, V₀σ ≪ V₀.

Finding low scale inflation at low energy experiments

To begin unmasking the realm of low scale inflation, this paper shows that particle colliders and meson factories are already poised to probe inflation when V*^(1/4) ≲ 10¹⁵ GeV. This study will focus on a simple case, where the inflaton's dynamics during inflation are determined by a single polynomial term in the Lagrangian ("single-term-dominated"), and find that small field quartic hilltop inflation arising from a Z₂-symmetric scalar potential can be discovered through a Higgs portal coupling at upcoming experiments like SuperKEKB, SHiP, and the LHC.
Specifically, the following single-term-dominated hilltop potential is considered,

V(φ, σ) = V₀ − (λ_φ/4) φ⁴ + φ⁶/Λ² + V(σ),    (2)

where φ is the inflaton field, V₀ is a constant energy density, and V(σ) is the potential of any other scalar fields, subdominant during inflation, that we address shortly. Hilltop inflation begins with φ having a small field value φ*, then rolling to its minimum at a larger field value, φ_min, thereby diminishing the vacuum energy of the universe, i.e. canceling V₀. For the hilltop potential in Eq. (2), a small, negative quartic self-coupling results in a very flat potential around φ ∼ 0, permitting slow-roll inflation. The φ⁶/Λ² term is a non-renormalizable effective operator, which stabilizes the potential at its minimum, so that V(φ_min) ≃ 0. Broadly speaking, hilltop inflation captures the dynamics of many models in which the inflaton rolls to a large field value [10]. While a small initial field value (φ*) and a small self-coupling (λ_φ ∼ 10⁻¹³) permit slow-roll inflation, making either φ* or λ_φ too small can result in inflaton perturbations that are too large. On the other hand, making φ* or λ_φ too large results in too short an epoch of inflation. These competing considerations, along with methodical computations of the power spectrum, reveal that single-term-dominated small field hilltop potentials cannot both inflate the universe and produce the perturbations observed on the CMB. Therefore, for single-term hilltop inflation, a second "curvaton" field with potential V(σ) can produce the observed CMB perturbations [31][32][33][34]. A curvaton is a second scalar field displaced from the minimum of its potential during inflation, which rolls to its minimum and decays after the inflaton. Perturbations to the curvaton's field value during φ-driven inflation become the predominant primordial perturbations in the universe, so long as the curvaton's energy density is the predominant energy density in the universe when it decays.
One simple possibility explored in this study is that the curvaton has a quartic hilltop potential with the same form as the inflaton, but with a smaller quartic self-coupling. A schematic diagram of quartic hilltop inflation with a quartic hilltop curvaton is given in Figure 1. Some of this study's findings can be summarized:

• For the inflaton potential in Eq. (2), simply mandating ∼ 20−40 efolds of inflation, sufficiently small scalar primordial perturbations (P_ζφ ≲ 2.2×10⁻⁹),¹ and a sub-Planckian cutoff Λ < M_p creates a predictive map between the energy scale of inflation and the mass of the inflaton at its minimum. For example, V*^(1/4) ∼ TeV scale inflation corresponds to an inflaton scalar resonance m_φ ∼ 30 MeV − 1 GeV.

• Low scale inflation can be detectably coupled to Standard Model particles through a Higgs portal operator (λ_φh φ²|Φ|²), without upsetting the flatness of the inflaton's potential. A low scale inflaton's self-couplings must be tiny to provide a potential flat enough for inflation, λ_φ ≲ 10⁻¹³, which means that any Higgs portal coupling must be small, λ_φh ≲ 10⁻⁶, or else it would spoil the inflaton's self-coupling through radiative corrections. However, because the vacuum expectation value (VEV) of a quartic hilltop inflaton at its minimum is 10³ − 10⁹ GeV, and Higgs-inflaton mixing scales with the inflaton VEV, sin θ_φ ∝ λ_φh v_φ, this permits sin θ_φ ∼ 0.1.

• Using a quartic curvaton potential, and requiring that the curvaton generate the observed scalar perturbation spectrum, fixes the curvaton quartic self-coupling to 1.9 × 10⁻¹⁴ ≤ λ_σ ≤ 6.9 × 10⁻¹⁴ (for the 1σ measured values of n_s* and A_s* in [6]). Using this, and with the inflaton mass specified, the lighter curvaton mass can be predicted (and vice-versa). Similarly, the decay width of the inflaton sets an upper bound on the decay width of the curvaton, and the curvaton decay width sets a lower bound on the inflaton decay width.
Therefore, searches for scalars across a range of masses at experiments like the LHC, SHiP, SuperKEKB, BEPC II, and Babar could identify an inflaton-curvaton pair. Note that it has been appreciated in many contexts that small field inflation requires an extremely flat potential, and as a consequence is naively fine-tuned (e.g. Refs. [37][38][39][40][41]). For Eq. (2), this manifests as the requirement that the inflaton's quadratic term is negligible during inflation. This study does not seek amelioration of small field fine-tuning with additional symmetries or a UV theory. However, note that the requirement m²φ*² ≪ V* in small field inflation might be compared to the requirement φ*⁶/Λ² ≪ V* in models of large field inflation, for which φ* ≳ M_p. Altogether, this paper demonstrates that low scale inflation can be probed by low energy experiments. Some prior studies have developed similar links between high scale inflation and low energy experiments, in the context of either the Higgs boson or another scalar non-minimally coupled to gravity [42,43], as well as Ref. [44], determining LHC bounds on supersymmetric low-scale inflation.

The remainder of this paper proceeds as follows. In Section 2 a simplified low scale quartic model of inflation is introduced, and it is shown that once cosmological constraints are applied, there exists a map between the scale of inflation and the inflaton's mass at its minimum. Section 3 further constrains the inflaton potential, such that the cosmological epochs of inflation, reheating, radiation, and matter dominated expansion match observations. (Results in Sections 2-3 apply with or without a curvaton model.) Section 4 studies a quartic curvaton that produces the observed primordial perturbations and identifies viable reheating epochs for a low scale quartic inflaton-curvaton pair, in terms of the average equation of state during reheating (w_re) and the temperature at the end of reheating (T_re).
Section 5 demonstrates how prior sections can be used to determine an inflaton-curvaton particle spectrum. General prospects for finding low scale inflation through a Higgs portal at colliders and meson factories, and in particular signatures of an inflaton-curvaton spectrum, are explored in Section 6. In Section 7, conclusions are presented. Appendix A discusses the fundamentals, feasibility, and fine-tuning of a variety of small field models, especially small field quartic inflation. Appendix B details the Higgs portal parameterization used in this paper.

Low scale quartic hilltop inflation

Inflation occurs when, in some region of spacetime, ä > 0, where a is the scale factor of the universe² and ˙ ≡ d/dt. "Slow-roll" inflation occurs when, uniformly within a Hubble horizon, defined by H ≡ ȧ/a, a scalar field is slowly rolling down its potential V(φ), such that the slow-roll parameters ε and η are each much less than unity, that is

ε ≡ (M_p²/2)(V′/V)² ≪ 1,   |η| ≡ M_p² |V″/V| ≪ 1.

For an introduction to inflation, see e.g. [45,46]. This study considers the small field quartic hilltop potential,

V(φ) = V₀ − (λ_φ/4) φ⁴ + φ⁶/Λ²,    (3)

where the effective operator φ⁶/Λ² in this potential is negligible during inflation, but is responsible for stabilizing the potential at large field values. The enforcement of the requirement that vacuum energy shut off at the minimum of the potential, for an effective operator with a sub-Planckian cutoff Λ < 10¹⁹ GeV, will provide an important constraint on the inflationary parameter space. From the standpoint of effective field theory, a term like φ⁶/Λ² is expected if φ couples to new states with masses ∼ Λ. For the sake of brevity, this study focuses on potentials without φ³ and φ⁵ terms, which are forbidden if the inflaton potential respects a Z₂ symmetry. (Appendix A addresses such models.) Throughout, this paper assumes canonical kinetic terms for all fields. For hilltop potentials like Eq.
(3), inflation begins when, within a Hubble size patch (of radius ∼ 1/H), φ uniformly has a small field value, that is, close to zero. This is a circumstance which might occur subsequent to a phase transition.³ With a uniform field value set, the inflaton slowly rolls down its potential until ε ∼ 1, at which point inflation ends. See Figure 1 for a schematic illustration. As discussed in the introduction, there are simultaneous requirements that the inflaton's potential be flat enough for inflation, but not so flat that it over-produces primordial perturbations. A quantification of these requirements follows in this section. (Appendix A provides further discussion.) Using these quantifications, it can be shown with a numerical survey of polynomial hilltop models that single-term-dominated small field hilltop inflation cannot both produce the observed spectrum of scalar primordial perturbations and enough inflation. Therefore, Section 4 details how a curvaton field, with potential V(σ), would produce the observed perturbations. There is some fine-tuning associated with the small field φ⁴ hilltop inflation potential given in Eq. (3). (Fine-tuning is apparently generic for small field models of inflation, Refs. [38][39][40][41].) In the small field quartic hilltop model, tuning arises because at the outset of inflation, when φ is close to zero, φ's mass term must be small enough not to upset the flatness of the inflationary potential (m²_{φ,*} ≪ λ_φ φ*²).⁴ In the absence of some symmetry that forbids the mass term while allowing the quartic, this implies a tuning dependent on the cutoff of the theory, since the quartic term is expected to generate a mass term at one-loop order. One might suppose that an alternative hilltop inflation model which uses a quadratic term, −m²φ², as the dominant term during inflation, could be constructed to be technically natural.
Appendix A surveys hilltop models and shows that terms higher order in φ, necessary to stabilize such a quadratic potential, re-introduce comparable fine-tuning, assuming an effective field theory with a sub-Planckian cutoff. For the inflaton potential specified in Eq. (3), constraints from cosmology shape the allowed parameter space. Hereafter, the requirements that inflation last for N* ∼ 20 − 50 efolds, that vacuum energy vanishes at the minimum of the potential (V(φ_min) ≈ 0), and that the inflaton not produce perturbations larger than those on the CMB will be used to pinpoint the mass and vacuum expectation value of the inflaton at its minimum, for a given set of V0 and Λ. In the slow-roll limit, for a given V0, Λ, and initial inflaton field value φ*, the number of e-folds generated by the quartic hilltop potential, when φ rolls from φ* to the end of slow-roll inflation at φ_end, is N* ≈ (V0/(2λ_φM_p²))(1/φ*² − 1/φ_end²) ≈ V0/(2λ_φM_p²φ*²) (Eq. (4)), where φ_end ≫ φ*.

[Figure 2: The requirement that the universe inflate by 40 efolds and not produce scalar primordial perturbations larger than those observed on the CMB (P_ζφ ≲ 2.2 × 10⁻⁹) excludes the region shaded red. The inflaton's quartic self coupling (λ_φ) is fixed in terms of Λ and V0 by requiring that the inflaton's potential, Eq. (3), is zero at its minimum, yielding the relation given in Eq. (5). The inflaton's mass (m_φ) and vacuum expectation value (v_φ) at its minimum are indicated by dotted and dotted-dashed lines.]

In order that V(φ) does not contribute to the cosmological constant or create anti-de Sitter collapse after φ rolls to its minimum, it is required of Eq. (3) that V(φ_min) = 0, where φ_min is the value of φ at the minimum of the potential. (Given the vacuum energy observed in our universe, technically this requirement could be relaxed to V(φ_min) ≲ meV⁴, but this would not change the inflaton's couplings enough to alter results.) This fixes λ_φ in terms of V0 and Λ.
Specifically, V(φ_min) = 0 implies λ_φ = (432 V0/Λ⁴)^{1/3} (Eq. (5)). With this expression for λ_φ, the number of efolds can be expressed in terms of V0 and Λ (Eq. (6)). Similarly, using Eqs. (4) and (5), the vacuum expectation value of the inflaton at its minimum, v_φ = √(λ_φ/6) Λ, along with the mass of the inflaton at its minimum, can also be determined as a function of Λ and V0. It should be required that the spectrum of scalar perturbations produced by the inflaton in Eq. (3) not be larger than that observed on the CMB (P_ζφ ≲ 2.2 × 10⁻⁹). As detailed in Section 4, a curvaton is assumed to produce the perturbations observed on the CMB. However, if the inflaton perturbations are too large, these can be transferred via gravitational coupling, increasing the curvaton's perturbations [35,36]. Using slow-roll formulae for scalar primordial perturbations, φ's perturbations should be subdominant (Eq. (9)). In the limiting case of a Planck-scale cutoff Λ = 1.2 × 10¹⁹ GeV, scalar primordial perturbations are small enough to accommodate observation so long as λ_φ ≲ 10⁻¹³ and V0^{1/4} ≲ 10⁹ GeV. This is the maximum energy scale for small field quartic hilltop inflation, given the observed primordial power spectrum, A_s* ≈ 2.2 × 10⁻⁹. To show this, first, φ* is fixed by Eq. (6) and the requirement that inflation last for ∼ 40 efolds. Then φ* is substituted into Eq. (9), and the relation of Eq. (5) is incorporated. Altogether, the requirements of sufficient efolds, a small enough primordial power spectrum, and that the inflaton's potential energy vanish at its minimum imply Eq. (10), which can be re-cast as a bound on Λ and V0 with Eq. (5), Eq. (11). In Figure 2, parameter space is shown in terms of V0 and Λ, consistent with ∼ 40 e-folds of inflation and sufficiently small inflaton perturbations, P_ζφ ≲ 2.2 × 10⁻⁹. This plot additionally demonstrates that, assuming the minimal Z₂ symmetric hilltop potential of Eq. (3), the mass of the inflaton at its minimum predicts the scale of inflation to within an order of magnitude.
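The map just described fits in a few lines. The following sketch is illustrative (not the paper's code): the reduced Planck mass convention M_p ≈ 2.4 × 10¹⁸ GeV and the normalization V(φ) = V0 − (λ_φ/4)φ⁴ + φ⁶/Λ² are assumptions, chosen so that V(φ_min) = 0 reproduces the quoted relations λ_φ = (432V0/Λ⁴)^{1/3} and v_φ = √(λ_φ/6)Λ.

```python
import numpy as np

MP = 2.4e18  # reduced Planck mass in GeV (assumed convention)

def hilltop_parameters(V0, Lam):
    """Map (V0, Lam) -> (lam, v_phi, m_phi), fixed by V(phi_min) = 0."""
    lam = (432.0 * V0 / Lam**4) ** (1.0 / 3.0)  # quartic self-coupling, Eq. (5)
    v = np.sqrt(lam / 6.0) * Lam                # vacuum expectation value
    m = lam * Lam / np.sqrt(3.0)                # mass at the minimum, m^2 = V''(v)
    return lam, v, m

def V(phi, V0, Lam, lam):
    """Quartic hilltop potential stabilized by the phi^6/Lam^2 operator."""
    return V0 - lam * phi**4 / 4.0 + phi**6 / Lam**2

def efolds(phi_star, phi_end, V0, Lam, lam, n=200_000):
    """Slow-roll efold count N = (1/Mp^2) * int_{phi_*}^{phi_end} V/|V'| dphi."""
    phi = np.geomspace(phi_star, phi_end, n)  # log spacing resolves small phi
    dV = -lam * phi**3 + 6.0 * phi**5 / Lam**2
    f = V(phi, V0, Lam, lam) / (MP**2 * np.abs(dV))
    return float(np.sum(0.5 * (f[1:] + f[:-1]) * np.diff(phi)))  # trapezoid rule
```

At fixed Λ the first function inverts to V0 = 3^{3/2} m_φ³ Λ/432, which is the sense in which a measured inflaton mass pins down the scale of inflation, up to the spread of allowed cutoffs.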
For example, a GeV mass inflaton implies V0^{1/4} ∼ 0.3 − 10 TeV. This raises the possibility of inferring the scale of low energy inflation by measuring the mass of the inflaton at a low energy experiment, as detailed hereafter. Note that so far, no assumptions about the curvaton sector have been made, and so the preceding relationship between the sub-Planckian effective operators stabilizing an inflaton, and its mass and vacuum expectation value at its minimum, could be applied to any hilltop inflaton with a weak enough self-coupling to generate enough efolds of inflation, but not so weak as to over-produce primordial perturbations.

Cosmological consistency and low scale quartic hilltop inflation

This section shows that requiring the shrinkage of the comoving horizon during inflation to match its subsequent expansion during reheating, radiation-dominated, and matter-dominated expansion (e.g. [47][48][49]) provides another constraint on plausible combinations of V0, N*, and Λ. The relevant formalism is derived and extended to accommodate a curvaton, then applied to low scale quartic hilltop inflation. The key point is that after restricting the equation of state and temperature of reheating to plausible values (w_re ∼ 0 to 1/3 and T_re ∼ 4.7 MeV to V0^{1/4}, respectively), inflation has both a minimum and maximum corresponding duration. These considerations restrict the number of efolds of inflation (N*) to a narrow window of possible values, for a given set of (V0, Λ). One use of this narrowed range of plausible efold values is to help tighten maps between inflaton and curvaton parameters in Sections 5 and 6.

Cosmological consistency for low scale inflaton-curvaton models

One advantage of inflationary cosmology is that it explains the uniformity of the observable universe: during an epoch of inflation, the comoving horizon (≡ (aH)⁻¹) of the universe shrinks.
As a result, an observer sees a smaller causally-connected volume in the future, in contrast to an observer watching a universe dominated by matter or radiation, which grows to a larger causally-connected volume in the future. It is well-established that our universe underwent a period of radiation and matter dominated expansion, implying that the most distant regions presently observed were once far outside of causal contact. Inflation serves to drive pieces of the present causally-connected universe out of causal contact, before radiation and matter dominated expansion, thereby allowing for a present-day homogeneous universe. This also means a consistent inflationary cosmology requires that the shrinkage of the comoving horizon during inflation is equal to the growth of the comoving horizon after inflation. The amount the comoving horizon grows after inflation will depend upon the equation of state of the expanding universe, though eventually the universe must become radiation dominated to accommodate big bang nucleosynthesis, after which the growth of the comoving horizon can be determined from observation. Bounds on inflaton models from a consistent cosmology have been explored in [47][48][49][50][51]. See Figure 1 of Ref. [47] for an illustrative schematic.

[Figure 3: Constraints are given for inflationary energy density (V0) and initial reheating temperature (T_re) for a cutoff Λ = 10¹⁹ GeV, assuming quartic hilltop inflation, defined in Eq. (3). The solid pink, dashed green, dotted blue, and dotted-dashed black lines indicate a post-inflation average equation of state of w_re = 1, 1/3, 0, −1/3, respectively, as described in the text. As in Figure 2, the region shaded red is excluded because the inflaton produces primordial perturbations that are too large. The bottom horizontal line marks a BBN reheat temperature; space below this line is excluded. The upper horizontal marks the approximate electroweak symmetry breaking temperature (100 GeV). As N* decreases in each panel, so does the maximum allowed reheating temperature and V0^{1/4} values, contained within a wedge of sensible equation of state values, w_re = [0, 1/3]. Note that this leads to substantially different y-axis (T_re) and x-axis (V0^{1/4}) ranges, as N* is varied.]

To bound inflation using a consistent cosmology, we begin by considering modes relevant to observations of the CMB. During inflation, when the comoving horizon (1/aH) shrinks to a size smaller than 1/k, modes of size k depart the comoving horizon. The mode corresponding to the CMB pivot scale has already been defined as k*, and the Planck collaboration uses k* = 0.05 Mpc⁻¹ in their analyses. Therefore, with a pivot scale of k*, Planck's measurements of n_s* and A_s* are determined by inflationary dynamics occurring when the comoving scale was of size ∼ 1/k*, in other words k* = a*H*. Using the relation k* = a*H*, multiplied by the present day comoving scale, one obtains Eq. (12), where each a is the physical scale of the universe at the transition between cosmological epochs. a_end is the size of the universe when inflation ends (ε ≥ 1). In a number of studies, namely Refs. [47,48,51], a_re was defined as the scale of the universe after the inflaton has finished decaying, also called the end of reheating. However, our model utilizes a curvaton field, which will decay into radiation sometime after the inflaton decays. In this case, the epoch of reheating lasts until the curvaton decays, and so we define a_re as the physical scale of the universe after the curvaton has finished decaying, at which time the universe begins radiation-dominated expansion. As the universe cools, matter and radiation will come to equally populate the energy of the universe when the physical scale is of size a_eq. Following standard conventions, a_0 denotes the present-day scale of the universe.
Equation (12) can be rewritten using the identity in Eq. (13), where e^{N_re} ≡ a_re/a_end is the number of efolds between the end of inflation and when the curvaton finishes decaying, and e^{N_RD} ≡ a_eq/a_re is the number of efolds between the time of curvaton decay and matter-radiation equality. Note that in the preceding, the equation of state of the universe w_re has not been specified for the period when a_end grows to size a_re. The equation of state during that era of expansion will depend on the decay rate and energy density of both the inflaton and curvaton. More precisely, in a straightforward curvaton cosmology, the inflaton decays more rapidly than the curvaton after inflation, so that while inflaton-sourced radiation energy density dilutes like a⁻⁴ as the universe expands, the un-decayed curvaton field behaves more nearly like matter, w ∼ 0, so that its energy density dilutes like ∼ a⁻³. Then as the universe expands and a increases, the curvaton's energy density grows to exceed the inflaton's energy density. Sometime after it comes to dominate the energy density of the universe, the curvaton decays. This process results in a universe with primordial perturbations that depend (almost) solely on the curvaton's field perturbations during inflation [31]. Hereafter we refer to this entire epoch (a_end → a_re) as the era of reheating. As we will see, Eq. (13) can be used to relate the parameters of inflation to those of reheating. First, the Friedmann equations can be combined to yield an expression relating the initial and final energy densities of an expanding, isotropic universe, ρ_re = ρ_end e^{−3N_re(1+w_re)}. For the case we are interested in, here ρ_re is the total energy density at the end of reheating, ρ_end is the total energy density at the end of inflation, and w_re is the average equation of state during reheating, during which time the energy density of the universe flips from being inflaton-dominated to curvaton-dominated.
As already mentioned, to avoid largely excluded isocurvature perturbations, the energy density at the end of reheating, ρ_re, must be predominantly energy density sourced by the decayed curvaton. Next we re-express the energy density at the end of reheating as a temperature, using the number of relativistic degrees of freedom, ρ_re ∼ (π²/30) g_re T_re⁴, see e.g. [52]. Assuming conservation of entropy after reheating and using the fact that the relativistic species of the present-day universe are photons and neutrinos, one arrives at Eq. (14), where T_0 ≈ 2.725 K and T_ν0 = (4/11)^{1/3} T_0. Putting Eqs. (13)-(14) together, we find that so long as w_re ≠ 1/3,⁵ the amount the comoving horizon grows (in efolds) and the temperature at which the universe reheats are given by Eqs. (16) and (17). In the preceding expressions, the substitution ρ_end = V_end ≈ V0 can be made, because the inflaton's energy density will be the predominant energy density in the universe at the end of inflation. In computations that follow, H* is similarly determined by the inflaton's energy density during inflation, i.e. 3H*² ≈ V0/M_p², and the number of relativistic degrees of freedom in the Standard Model is taken to be g_re ∼ 100. Computations use the Planck defined pivot scale and the standard normalization for a_0.

Results for small field quartic hilltop inflation

In Figures 3-5, we plot the implied final reheating temperature T_re for indicated values of N*, by substituting Eq. (16) into Eq. (17). Parameter space lying between the solid red and dotted-dashed black lines, where these correspond to an average equation of state between −1/3 and 1, is technically permitted, but more realistically one should only consider parameter space lying between the dotted blue and dashed green lines, which correspond to a matter- or radiation-like equation of state during reheating. The bottom horizontal line marks the temperature at big bang nucleosynthesis, T_BBN ≥ 4.7 MeV; any realistic cosmology must reheat at a higher temperature [21].
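The dilution relation ρ_re = ρ_end e^{−3N_re(1+w_re)} and the conversion ρ_re ∼ (π²/30) g_re T_re⁴ combine into a one-line estimate of the reheating temperature. A hedged sketch (not the paper's code), taking ρ_end ≈ V0 and g_re ∼ 100 as stated in the text:

```python
import numpy as np

def reheat_temperature(V0, N_re, w_re, g_re=100.0):
    """Temperature (GeV) at the end of reheating, given the energy density at
    the end of inflation V0 (GeV^4), the efolds of reheating N_re, and the
    average equation of state w_re during reheating."""
    rho_re = V0 * np.exp(-3.0 * N_re * (1.0 + w_re))   # dilution of total energy
    return (30.0 * rho_re / (np.pi**2 * g_re)) ** 0.25  # radiation temperature
```

For instance, a softer equation of state (smaller w_re) or a shorter reheating epoch leaves a hotter universe at the onset of radiation domination, which is the trend visible in Figures 3-5.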
The upper horizontal line marks an estimate for the temperature of electroweak symmetry breaking, ∼ 100 GeV. A cosmology which assumes electroweak baryogenesis would need to occupy parameter space above this line. The red shaded regions are excluded by requiring that inflaton perturbations not be too large, as discussed around Eq. (11); the same bound is indicated with red shading in Figure 2. Altogether, Figures 3-5 indicate a number of constraints on viable quartic hilltop parameter space. For Λ = 10¹⁹, 10¹⁵, 10¹² GeV, no consistent cosmology can be constructed when N* > 40, 31, 25 efolds, respectively. More generally, as the cutoff is lowered from ∼ 10¹⁹ GeV to 10¹² GeV, the number of efolds consistent with a given inflationary energy density V0 also shrinks; this can also be seen directly from Eq. (4). Further inspecting Λ = 10¹⁵ GeV parameter space, we find that requiring a reheat temperature above the scale of electroweak symmetry breaking restricts the number of efolds to N* = 24 − 31. (Footnote 5: in a straightforward curvaton cosmology, w_re < 1/3 during reheating, so that the curvaton's energy density grows to exceed the inflaton's energy density.)

Quartic hilltop curvaton perturbations and cosmology

Small field quartic hilltop inflation would not generate the observed spectrum of primordial perturbations observed in our universe (see Section 2 and Appendix A). Therefore, a low scale quartic hilltop inflaton requires an additional curvaton field (σ) to produce the observed spectrum of perturbations. In a curvaton cosmology, the inflaton and its decay products dominate the universe's energy density after inflation, but eventually, the curvaton's energy density grows to exceed the inflaton's energy density in the expanding universe. At this time, the curvaton decays, and the perturbations of the curvaton field become the predominant primordial perturbations observed in the universe.
In the case of high scale inflation, a simple curvaton potential like V(σ) = m²σ² can be employed, but such curvaton potentials cannot produce the observed primordial perturbations in the case of low scale inflation. The hilltop curvaton is arguably the simplest practicable curvaton for low scale inflation [53], and so a quartic hilltop curvaton is employed here. In the remainder of this section, perturbations from a quartic hilltop curvaton are detailed, along with the application of a consistent history for low scale curvaton cosmology. Once all cosmological constraints are applied, a limited range of reheating efolds (N_re), equations of state (w_re), and inflationary energy densities (V0) are permitted for a given cutoff (Λ); this relationship is surveyed in Figure 8. We begin with a curvaton potential that is identical in form to the quartic hilltop inflaton potential, V(σ) = V_0σ − (λ_σ/4)σ⁴ + σ⁶/Λ² (Eq. (18)), where for simplicity we assume the cutoff for the curvaton effective operator (Λ) is the same as that of the inflaton. Also for the sake of simplicity, we assume that φ and σ only couple substantially through gravity. As will be shown in this section, requiring the curvaton produce the observed spectrum of scalar perturbations, i.e. n_s* ∼ 0.97 and A_s* ∼ 2.2 × 10⁻⁹, will be enough to uniquely determine λ_σ and the curvaton's initial field value, σ*. As for the inflaton, V_0σ is determined by the curvaton field's self-couplings, and equivalently its field value at its minimum, σ_min = √(λ_σ/6) Λ, such that V(σ_min) = 0. As illustrated in Figure 1, this section will show that in viable curvaton parameter space, the curvaton will cancel a much smaller portion of vacuum energy as it rolls to its minimum, V_0σ ≡ λ_σ³Λ⁴/432 ≪ V0. Hence to good approximation one is justified in neglecting the curvaton's contribution to vacuum energy during inflation. Hereafter we provide a self-contained derivation of the quartic hilltop curvaton's perturbation spectrum.
In the standard curvaton scenario [31,34], at the onset of inflation, the curvaton is fixed to some field value σ* such that it is slowly rolling, |V_σσ| ≪ H², and so the equation of motion for the curvaton perturbations is δσ̈_k + 3Hδσ̇_k + (k²/a²)δσ_k = 0, which in turn implies that for modes which have exited the comoving horizon (i.e. in the limit k ≪ aH [46]), ⟨δσ²⟩ = H²/(2k³). We calculate the curvaton power spectrum using ζ = −H δρ/ρ̇, where ζ parameterizes the scalar perturbations of a scalar field in de Sitter space, and ρ is the energy density of said field. One can define a separate ζ_i for each scalar field i present during inflation. Each ζ_i will be separately conserved outside the horizon, provided that the fields only interact gravitationally with each other and have canonical kinetic terms. A violation of either of these conditions would result in time evolution of ζ_i on superhorizon scales [54]. Note that the conservation of each scalar field's perturbations can be applied to other multifield inflationary scenarios. The main difference in the case of a "curvaton" field is that the curvaton is not determining how quickly inflation is ending, which alters the spectrum of perturbations it induces on the CMB (relative to an "inflating" field). With these provisos, ζ_σ = −H δρ_σ/ρ̇_σ. To unpack this expression, we first expand the curvaton potential using σ = σ_0 + δσ, where, as with the inflaton, the inflationary field values of σ are small enough that the σ⁶/Λ² term can be dropped, so that δρ_σ = −λ_σ σ_0³ δσ. With δρ_σ specified, we can calculate the curvaton's power spectrum. Inserting ζ_σ = −H δρ_σ/ρ̇_σ, ⟨δσ²⟩ = H²/(2k³), and δρ_σ = −λ_σ σ³ δσ into this expression yields Eq. (22). The change in time of curvaton energy density, ρ̇_σ, can be calculated using the slow-roll formula ρ̇_σ = σ̇ V_σ = −V_σ²/(3H). Inserting this into Eq. (22) leads to Eq. (23). Surveys of the CMB completed by the WMAP and Planck satellites require that P_ζσ ≈ 2.2 × 10⁻⁹.
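Collecting the slow-roll ingredients above gives a compact form for the curvaton power spectrum. This is a reconstruction of the chain leading to the paper's Eqs. (22)-(23), so order-one factors should be checked against the original:

```latex
\dot\rho_\sigma = \dot\sigma\,V_\sigma = -\frac{V_\sigma^2}{3H}
              = -\frac{\lambda_\sigma^2\sigma^6}{3H},
\qquad
\zeta_\sigma = -H\,\frac{\delta\rho_\sigma}{\dot\rho_\sigma}
             = -\frac{3H^2}{\lambda_\sigma\sigma^3}\,\delta\sigma,
\qquad
P_{\zeta_\sigma} = \frac{k^3}{2\pi^2}\,|\zeta_\sigma|^2
  = \frac{9\,H_*^6}{4\pi^2\,\lambda_\sigma^2\,\sigma_*^6},
```

where the last equality uses the superhorizon fluctuation ⟨|δσ_k|²⟩ = H²/(2k³) quoted in the text. Note the spectrum falls steeply with σ*, which is why matching A_s* fixes σ* once λ_σ is known.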
These experiments have also measured the scale dependence of primordial scalar perturbations, defined here as n_s − 1 = d log P_ζ / d log k. Writing n_s as a derivative of the power spectrum with respect to time, again using the relation that a comoving momentum k will exit the horizon when k = a(t)H(t), using Eq. (23), the definition ε = −Ḣ/H², and the slow-roll equation 3Hσ̇ = −V_σ, yields Eq. (24). These expressions for the power spectrum and spectral index constrain the curvaton's quartic self-coupling λ_σ and initial field value σ*, in terms of n_s*, A_s*, and ε (Eqs. (25)-(26)). To good approximation, especially in the case of small field inflation, ε ∼ 0. As a result, we can express λ_σ as a function of only A_s* and n_s*; inserting the Planck collaboration's 1σ preferred values for A_s* and n_s* fixes λ_σ (Eq. (27)). Note that this prediction for λ_σ is independent of Λ, N*, and the inflaton's potential.

[Figure 6: The initial curvaton field value σ* consistent with Planck observations of n_s* and A_s*, for the quartic hilltop curvaton potential, Eq. (18). As explained in the text, the required value of σ* is independent of Λ and N*. Allowing variations corresponding to Planck's 1σ bounds on n_s* and A_s* has little effect in this plot, generating a thickness less than that of the blue line. The orange line is a lower bound for σ*, such that σ is slowly rolling during inflation rather than dominated by quantum fluctuations; the quartic curvaton satisfies this requirement.]

Turning to Eq. (25), and inserting the preceding relation, one finds that σ* depends only on V0^{1/4} and is independent of N* or Λ. Using this, in Figure 6 σ* is plotted as a function of V0^{1/4} with a blue line. Note that setting the thickness of the blue line to coincide with Planck's 1σ constraints on n_s and A_s would generate a line too thin to be seen, so we plot this line with a machine thickness.
Figure 6 also shows an orange line, which is the minimum σ* such that σ will be slowly rolling during inflation, rather than in a regime dominated by quantum fluctuations. The blue line, corresponding to σ*, is always above the orange line, and so the curvaton will be slowly rolling in parameter space matching primordial perturbations observed on the CMB. This demonstrates that it was appropriate to use the slow-roll approximation in our treatment of the curvaton. To plot the orange line, we have used the curvaton's equation of motion, along with the standard requirement that the field distance the curvaton rolls in one Hubble time (Δσ* ≈ ∂_σV/(3H*²)) is larger than its quantum fluctuations in de Sitter space (δσ ∼ H*/(2π)). Next, to validate our use of a perturbative expansion in computing curvaton primordial perturbations, we must ensure the initial field value σ* is greater than de Sitter-induced variations to the curvaton's field value (δσ) during inflation. In other words, the fluctuations of the curvaton during inflation should be small compared to the initial field value of the curvaton, δσ ≪ σ*. So long as this is satisfied, the perturbative formulae used to calculate curvaton primordial perturbations will be valid. σ* was calculated above so that it would produce the observed A_s and n_s, which implies σ*/H* = 6.408 × 10⁵. Thus, for the curvaton under consideration, the perturbative regime holds. Some comments are in order about non-Gaussian perturbations from the hilltop curvaton studied here. The non-Gaussian perturbations produced by a hilltop curvaton are characteristically small enough to lie within Planck's 1σ bound on the lowest-order non-Gaussian parameter, f_NL = 2.7 ± 5.4 [55], but exact conclusions depend upon the curvaton's evolution after inflation, prior to its decay [56,57].
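The two consistency checks in this passage (classical drift exceeding de Sitter fluctuations, and δσ ≪ σ*) can be expressed as simple inequalities. A sketch with illustrative inputs; the particular numbers in the usage below are not fitted to the paper's parameter space, apart from the quoted σ*/H* ratio:

```python
import numpy as np

def classically_rolls(lam_sigma, sigma_star, H_star):
    """Compare the drift per Hubble time, |V_sigma|/(3 H*^2), against the
    de Sitter fluctuation H*/(2 pi). The sigma^6/Lam^2 piece of V_sigma is
    dropped, as in the small-field limit used in the text."""
    drift = lam_sigma * sigma_star**3 / (3.0 * H_star**2)
    return drift > H_star / (2.0 * np.pi)

def perturbative(sigma_star, H_star):
    """delta sigma ~ H*/(2 pi) must be small compared to sigma_*."""
    return H_star / (2.0 * np.pi) < sigma_star
```

With σ*/H* ∼ 6.4 × 10⁵, as quoted above, both conditions are comfortably satisfied for couplings of the size discussed in Section 2.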
It would be interesting to further analyze the non-Gaussian curvaton signatures produced by various reheating scenarios, and relate them to findings in this study, for the quartic hilltop curvaton and other low scale models. This would allow for verification of a putative inflaton-curvaton pair found at a collider, with a future measurement of non-Gaussianity, assuming a vigorous 21-cm cosmological program that permits the detection of such small non-Gaussianities [58][59][60]. So far, the self-coupling (λ_σ) and initial field value (σ*) have been determined, for a curvaton with a quartic hilltop potential, to reproduce the primordial perturbations observed on the CMB. There are some additional stipulations on curvaton parameter space. Section 4.1 addresses the requirement that the curvaton not produce a substantial second wave of inflation. Section 4.2 discusses the curvaton energy density, which must become the predominant energy density of the universe sometime after the inflaton decays. This places a bound on the duration and average equation of state during reheating.

Limiting curvaton-induced inflation

One constraint on curvaton parameter space arises from the assumption that the curvaton does not generate a second period of inflation after the end of φ-driven inflation. At the end of inflation, by construction the inflaton dominates the energy density of the universe. Then in a curvaton cosmology, the inflaton decays into lighter fields more rapidly than the curvaton. After the inflaton has decayed, the curvaton energy density V(σ_end) will at first remain nearly constant, as the curvaton is slowly rolling down its potential during radiation-dominated expansion. More precisely, it can be verified using the slow-roll equations that the requisite flatness of the quartic curvaton potential used in this study results in the curvaton slowly rolling at least until the energy density of the universe dilutes to ∼ V_0σ.
Once the energy density of the universe dilutes to ∼ V_0σ, the curvaton begins oscillating in its potential. However, if at this time the curvaton is still slowly rolling, a short period of curvaton-driven inflation can occur. To rule out a substantial period of curvaton-driven inflation, it is sufficient to estimate the amount of curvaton-driven inflation that would result if the curvaton's energy density became predominant immediately at the end of φ-driven inflation. The energy density in the curvaton field at the end of inflation is ∼ V_0σ = λ_σ³Λ⁴/432. Here, V_0σ has been calculated by using the quartic curvaton potential given above. In other words, we wish to check whether the subdominant portion of the total vacuum energy, V_0σ, will be a substantial source of inflation as σ rolls to its minimum. This will depend on how slowly σ rolls to its minimum. First, it is necessary to calculate the field value of the curvaton at the end of inflation, σ_end. We employ the slow roll formula to find how far the curvaton rolls during inflation, 3Hσ̇ ≈ −∂_σV, which integrates to 1/σ_end² = 1/σ*² − 2λ_σΔt/(3H*). This field value can be re-written in terms of N* using N = ∫H dt, i.e. ΔN ≈ HΔt, giving 1/σ_end² = 1/σ*² − 2λ_σN*/(3H*²). The curvaton potential matches that of the inflaton, so the same formula, Eq. (6), gives the number of efolds for curvaton-driven inflation. Requiring that curvaton-driven inflation last less than one efold, and using Planck's central values for n_s and A_s, yields the bound of Eq. (34), which is plotted in Figure 9. While it might be possible to consider curvaton inflation that lasts for up to ∼ 15 efolds, before the "curvaton" would be an inflaton producing (disallowed) perturbations on CMB scales, the cosmological consistency conditions in Section 3, which are accurate to within about an efold, would have to be recalculated for each point in this parameter space.
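Since the curvaton potential has the same form as the inflaton's, the small-field efold estimate behind Eq. (6) carries over directly. A hedged sketch (the closed form below assumes σ_end sits well below the potential minimum, so the σ⁶/Λ² term and the variation of V are negligible, as in the small-field limit used throughout):

```python
MP = 2.4e18  # reduced Planck mass in GeV (assumed convention)

def curvaton_efolds(V0_sigma, lam_sigma, sigma_end):
    """Small-field slow-roll estimate of curvaton-driven inflation,
    N_sigma ~ V0_sigma / (2 lam_sigma Mp^2 sigma_end^2), the analogue of the
    inflaton efold formula applied to the curvaton potential."""
    return V0_sigma / (2.0 * lam_sigma * MP**2 * sigma_end**2)
```

Demanding curvaton_efolds(...) < 1 is the "less than one efold" criterion of Eq. (34): a larger σ_end (the curvaton having rolled further during φ-driven inflation) suppresses any second inflationary episode quadratically.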
This would greatly complicate the treatment of curvaton and inflaton parameter space presented hereafter, without qualitatively changing results, because the bound of Eq. (34) would change by less than a factor of two.

Curvaton during reheating

For a curvaton cosmology, the period of reheating lasts until the curvaton decays. At the time of curvaton decay, the curvaton's energy density must have grown substantially larger than that of the inflaton (more precisely, the inflaton's decay products), so that isocurvature perturbations are minimized, in accord with Planck's 2015 bound on isocurvature modes [6]. Planck's bound on the leftover inflaton energy density is β_iso ≤ 0.0013 (using data from TT, TE, EE + lowP), which is equivalent to requiring that the curvaton comprise 99.1% of the total energy density of the universe before the end of reheating, ρ_σ/ρ_tot|_re ≥ 0.991, where ρ_tot ≡ ρ_σ + ρ_φ, and here |_re indicates the end of reheating. Of course, each of ρ_φ, ρ_σ indicates the summed energy density of φ, σ, and their respective decay products. Assuming the inflaton φ decays promptly after inflation into radiation, the energy density of φ in the universe at the end of reheating will be ρ_φ|_re = ρ_φ|_ei e^{−4N_re}, where |_ei indicates the end of inflation, and N_re is the number of efolds during reheating, as in Section 3. The total energy density on the other hand is given by ρ_tot|_re = ρ_tot|_ei e^{−3N_re(1+w_re)}, where again w_re is defined as the average equation of state during reheating. The total energy density at the end of inflation is approximately the inflaton's energy density. Thus, combining Eqs. (35)-(37) sets a requirement on the number of efolds during reheating, N_re ≥ ln(0.009)/(3w_re − 1), where the numerator is simply the natural logarithm of the residual inflaton fraction allowed by Eq. (35). Figure 7 plots this lower bound on N_re as a function of w_re. The closer w_re is to a radiation-like equation of state (w_re ∼ 1/3), the longer reheating must last so that the curvaton grows to dominate the universe's energy density. Next we constrain w_re in terms of Λ and V0.
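Both lower bounds on N_re described in this section fit in a few lines. A sketch (not the paper's code): the residual fraction 0.009 follows from the 99.1% curvaton-domination requirement, and the second function anticipates the w_σ ≈ 0 combination of Eqs. (38) and (41):

```python
import numpy as np

def min_Nre_isocurvature(w_re, f_res=0.009):
    """Eq. (38): smallest N_re with rho_phi/rho_tot <= f_res at the end of
    reheating, from rho_phi ~ e^{-4 N} against rho_tot ~ e^{-3 N (1 + w_re)}."""
    if w_re >= 1.0 / 3.0:
        return np.inf  # inflaton radiation never becomes subdominant
    return np.log(f_res) / (3.0 * w_re - 1.0)

def min_Nre_matterlike(V0, V0_sigma, f_res=0.009):
    """w_sigma ~ 0 case: the ratio rho_sigma/rho_phi = (V0_sigma/V0) e^{N}
    must reach (1 - f_res)/f_res before the curvaton decays."""
    return np.log((1.0 - f_res) / f_res * V0 / V0_sigma)
```

As w_re → 1/3 the first bound diverges, reproducing the behavior described around Figure 7: a radiation-like reheating epoch never lets the curvaton overtake the inflaton's decay products.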
Note that since w_re is the average equation of state during reheating, it is given by an integral of the instantaneous equation of state over the efolds of reheating, weighted by the energy densities of φ and σ (Eqs. (39)-(40)).

[Figure 7: The plot shows the minimum number of efolds during reheating, N_re, as a function of w_re, the average equation of state during reheating, using Planck's constraint on isocurvature perturbations [6], β_iso ≤ 0.0013. The region below the curve is excluded. A stiffer equation of state during reheating (w_re > 0) implies that the period of reheating must last longer, because it will take the curvaton energy density longer to become the predominant energy density in the universe.]

If we now assume that w_σ ≈ 0, in other words that the curvaton behaves like matter during reheating, then w_σ ρ_σ ≈ 0. By assumption, we know that ρ_φ,ei ≈ V0. The energy density of the curvaton after inflation is ρ_σ,ei ≈ V_0σ = λ_σ³Λ⁴/432, where the final equality follows from the form of the curvaton potential, explained around Eq. (18). Substituting these into Eq. (40) and integrating yields Eq. (41). Combining equations (38) and (41) yields the left side bound on parameter space shown in Figure 8. Note that as w_re → 1/3 the constraint becomes stronger. As in Figure 7, this is expected, since if w_re = 1/3, then the equation of state of the curvaton would also be w_σ = 1/3, and the curvaton energy density would not grow larger than the inflaton's energy density.

Counting efolds for a quartic inflaton-curvaton pair

The constraints given by Eqs. (11) and (34), along with a line demarcating where the height of the inflaton potential exceeds the curvaton potential, V0 = 10V_0σ, are displayed in Figure 9. In this plot, the upper and lower bounds on the energy density during inflation (V0) given in Eqs. (11) and (34) have been sharpened by using the cosmological consistency equations, (16) and (17).

[Figure 9 caption (fragment): … Bottom: the curvaton field must not produce more than an efold of inflation, Eq. (34).]
Figure 9 (caption, continued): Left side: reheating must last long enough that the curvaton comes to dominate (constitute ≥ 99.1% of) the energy density of the universe, Eqs. (38) and (41). Note that the "left side" bound assumes the curvaton equation of state after inflation is immediately matter-like, i.e. w_σ,ei ≈ 0; relaxing this assumption would allow for a broader range of parameters. The viable parameter space is shown for cutoffs Λ = 10^19 and 10^15 GeV, as indicated.

The line demarcating V_0 = 10 V_0σ demonstrates that the inflaton energy density exceeds the curvaton energy density in all un-excluded parameter space. Therefore it is correct to assume that V_0 ≫ V_0σ after inflation. This will be important for relating curvaton and inflaton decay widths in Section 5. Precisely, for fixed Λ, test values of N_* can be specified, and a maximum and minimum N_* value can be converged upon by requiring a solution to Eqs. (16) and (17) for a reasonable equation of state during reheating, w_re ∈ [0, 1/3]. These minimum and maximum N_* values can be used in Eqs. (11) and (34). To illustrate the iterative procedure for computing N_*,max and N_*,min, note that in Figure 3, where the cutoff Λ = 10^19 GeV has been specified, the maximum number of efolds allowed by both the inflaton perturbation bound, Eq. (11), and the requirement w_re ∈ [0, 1/3], is N_*,max ≈ 41. By the same reasoning, for Λ = 10^19 GeV, the bound on curvaton-driven inflation, Eq. (34), implies a minimum number of efolds, N_*,min ≈ 32. Put another way, visual inspection of Eqs. (11) and (34) reveals that they have a weak dependence on N_* over the overall allowed range N_* ≈ 20-40. For a given value of Λ or V_0, one can iteratively specify test N_* values and use Eqs.
(16) and (17) to converge on the allowed range. Both the maximum and minimum number of efolds increase with V_0, because a higher energy density at the end of inflation implies a lengthier expansion of the comoving horizon during reheating and the radiation-dominated epoch. In Figure 9, these expressions for the minimum and maximum number of efolds have been used to produce more precise bounds on V_0 as a function of Λ, specifically the bound on the inflaton over-producing perturbations, Eq. (11), and the bound on the curvaton producing a second epoch of inflation, Eq. (34).

Predicting the curvaton from the inflaton and vice-versa

This section shows how the results of Sections 2-4 can be used to predict the mass of the inflaton from the mass of the curvaton, and vice-versa, to within about an order of magnitude. It will also be demonstrated that the inflaton decay width sets an upper bound on the decay width of the curvaton, and the curvaton's decay width sets a lower bound on that of the inflaton. In Section 6, these relations will be used to relate low scale inflaton-curvaton pairs, coupled to the Standard Model through a Higgs portal. In Section 4, the average equation of state (w_re) and number of efolds (N_re) during reheating were employed to parameterize the collective cosmological behavior of the inflaton and curvaton. This provided a set of viable cosmological histories, assuming the inflaton decayed instantaneously at the end of inflation, as illustrated in Figures 8 and 9. Importantly, this analysis also bounded the number of efolds of inflation for reasonable reheating scenarios, Eqs. (42) and (43). In this section, these results will allow us to directly address the decay widths of the inflaton and curvaton, Γ_φ and Γ_σ, without having to precisely specify w_re or N_re. Before exploring parametric maps between inflaton and curvaton parameter space, we first calculate a general upper bound on the decay width of the curvaton (Γ_σ) to Standard Model particles.
After inflation, the curvaton must survive long enough that it comes to dominate the energy density of the universe. An upper bound on the curvaton decay width can be derived by noting that, to good approximation, while it is oscillating in its potential the curvaton field dilutes like matter in an expanding universe. As explored in Section 4, the curvaton will slowly roll, and its energy density V_0σ will remain approximately constant, until the inflaton's energy density (initially ~ V_0) has diluted enough that ρ_φ ~ V_0σ. At this time, the curvaton begins oscillating in its potential and diluting like matter (∝ a^-3), while the inflaton's radiation-like energy density dilutes as a^-4. Thereafter, once the universe has expanded further by a factor of Δa ≈ 100, the curvaton energy density will exceed the inflaton's energy density by a factor of 100, as required by the Planck bound on isocurvature fluctuations (ρ_φ/ρ_tot < 0.0089) given in Section 4. Using the instantaneous decay approximation, the total decay width of the curvaton will be approximately equal to the Hubble constant when the curvaton decays, Γ_σ ~ H_σ. Thus, using the relation 3H^2 = ρ/M_p^2, the fact that the energy density of the universe when the curvaton begins oscillating (ρ_φ ~ V_0σ) will dilute as a^-3, and that Δa ≈ 100, the maximum conceivable curvaton decay width consistent with a quartic hilltop inflaton cosmology is given by Eq. (44), where this expression has been re-phrased in terms of the inflaton mass by incorporating Eqs. (8), (27), (34), V_0σ = λ_σ^3 Λ^4 / 432, the limiting case of N_* = 40 and λ_σ = 6.9 × 10^-14, and the fact that the Hubble constant H = sqrt(V_0σ/3M_p^2) scales as a^-3/2 during matter-dominated expansion. An upper bound on the curvaton decay width is given above; a trivial lower bound arises from requiring that the curvaton decay before the onset of BBN, Γ_σ ≳ 10^-23 GeV (T_re > 4.7 MeV).
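The order of magnitude of both Γ_σ bounds can be reproduced from the quantities quoted above (Γ ~ H at decay, Δa ≈ 100, V_0σ = λ_σ^3 Λ^4/432). The Λ = 10^19 GeV, λ_σ = 6.9 × 10^-14 inputs are one illustrative corner of the allowed space, and g_* ≈ 10.75 at BBN temperatures is our added assumption:

```python
import math

M_P = 2.435e18  # reduced Planck mass, GeV

def hubble(rho):
    """H = sqrt(rho / 3 M_p^2), with rho in GeV^4; returns GeV."""
    return math.sqrt(rho / (3.0 * M_P**2))

# Upper bound: the curvaton decays (Gamma ~ H) only after the universe
# grows by Delta_a ~ 100 past rho_phi ~ V0_sigma, so rho falls by 100^3.
lam_sigma, cutoff = 6.9e-14, 1e19                # illustrative inputs, GeV
V0_sigma = lam_sigma**3 * cutoff**4 / 432.0      # height of curvaton potential
gamma_sigma_max = hubble(V0_sigma / 100.0**3)

# Lower bound: decay before BBN, Gamma > H(T ~ 4.7 MeV), with
# rho = (pi^2/30) g_* T^4 and g_* ~ 10.75 (our assumption).
T_bbn, g_star = 4.7e-3, 10.75
gamma_sigma_min = hubble(math.pi**2 / 30.0 * g_star * T_bbn**4)
```

With these inputs the lower bound comes out at ~10^-23 GeV, matching the value quoted in the text, while the upper bound for this particular (λ_σ, Λ) corner lands within the quoted 10^-2 to 10^-23 GeV window.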
Note that these bounds on the curvaton decay width hold irrespective of the curvaton model assumed. Therefore, any terrestrial experiment sensitive to new scalar states with decay widths ranging from 10^-2 to 10^-23 GeV is potentially sensitive to a low scale curvaton. In more detail, Section 6 will show that upcoming searches of Higgs portal parameter space probe the desired range of decay widths, and importantly do so within a parameter space that does not spoil the flatness of the hilltop inflaton and curvaton potentials.

Mapping a quartic inflaton to a quartic curvaton

Because the inflaton and curvaton have potentials of the same form, the formula for the mass of the curvaton at its minimum matches that of the inflaton, Eq. (8), with the replacement V_0 → V_0σ. As explained in the preceding section, the height of the curvaton potential is given by V_0σ = λ_σ^3 Λ^4 / 432; combining these gives Eq. (45). With Λ specified, the curvaton mass at its minimum can be determined within observational bounds, since Planck's 1σ bounds on n_s and A_s restrict the curvaton quartic coupling, 1.9 × 10^-14 ≤ λ_σ ≤ 6.9 × 10^-14 (see Section 4). Furthermore, inspecting Figure 9, it is clear that if an inflaton of mass m_φ is discovered, Λ, and as a consequence m_σ, will be restricted to within about an order of magnitude. Combining Eqs. (8), (11), and (45), we find that requiring the inflaton not to over-produce perturbations during inflation results in the bound of Eq. (46), where we take the lower Planck 1σ preferred value λ_σ = 1.9 × 10^-14, and normalize so that the inflaton perturbations are one-fifth as large as those observed, to avoid substantially altering curvaton perturbations [35,36]. Similarly, combining Eqs. (8), (45), and (34), which requires that the curvaton not over-inflate the universe, yields Eq. (47), where this expression takes the limiting case of λ_σ = 6.9 × 10^-14.
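The hilltop relations used throughout (the VEV, the potential height, and the mass at the minimum) can be cross-checked numerically, assuming the quartic hilltop is stabilized by a φ^6/Λ^2 term — the form that reproduces the quoted V_0σ = λ_σ^3 Λ^4/432, which is our inference about the potential of Eq. (3):

```python
import math

# Quartic hilltop with an assumed sextic stabilizer:
#   V(phi) = V0 - (lam/4) phi^4 + phi^6 / Lambda^2
# Minimizing gives phi_min^2 = lam Lambda^2 / 6, and demanding
# V(phi_min) = 0 fixes V0 = lam^3 Lambda^4 / 432.
def hilltop(lam, cutoff):
    v = cutoff * math.sqrt(lam / 6.0)       # VEV at the minimum
    V0 = lam**3 * cutoff**4 / 432.0         # height of the potential
    m = lam * cutoff / math.sqrt(3.0)       # mass at the minimum, V''(v)^(1/2)
    return v, V0, m

def V(phi, lam, cutoff, V0):
    return V0 - 0.25 * lam * phi**4 + phi**6 / cutoff**2
```

With Λ fixed, scanning λ_σ over Planck's 1σ window 1.9 × 10^-14 to 6.9 × 10^-14 then pins m_σ = λ_σ Λ/√3 to within a factor of a few, which is the sense in which Eq. (45) determines the curvaton mass "within observational bounds."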
An upper bound on the curvaton decay width can be set directly from the inflaton's decay width by noting (as derived at the outset of this section) that the Hubble constant must dilute by a factor of ~ 100^-3/2 before curvaton decay, giving Eq. (48). The main factors setting the range of predicted values are the requirements that the inflaton produce small perturbations, P_ζφ < 4 × 10^-10, that curvaton-induced inflation last for less than an efolding, that isocurvature perturbations be small, and that the curvaton produce the perturbations observed on the CMB (to within 1σ of Planck's reported values for the power spectrum and spectral index). For the inflaton and curvaton Higgs portal parameter space considered in the next section, this bound on the curvaton decay width is stronger than that of Eq. (44). Some example inflaton masses and decay widths are listed in Table 1, along with a predicted range of values for corresponding curvaton masses and decay widths. In this Table, Eqs. (43) and (42) are employed to determine the number of efolds in Eqs. (46) and (47).

Mapping a quartic curvaton to a quartic inflaton

With a similar procedure, we can find upper and lower bounds on a small field quartic inflaton from measurements of a small field quartic curvaton. Using the Planck collaboration bound on λ_σ, Eq. (27), along with Eqs. (8), (11), and (45), gives Eq. (49), where again we use the limiting case λ_σ = 1.9 × 10^-14. Next we again combine Eqs. (8), (45), and (34) to find a lower bound on the inflaton mass, Eq. (50), where we have used the limiting value λ_σ = 6.9 × 10^-14. Finally, a lower bound on Γ_φ arises directly from Eq. (48), giving Eq. (51). Some inflaton masses and decay widths predicted from curvaton masses and decay widths are shown in Table 1, again using Eqs. (42) and (43) to iteratively determine N_*. Table 1 also gives the ranges of Λ, V_0^(1/4), and N_* values predicted by the quartic inflaton-curvaton model for a given value of m_φ or m_σ.
These ranges are visually apparent in Figure 9, where a quartic inflaton with fixed m_φ is confined to a range of permitted Λ and V_0 values.

Higgs portals to low scale inflation

A simple way for the inflaton and curvaton to couple to Standard Model particles in a renormalizable fashion (in this case allowing the inflaton or curvaton to dump its energy into a bath of Standard Model particles after the end of inflation) is through a Higgs portal operator [61-63]. A scalar field coupled to the Higgs in this manner can be probed at the LHC [64-69] and other low energy experiments [70-77]. In the case of low scale inflation, this section demonstrates that the inflaton-Higgs and curvaton-Higgs couplings can be small enough not to spoil the flatness of the inflaton or curvaton potentials through radiative corrections, while allowing for enough inflaton/curvaton-Higgs mixing to efficiently reheat the universe after inflation, all within parameter space accessible at upcoming low energy experiments. A Higgs portal inflaton appearing at meson factories has been studied previously in the context of large field inflation, specifically for a scalar inflaton field non-minimally coupled to gravity [42,43]. Non-minimally coupled inflation models rely on the inflaton potential becoming flat at large field values, as determined by the ultraviolet running of the inflaton's coupling to gravity. In the non-minimally coupled scenario of Refs. [42,43], the observed spectrum of primordial perturbations restricts the inflaton mass to m_φ ~ 0.27-1.8 GeV. However, it is important to note that predictions in non-minimally coupled models of inflation, which by necessity have couplings that change substantially as they are RG-evolved to large field values, are sensitive to corrections from non-renormalizable operators, and equivalently to the unknown ultraviolet dynamics of the theory [78].
On the other hand, the low scale inflaton and curvaton sectors we consider here are very weakly coupled, both to themselves and to the Higgs boson. In spite of a minuscule coupling to the Higgs boson, the remainder of this section shows that the quartic hilltop inflaton (and its lighter curvaton partner) detailed in Sections 2-5 can be found through a Higgs portal at the LHC and other low energy experiments, over a broad mass range, m_φ, m_σ = MeV-TeV. The key point will be that the large VEV predicted for the inflaton and curvaton at their minima allows for sizable mixing with the Higgs, even though the actual Higgs portal coupling is tiny. In the treatment that follows, we will refer exclusively to the inflaton; because the quartic inflaton and curvaton potentials have identical forms, an identical treatment applies to the curvaton. For the parameter space we are interested in, the Higgs-inflaton and Higgs-curvaton couplings are each small enough that the computation of a full 3 × 3 mixing matrix does not alter results. We begin by extending the potential given in Eq. (3) to include the Higgs sector of the Standard Model, with the addition of a quartic inflaton-Higgs portal operator, λ_φh φ^2 Φ†Φ, where Φ is the SM Higgs doublet. Prior to electroweak symmetry breaking, the potential is given by Eq. (52), where λ_φh is the portal coupling and Φ is the Standard Model Higgs, which after electroweak symmetry breaking can be replaced with Φ → (v_h + h)/√2, where h is the neutral component of the SM Higgs doublet and v_h ≈ 246 GeV. In Appendix B we give a complete treatment of Higgs-inflaton mixing, and point out that the Higgs-inflaton portal term does not introduce a substantial tree-level inflaton mass term in the parameter space under consideration. The Higgs boson's observed branching fractions already indicate with 2σ certainty that it decays at least four-fifths of the time like a Standard Model Higgs boson.
Thus it is appropriate to refer to a mostly-Higgs-like and a mostly-inflaton-like mass eigenstate, since the mixing between the two must be small to fit observations. Consistent with Section 2, we designate the mass of the mostly-inflaton eigenstate as m_φ, and the mass of the mostly-Higgs-like eigenstate as m_h. For states which are mostly Higgs and mostly inflaton, the mixing angle between the Higgs and inflaton gauge eigenstates is defined in Eq. (53), where we set m_h ≈ 125.7 GeV in calculations.^6 Contributions to m_φ and v_φ from the Higgs portal interaction are negligible (see Appendix B), and so the mass and vacuum expectation value of the mostly-inflaton state, v_φ and m_φ, are given by Eqs. (7) and (8). The preceding definition of θ_φ has been chosen so that in the limit of small θ_φ, the mostly-inflaton state mixes less with the Higgs boson, whether m_φ > m_h or m_φ < m_h. In other words, as θ_φ → 0, the inflaton's decay width to Standard Model particles vanishes, regardless of whether the inflaton-like state is heavier or lighter than the Higgs-like state. Examining the relative sizes of v_h, m_h, m_φ and v_φ for the parameter space shown in Figure 2, it is clear from Eq. (53) that, because v_φ ~ 10^3-10^9 GeV, it is possible for θ_φ to be sizable even if λ_φh is small enough that it does not substantially correct the inflaton's quartic self-coupling (λ_φ). The correction to λ_φ from the inflaton's portal coupling to the Higgs is ~ λ_φh^2/16π^2, up to O(10) logarithmic corrections. Therefore, to prevent the Higgs portal coupling from upsetting the flatness of the inflaton's potential, we can require λ_φh < 4π√λ_φ.

Figure 10: Parameter space for low scale inflation, which reheats the universe through a Higgs portal coupling. Constraints from meson decay and collider searches are indicated with thick dashed lines.
Figure 10 (caption, continued): Indirect constraints from the muon's lifetime along with the W, Z-boson masses (Δr) are indicated with a thin orange line, and the indirect constraint from the Higgs boson's decay width measured at the LHC is indicated with a thin gray line. The long-dashed blue line excludes parameter space where the Higgs-inflaton coupling (λ_φh) spoils the flatness of the inflaton's potential during inflation. The dotted pink lines show parameter space where φ decays promptly at the end of inflation for Λ = 10^19 GeV and λ_φ = 10^-13, and where φ decays when the energy density of the universe is ~ (100 GeV)^4; the plot excludes regions where φ decays after big bang nucleosynthesis. On top of the plot, the correspondence between the energy scale during inflation and the quartic inflaton mass is indicated. The range of inflationary energy scales is derived from relations shown in Figure 2; these ranges hold for a generic quartic inflaton, irrespective of the possible addition of a curvaton. With a curvaton model specified, the scale of inflation is more tightly predicted; see Table 1.

In Figure 10, the resulting constraint on the size of the Higgs-inflaton mixing angle θ_φ is shown in terms of m_φ, with a long-dashed blue line. It is interesting that, plotted in the (sin θ_φ, m_φ) plane, the line λ_φh = 4π√λ_φ is independent of the size of the quartic self-coupling λ_φ. This is because making the replacement λ_φh → 4π√λ_φ in the Higgs portal mixing angle results in a mixing angle proportional to m_φ, tan(2θ_φ) ∝ √λ_φ v_φ ∝ m_φ.

Portal decay widths

Assuming that the inflaton's only non-gravitational coupling to other particles is through its Higgs portal interaction, the decay widths of the mostly-Higgs and mostly-inflaton states are given by Γ_φ = sin^2(θ_φ) Γ_h,SM(m_φ) and Γ_h = cos^2(θ_φ) Γ_h,SM(m_h), Eq. (55), where Γ_h,SM(m) is the decay width of a boson of mass m with Yukawa and gauge couplings identical to those of the Standard Model Higgs boson.
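The scaling claim tan(2θ_φ) ∝ √λ_φ v_φ ∝ m_φ on the λ_φh = 4π√λ_φ line can be checked numerically with the standard 2×2 scalar mixing formula, tan(2θ) = 2 λ_φh v_φ v_h / |m_h^2 − m_φ^2|, which we assume here for the elided Eq. (53) (the absolute value implements the paper's convention that small θ_φ means small mixing on either side of m_h):

```python
import math

V_H, M_H = 246.0, 125.7  # Higgs VEV and mass, GeV

def mixing_angle(lam_ph, v_phi, m_phi):
    """Assumed 2x2 mixing-angle sketch of Eq. (53); |..| keeps
    theta small for both m_phi > m_h and m_phi < m_h."""
    return 0.5 * math.atan2(2.0 * lam_ph * v_phi * V_H,
                            abs(M_H**2 - m_phi**2))

def sin_theta_on_bound(lam_phi, cutoff):
    """sin(theta) on the line lam_ph = 4 pi sqrt(lam_phi).
    Since lam_ph * v_phi = 4 pi lam_phi cutoff / sqrt(6)
    = (4 pi / sqrt(2)) * m_phi, the result depends only on m_phi."""
    v_phi = cutoff * math.sqrt(lam_phi / 6.0)
    m_phi = lam_phi * cutoff / math.sqrt(3.0)
    lam_ph = 4.0 * math.pi * math.sqrt(lam_phi)
    return math.sin(mixing_angle(lam_ph, v_phi, m_phi)), m_phi
```

Two different (λ_φ, Λ) pairs that give the same m_φ then give the same sin θ_φ on this line, which is why the long-dashed blue exclusion in Figure 10 does not depend on λ_φ.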
(As in the prior subsection, all of this discussion applies equally to the Higgs portal curvaton, with the replacement φ → σ in all equations.) With this prescription, θ_φ fully determines how fast the inflaton decays after inflation, and also how diminished the total decay width of the Higgs-like state will be compared to Standard Model expectations. Because we are interested in parameter space where λ_φh ≲ 10^-6, φ → hh and h → φφ decays are neglected. Many calculations of the partial decay widths of a Standard Model Higgs boson have been undertaken. Here we split the calculation of Γ_h,SM(m) into two pieces. For m > 8 GeV, a scalar which couples like the Higgs boson will decay predominantly to pairs of bottom quarks (and top quarks for m > 350 GeV) and pairs of weak bosons. The decay of a heavy Standard Model Higgs boson has been calculated in a number of publications, including QCD corrections to hadronic decays of the Higgs, e.g. Ref. [79]. To compute Γ_h,SM(m) for m > 8 GeV, we utilize output from HDECAY [80], based on the calculations in [79]. For m < 8 GeV, the Higgs-like scalar can decay to photons, leptons, and hadronic states, depending on whether each decay is kinematically permitted. The partial width for Higgs decay to photons is given by the standard one-loop expression [81,82], in which α_EM is the fine structure constant, the sum runs over Standard Model fermions, N_c counts the colors of each fermion, Q_f is the electromagnetic charge of each fermion, τ_i ≡ m^2/4m_i^2 where m_i is the mass of particle i, and A_f, A_W are the fermion and W-boson loop amplitude functions, with the scaling function f(τ) = arcsin^2(√τ) for τ ≤ 1.

Figure 11: The bounds and prospects are the same as in Figure 10, but here we show example quartic curvaton parameter points (Curv 1, Curv 2, Curv 3), alongside the corresponding predicted quartic inflaton parameter space (Inf 1, Inf 2, Inf 3), where these have been found using the results of Section 5. Note that the curvaton and inflaton parameters roughly match those shown in Table 1.
Following [75], in the preceding expressions we use the pion mass and kaon mass for the up, down, and strange quarks, i.e. τ_u = τ_d = m^2/4m_π^2, τ_s = m^2/4m_K^2. This mass choice results in decay widths that match results from chiral perturbation theory [83,84]. The decay width to Standard Model leptons is given by the usual Yukawa expression, Eq. (60).

Figure 12 ("Low scale quartic inflatons mapped to curvaton space"): The bounds and prospects are the same as in Figure 10, but here we show example quartic inflaton parameter points (Inf 1, Inf 2, Inf 3), alongside the corresponding predicted quartic curvaton parameter space (Curv 1, Curv 2, Curv 3), where these have been found using the results of Section 5. Note that the inflaton and curvaton parameters roughly match those shown in Table 1.

Finding low scale inflation through a Higgs portal

Using Eqs. (55)-(60) to calculate the decay rate of φ, cosmological limits can be placed on Higgs portal parameter space for models of low scale inflation that reheat by coupling to the Higgs boson. First, there is an absolute lower bound on the inflaton's (or curvaton's) decay rate, from the requirement that the universe reheat before big bang nucleosynthesis, namely that decay occurs before T_BBN ≈ 4.7 MeV, which excludes parameter space in the lower left of Figure 10. Similarly, one might require that the inflaton decay before the universe reaches a density of ρ_uni ~ (100 GeV)^4, which is necessary for some cosmologies that incorporate electroweak baryogenesis. Using the relation Γ_φ ~ H = sqrt(ρ_uni/3M_p^2), Figure 10 shows parameter space consistent with φ decay before ρ_uni ~ (100 GeV)^4 with a pink dotted line. Next, one might require that φ decay promptly at the end of inflation, i.e. Γ_φ ~ sqrt(V_0/3M_p^2). For a given value of m_φ, specifying either λ_φ ~ 10^-13 or Λ ~ 10^19 GeV uniquely determines V_0, using Eqs. (5) and (8).
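Since the exact Eq. (60) is not reproduced in this extraction, a sketch of the leptonic channel can be written down using the textbook Higgs-like Yukawa width, scaled by the mixing — an assumed form, not a quotation of the paper's equation:

```python
import math

V_H = 246.0  # Higgs VEV, GeV

def width_to_leptons(m, sin_theta, m_l):
    """Gamma(phi -> l+ l-) for a scalar that mixes with the Higgs by
    sin_theta: the standard Higgs-like Yukawa form (our assumption for
    the elided Eq. (60)),
        sin^2(theta) * m_l^2 m / (8 pi v_h^2) * (1 - 4 m_l^2/m^2)^(3/2).
    Returns GeV; zero below the kinematic threshold m = 2 m_l."""
    if m <= 2.0 * m_l:
        return 0.0
    beta = 1.0 - 4.0 * m_l**2 / m**2
    return sin_theta**2 * m_l**2 * m / (8.0 * math.pi * V_H**2) * beta**1.5

m_mu = 0.1057  # muon mass, GeV
# e.g. a 1 GeV mostly-inflaton state with sin(theta) = 1e-4 decaying to muons:
gamma_mumu = width_to_leptons(1.0, 1e-4, m_mu)
```

The sin^2(θ) scaling is what lets the cosmological decay-width requirements of this subsection be recast as the bounds on θ_φ, θ_σ shown in Figures 10-12.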
In Figure 10, we show parameter space consistent with nearly instantaneous reheating after inflation for λ_φ ~ 10^-13 and Λ ~ 10^19 GeV. Altogether, most low scale quartic inflaton models that reheat through the Higgs portal could be probed by more extensive Higgs measurements and searches for low-mass scalars. Next, the relations derived in Section 5 have shown a characteristic mass and decay width spectrum for a quartic inflaton-curvaton pair. These inflaton-curvaton pairs could become apparent through Higgs portal interactions. Figures 11 and 12 each indicate three points in Higgs portal parameter space, identify them as a quartic curvaton or inflaton, respectively, and show where a corresponding low scale quartic inflaton or curvaton would appear. The curvaton and inflaton points shown in Figures 11 and 12 match the parameters given in Table 1. Here the decay width bounds have been recast as bounds on θ_φ, θ_σ by using the definition of the mixing angle and the portal decay width, Eqs. (53) and (55), and the decay width of a Higgs-like scalar for a given mass, detailed in Section 6.1. Looking at Figures 11 and 12, it is apparent that in some cases an extended run of the SHiP experiment [76] would suffice to uncover both a quartic inflaton and a quartic curvaton field. If a scalar state is discovered at SHiP with a mass of 0.3-4 GeV, then if the state is a quartic inflaton, one should expect a quartic curvaton in the mass range 0.01-1 GeV. On the other hand, a quartic inflaton should show up in the mass range 1-100 GeV if the discovered 0.01-1 GeV scalar is a quartic curvaton.
Furthermore, it is apparent from Figure 12, which plots down to very small mixing angles (θ_φ, θ_σ ~ 10^-11), that while substantially expanded meson production would be necessary, as is planned at experiments like SuperKEKB and SHiP [76,87,88], future Higgs portal searches could conceivably be sensitive to cosmological scalars with decay widths corresponding to BBN reheat temperatures, and thereby discover or rule out classes of low scale inflatons and curvatons. The addition of a Higgs portal singlet scalar with a large mass can alter the relationship between the W-boson mass, the Z-boson mass, the Fermi constant, and the decay rate of the muon. If the Higgs portal singlet is massive enough, its corrections to the electroweak bosons' self-energies are too large, given the observed lifetime of the muon, leading to a bound sin θ_φ ≲ 0.2 for m_φ ≈ 300 GeV at 95% confidence [89]. This indirect bound is shown in Figure 10 with a thin orange line.

Experimental probes of MeV-TeV mass Higgs portal scalars

A m_φ ≳ 140 GeV Higgs portal scalar can be detected at the LHC, largely through its decays to leptons, pp → φ → ZZ → 4ℓ and pp → φ → WW → ℓν ℓν [90,91]. A statistical combination of ATLAS and CMS results [92] yields the tightest bound on a Higgs portal scalar in the mass range m_φ ~ 150-250 GeV (shown in dashed green in Figure 10). A portal interaction would diminish the effective width of the Higgs boson that has been observed at the LHC. ATLAS and CMS have placed the most restrictive lower bound on the Higgs width using Higgs decays to leptons and photons, h → ZZ → 4ℓ and h → γγ [93]. The combined limit on the signal strength of the Higgs (μ_higgs ≡ σ_meas./σ_SM) is μ_higgs > 0.87 at 95% confidence, which corresponds to an upper bound of sin θ_φ < 0.36 at 95% confidence. This indirect bound would not be sensitive to a Higgs portal scalar mass-degenerate with the observed Higgs boson, and applies for m_φ < 120 GeV and m_φ > 130 GeV.
Because a Higgs portal scalar couples to quarks, it contributes to the amplitude for Standard Model meson decay. The portal scalar considered here couples to Standard Model fermions in the same proportions as a Standard Model Higgs boson of mass m_φ, as described in Section 6.1. Therefore, the Higgs portal scalar preserves quark flavor at tree level, but can induce meson decay processes like B → Kμ+μ- at loop level, through "penguin" diagrams containing internal W, Z boson lines. References [71,73,75,76] have catalogued the bounds on Higgs portal scalars from loop-induced decays of mesons in the mass range m_φ ~ 0.001-5 GeV, displayed in Figures 10-12.

Conclusions

The spectrum of tensor perturbations produced by low scale inflation models, and by extension the energy scale of inflation, is too small to be uncovered by cosmological surveys. However, this study has shown that low scale inflatons which roll to large field values, and to some approximation the corresponding energy scale during inflation, can be probed at colliders and meson factories. Broadly speaking, low scale inflation deserves attention because recent cosmological surveys have begun ruling out high scale models. In addition, low scale inflation may be a necessity if our universe contains axions. From a theoretical standpoint, low scale inflation can be described with a low-energy effective field theory, whereas high scale inflation requires suppression of radiative corrections from trans-Planckian dynamics. The possibility of finding an inflaton at a collider may seem exotic, partly owing to an assumption that inflatons are too heavy for terrestrial production. In the regime of large field inflation this is often true (in the case of m^2 φ^2 large field inflation, m ~ 10^-6 M_p). However, in the case of low scale, small field inflation, it is natural to suppose that the inflaton begins with a nearly null field value subsequent to a phase transition.
In this "hilltop" case, the inflaton rolls down its potential, settling at a large vacuum expectation value. This large VEV, along with the tiny self-couplings required of a low scale slow-roll inflaton, results in a small inflaton mass detectable at a low energy experiment. This study has shown that a small field quartic hilltop potential implies an inflaton mass ranging from MeV to PeV, corresponding to an inflationary energy scale ranging from GeV to EeV, which can be probed at terrestrial collider experiments through a Higgs portal interaction. The Higgs portal cosmology and low-energy phenomenology developed here for a simplified quartic hilltop model of inflation could be applied to broader classes of small field inflation that initiate with a nearly null inflaton field value and roll to a large vacuum expectation value. It is particularly interesting that, owing to its large vacuum expectation value at the end of inflation, such an inflaton can have a tiny coupling to the SM Higgs boson (λ_φh ≲ 10^-6), yet still have sizable enough mixing to rapidly reheat the universe, all without spoiling the flatness of the inflaton's potential through radiative corrections. This constitutes one clear mechanism for a low scale inflaton with an extremely flat potential to substantially couple to the Standard Model, without fine-tuning. This also reinforces the cosmological import of Higgs portal scalar searches, both at high energy colliders like the LHC and in flavor-violating meson decays at experiments like KEKB, BEPC, and SHiP. Intriguingly, this study has demonstrated that once a complete cosmology is specified, and primordial perturbations are accounted for, it is possible to make sensible predictions for the relative masses and decay widths of scalars associated with low scale inflation. The fairly simple case of a quartic hilltop inflaton paired with a quartic curvaton has been studied in detail, and maps between the masses and decay widths of each have been charted.
The same methods can be used to infer the energy density during low scale inflation. Specifically, using a simplified quartic hilltop inflaton in Section 2, the requirement that the inflaton's potential be stabilized by operators in an effective field theory with a sub-Planckian cutoff is sufficient to map the mass of the inflaton to the energy scale during inflation, to within roughly an order of magnitude. After adding a realistic curvaton cosmology, twinned with the requirement that the average equation of state and temperature during reheating have the physically permissible values detailed in Sections 3 and 4, this map tightened, as shown by the restricted values for the inflationary energy density and number of efolds given in Table 1. While this study has focused on a low scale quartic hilltop inflaton, the same cosmological analysis could be applied to any low scale inflaton (or curvaton) model which maintains the necessary flatness of its potential with an initially small field value and small self-couplings. When these scalars roll to their minima and acquire large vacuum expectation values, the same reheating and perturbation considerations which constrained the masses and decay widths of quartic hilltop inflatons and curvatons apply to other low scale inflatons and curvatons. In particular, it will be interesting to extend these techniques to additional hilltop, pseudo Nambu-Goldstone boson, and inflection point models of low scale inflation, to further determine how meson factories, high energy colliders, and other experimental probes of scalar fields could unmask low scale inflation.

A Small field hilltop models

In this appendix we examine a number of small field hilltop models, quantifying fine-tuning in quadratic, cubic, and quartic hilltop inflation. In part, this will justify the choice of a quartic inflaton plus quartic curvaton model as the simplest practicable case of small field inflation driven by a single Lagrangian term.
Small field inflation requires an especially flat potential (compared to large field inflation), so it is natural to consider a hilltop potential, e.g. of the form V(φ) = -λ_n φ^n + V_0. For this potential, a very flat portion exists at the origin of field space. Typically, the self-coupling terms of a hilltop inflaton must be very small to permit inflation. To understand why, it is instructive to consider a scalar potential familiar to particle theorists, the potential of the Higgs boson in the Standard Model, and examine why, with its comparatively large self-coupling, the Higgs potential does not permit hilltop inflation. (Sometimes the Higgs boson, with an additional large coupling to gravity, is considered as the inflaton [20]. In that non-minimally coupled case, the Higgs begins inflation at very large field values. Here we study the Higgs hilltop inflation scenario, where the Higgs has no new coupling to gravity and has a nearly null initial field value.) To attempt Higgs hilltop inflation, one considers a Higgs rolling from its hilltop at a nearly null value h ≈ 0 to its electroweak minimum h ~ 246 GeV. First we must address how the Higgs might have a nearly null initial field value. One might suppose that after electroweak symmetry breaking, the Higgs automatically starts near the top of the hill. However, assuming a Standard Model-like phase transition, the thermal fluctuations of the Higgs would be too large (O(100 GeV)) and inflation would not occur. For the moment we will ignore thermal fluctuations and assume that the Higgs field can begin with an arbitrarily uniform null field value; some discussion of how this can be achieved for hilltop potentials was provided in Section 2. However, even setting aside thermal fluctuations, another restriction on a nearly null initial field value comes from the fluctuations in scalar fields induced by the de Sitter (inflationary) space they presumably occupy.
In other words, if we specify that the initial Higgs field value is very nearly null, we may violate the intrinsic quantum uncertainty of a scalar field in de Sitter space. A scalar field in a de Sitter space with Hubble constant H fluctuates as δh ∼ H/2π. We will see that the initial field value necessary for 20−50 efolds of Higgs hilltop inflation is much smaller than this, h_ini^(60 efolds) ≪ H/2π. We begin with a toy Higgs hilltop potential, V(h) = V_0 − (µ_h²/2)h² + (λ/4)h⁴, with µ_h ≡ v√λ, where v is the Higgs vev and V_0 = λv⁴/4, such that when the Higgs is sitting at its electroweak minimum, it does not over-contribute to the dark energy of the universe (V(h_min) ≈ 0). We can compute how close to h = 0 the Higgs field must be in order for the universe to inflate by N efolds,

N ≃ (1/M_p²) ∫ (V/V′) dh ≃ (V_0 / (µ_h² M_p²)) ln(h_end / h_60),

where we have dropped the quartic Higgs term, which will be irrelevant at small field values, and defined the Higgs field value at the end of inflation, h_end, and at the start of 60 efolds of inflation, h_60. We determine h_end by solving for the field value at which ε = 1, and use v = 246 GeV to obtain h_60 ≈ 10^(−17) e^(−10^34) GeV, which is absurdly infinitesimal compared to quantum fluctuations in the Higgs field. Therefore, to inflate our universe to the extent implied by CMB observations, the initial Higgs field value would need to be specified well within the de Sitter quantum uncertainty limit. Conversely, one might ask the maximum number of efolds achievable with the Higgs hilltop potential while staying within the de Sitter quantum uncertainty limit. The answer is tiny: the maximum achievable is N_max^(Higgs hilltop) = 10^(−32). This means that the Standard Model Higgs potential would not generate enough inflation in a hilltop scenario, as a consequence of its relatively large self-coupling. We now discuss fine-tuning and primordial perturbations generated by small field hilltop models of inflation, where the involved scalar fields have tiny self-couplings.
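As a rough numerical sanity check of the scale of the exponential suppression quoted above (our own sketch, not from the paper; the Standard Model values λ ≈ 0.129, v = 246 GeV, the reduced Planck mass M_p ≈ 2.4×10^18 GeV, and the slow-roll estimate h_60 ~ h_end·exp(−N µ_h² M_p²/V_0) are assumptions on our part):

```python
import math

# Assumed Standard Model inputs (illustrative values, not taken from the text)
lam = 0.129          # Higgs quartic self-coupling
v = 246.0            # Higgs vev in GeV
Mp = 2.4e18          # reduced Planck mass in GeV
N = 60               # desired number of efolds

mu_h = math.sqrt(lam) * v    # hilltop mass parameter, mu_h = v*sqrt(lambda)
V0 = lam * v**4 / 4          # hilltop height, V0 = lambda*v^4/4

# Slow-roll estimate: h_60 ~ h_end * exp(-N * mu_h^2 * Mp^2 / V0).
# The exponent is far too large to exponentiate in floating point,
# so we report it directly; it sets the e^(-10^34) suppression.
exponent = N * mu_h**2 * Mp**2 / V0
print(f"suppression exponent ~ {exponent:.2e}")  # of order 10^34
```

The exponent comes out of order 10^34, matching the e^(−10^34) suppression of h_60 quoted in the text.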
φ² hilltop inflation

If practicable, it would be preferable to consider a hilltop potential where a −φ² term drives inflation, similar to the φ⁴ case utilized in the bulk of the paper. The advantage of such a potential is that, in the absence of explicit quartic and cubic terms, this potential is technically natural. However, with the additional requirement that inflation ceases when the inflaton rolls to the minimum of this potential, under the stipulation that Λ ≤ 10^19 GeV, m becomes too large to be compatible with inflation. Specifically, one finds that the power spectrum resulting from such a potential is too large, and that generating 60 efolds of inflation typically requires φ_* < H_*/2π, in violation of the de Sitter space quantum uncertainty limit discussed above. This model can be mended by suppressing the high scale operator with a coefficient δ ≪ 1. However, this implies that corrections from trans-Planckian dynamics are somehow suppressed. The other option is to add in a negative φ⁴ term. The added λ_φ φ⁴/4 term generates a contribution to the mass at loop order, and because (unsurprisingly) its value is equal to or greater than that given in (5), fine-tuning of the model is not ameliorated (as compared to just using the quartic term as the dominant term during inflation).

φ⁴ hilltop inflation

To quantify fine-tuning in the quartic case, it is required that the mass term for the inflaton while rolling through its pivot scale is no more than 10% of the quartic term, m²_(φ,*) < m²_(φ,max) ≡ 0.1 λ_φ φ_*²/4, where we remind the reader that m_(φ,*) is the sum of bare and loop contributions to the inflaton's mass. Note that if the preceding inequality is satisfied at the pivot scale, then it is automatically satisfied at larger field values, during and after inflation (φ grows during and after inflation). We then compare this to the mass generated at one-loop order, using Eqs.
(5) and (6).

φ³ hilltop inflation

It can be shown that fine-tuning is not greatly improved in the case of small field hilltop inflation driven by a φ³ term. One might expect fine-tuning to decrease, because the cubic one-loop-induced mass term depends on two factors of the cubic coupling (instead of one in the case of the quartic). For the potential V = V_0 − (1/3)gφ³ + φ⁵/Λ, the leading loop contribution to the mass is m²_(φ,cubic loop) ≈ g²Λ²/(9·2⁴π²) ≫ m²_0, where g is the dimensionful coupling of the φ³ term. Comparing this to the maximum mass as defined in the prior section, and taking the ratio of m_(φ,max cubic) to m_(φ,cubic loop), using that for hilltop φ³ inflation φ_* ≈ V_0/(g M_p² N_*), one finds m_(φ,max cubic)/m_(φ,cubic loop) ≈ 1.59 ... The requirement that V(φ_min) = 0 determines g in terms of V_0 and Λ, as for the quartic self-coupling in Section 2. Inserting this into Eq. (72), m_(φ,max cubic)/m_(φ,cubic loop) ≈ 0.7 V_0^(3/10) ... For fixed V_0, Λ, the tuning of the cubic hilltop model does not improve over the quartic case.

B Higgs-inflaton and Higgs-curvaton portal particulars

In what follows, as in Section 6, we address quartic inflaton-Higgs mixing, with the understanding that an identical treatment applies to quartic curvaton-Higgs mixing. From the potential of Eq. (52) one obtains the vacuum expectation values of h and φ, the mass matrix for the neutral Higgs component and inflaton, and the corresponding mass eigenstates. A common definition for the mixing angle between the two Higgs portal mass eigenstates (S_1,2) is α, such that

S_1 = h cos α + φ sin α,  S_2 = −h sin α + φ cos α,  (78)

where in turn α is fixed by the potential parameters. In this study, it is convenient to define the mixing angle differently, as discussed in the text surrounding Eq. (53). Specifically, we wish to define the mixing angle so that as the mixing angle vanishes, so too does the decay width of the mostly-inflaton state to Standard Model particles.
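The rotation in Eq. (78) can be sanity-checked numerically. The sketch below (our own illustration, with made-up matrix entries) verifies that the standard two-state mixing angle diagonalizes a symmetric 2×2 mass-squared matrix:

```python
import math

# Hypothetical symmetric (mass)^2 matrix for the (h, phi) system; values illustrative only
m11, m22, m12 = 4.0, 1.0, 0.3

# Standard two-state mixing angle: tan(2*alpha) = 2*m12 / (m11 - m22)
alpha = 0.5 * math.atan2(2 * m12, m11 - m22)
c, s = math.cos(alpha), math.sin(alpha)

# Rotation S1 = h*cos(a) + phi*sin(a), S2 = -h*sin(a) + phi*cos(a)
# gives the rotated matrix elements:
M1 = c * c * m11 + 2 * s * c * m12 + s * s * m22      # heavy eigenvalue
M2 = s * s * m11 - 2 * s * c * m12 + c * c * m22      # light eigenvalue
off = (m22 - m11) * s * c + (c * c - s * s) * m12     # off-diagonal after rotation

print(round(off, 12))  # ~0: the rotation diagonalizes the matrix
```

The trace M1 + M2 equals m11 + m22, and the off-diagonal entry vanishes, confirming the rotation convention.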
Because the mass of the Higgs boson has been measured, one of the mass eigenstates M_1, M_2 must be 125 GeV. We consistently refer to the Higgs-like mass state as "m_h" in this document, the inflaton-like state as m_φ, and the mixing angle between the Higgs-like and inflaton-like states as θ_φ, where we have dropped the portal mass contribution, 2λ_φh v_h², in the final expression. This term will not contribute substantially to the inflaton's mass at its minimum for two reasons. The first reason is that, by necessity, the inflaton's effective mass term at the outset of inflation must be much smaller than its quartic term, λ_φ φ_*² ≫ m²_(φ,*), as detailed in Appendix A. One consequence is that the Higgs portal contribution to the inflaton mass must be much smaller than m_(φ,max), which is smaller than m_φ. However, it should be stressed that the Higgs portal mass contribution 2λ_φh v_h² will be negligible anyway in most of the parameter space we consider, from the requirement λ_φh ≲ 10^(−6), discussed in Section 6 (one might also consider whether v_h ∼ 0 during inflation). The smallness of the Higgs portal operator, λ_φh ≲ 10^(−6), was required so that the Higgs portal quartic would not upset the inflaton's self-quartic coupling through radiative corrections. In fact, in all un-excluded m_φ ≳ 0.05 GeV inflaton-curvaton parameter space in Figures 11 and 12, the portal contribution to the inflaton mass can be neglected without invoking "the first reason" given above. Note again that all of the preceding (including the discussion of the smallness of the Higgs portal mass contribution) is equally applicable to the quartic curvaton, which requires a quartic self-coupling about an order of magnitude smaller than the inflaton's.
Some Elementary Combinatory Properties and Fibonacci Numbers

INTRODUCTION

Significant traces of Combinatorics principles can be found in several civilizations and remote times. In this primitive scenario, the authors Wilson and Watkins (2013) indicate multiple meanings, such as: energetic, poetic, mystical, educational, etc. We have an example in Figure 1, where the authors discuss manuscripts by Ramon Llull, Duke of Venice, in 1210. In this example, a chapter of the work Ars Compendiosa Inveniendi Veritatem (The Concise Art of Finding the Truth, in English translation) began by listing sixteen attributes of God: goodness, greatness, eternity, power, wisdom, love, virtue, truth, glory, perfection, justice, generosity, mercy, humility, sovereignty, and patience. So, Ramon Llull wrote, combinatorially, C(16,2) = 120 short essays of about 80 words each, considering God's goodness related to greatness (Wilson & Watkins, 2013).

Figure 1. Wilson and Watkins (2013) recover primitive concepts that gave rise to Combinatorics.

As a contemporary motivation on the use of Combinatorics, and for a preliminary discussion, let us consider particular decompositions of the positive integer 7. Trivially, we can write, for example, 7 = 7, 7 = 6 + 1, 7 = 1 + 6, or 7 = 3 + 1 + 3. In this set we see some examples called compositions of the number 7. When we examine the order of the terms, 7 = 6 + 1 and 7 = 1 + 6 represent different compositions of the positive integer n = 7. Furthermore, compositions of a particular type or species may occur, such as 7 = 3 + 1 + 3, which represents the same composition read in either direction. When this occurs, we say we have a palindrome (Grimaldi, 2012). Let's see another example in more detail.
In fact, we can say that there are 16 ways to write the positive integer n = 5 as a sum of positive integers (see Figure 2). Note that, when order is not relevant, the compositions 5 = 4 + 1 and 5 = 1 + 4 are considered equal. In any case, following the reasoning employed by Grimaldi (2012), we seek to determine a formula for the number of compositions of a positive integer n. In a heuristic way, the author advises considering the trivial composition 5 = 1 + 1 + 1 + 1 + 1, counting the number of installments (5) and the number of additive operations (4) (Figure 2). Intuitively, and comparing with the data in Figure 2, we could determine that 2⁴ = 16 represents the number of subsets of the set of the four '+' positions, and hence the number of compositions. On the other hand, if we take the subset {1, 3}, we can form the corresponding composition by considering the position of the additive operations '+' in the 1st and 3rd positions of the decomposition. Grimaldi (2012) observes that this subset indicates that parentheses should be placed around the '1' in the 1st and 3rd positions, where addition operations occur. Still on Figure 2, we identify the subset of compositions that involve only the digits '1' and '2' in the composition of the positive integer n = 5. Note that if we eliminate all other compositions (8 compositions), keeping only those with the digits '1' and '2', we can determine their count (later, Table 1). We can see (Figure 2) that only two palindromes occur among these: 1 + 1 + 1 + 1 + 1 and 2 + 1 + 2. We can easily observe the existence of central terms in both palindromes. In Figure 3 we can see the example of a palindrome with an odd central term. Before proceeding to the subsequent sections, it is essential to point out the considerations of De-Temple and Webb (2014), who examine some standard procedures in solving combinatorial problems, namely: (i) introduce a notation, with respect to which h_n represents the answer to be determined in the nth case; (ii) determine some particular initial values h_1, h_2, h_3, h_4, etc.
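The counts above can be checked by brute force. The sketch below (our own illustration, not from the paper) enumerates all compositions of n = 5 via the '+'-position bijection just described:

```python
from itertools import product

def compositions(n):
    """Yield all compositions of n as tuples, via the '+'-position bijection:
    each of the 2^(n-1) subsets of gap positions in 1+1+...+1 gives one composition."""
    for cuts in product([0, 1], repeat=n - 1):  # 1 = keep a '+', 0 = merge adjacent 1s
        parts, run = [], 1
        for c in cuts:
            if c:
                parts.append(run)
                run = 1
            else:
                run += 1
        parts.append(run)
        yield tuple(parts)

all5 = list(compositions(5))
only12 = [c for c in all5 if set(c) <= {1, 2}]
palin = [c for c in only12 if c == c[::-1]]

print(len(all5))      # 16 = 2^(5-1) compositions in total
print(len(only12))    # 8 compositions using only the parts 1 and 2
print(sorted(palin))  # the two palindromes: (1,1,1,1,1) and (2,1,2)
```

This reproduces the three counts used in the text: 16 compositions, 8 of them with parts in {1, 2}, and exactly two palindromes.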
by direct count; (iii) employ combinatorial reasoning in order to determine the recurrence relation that expresses h_n in terms of the previous values of the sequence; (iv) solve the recurrence relation in order to find a unique solution. In this work, we carry out a bibliographical survey that supports a mathematical discussion of some combinatorial problems with varied interpretations, based on works of great names in this field of knowledge, such as Benjamin and Quinn (1999; 2003), Hemenway (2005), Koshy (2001; 2019), Grimaldi (2012), Singh (1985) and Vajda (1989). These works relate to the emblematic Fibonacci Sequence and its development. Such an approach is usually neglected by History of Mathematics textbooks, which place too much emphasis on its anecdotal aspects and on a bias that does not go beyond curiosity about the history of the production of the 'immortal rabbits' (Figure 4).

Figure 4. Gullberg (1997) discusses approaches to the Fibonacci Sequence.

Given the above, we seek to discuss the following question: "What elementary properties of Combinatorics allow a meaning and/or interpretation for the set of numbers that occur in the Fibonacci Sequence?" Thus, in the following sections we bring a theoretical discussion based on a mathematical approach to the subject.

COMPOSITIONS, PALINDROMES AND FIBONACCI NUMBERS

In the introductory section we found, in a heuristic way, that there is a correspondence involving the set of compositions of the positive integer n = 5 and the subsets of the set of '+' positions, which corresponds to the arithmetic expression 2⁴ = 16 and represents the number of subsets of all its partitions. Inductively, according to Grimaldi (2012, p.
25), we could write that, "given a positive integer n, we can determine the number 2^(n−1) as the total of compositions". However, how do the previous properties relate, more precisely, to the Fibonacci sequence defined by the relation F_n = F_(n−1) + F_(n−2), with initial values F_0 = 0 and F_1 = 1?

About the Fibonacci Sequence and Some Elementary Properties

For the purpose of the problem that we seek to discuss, let us consider the recurrence F_n = F_(n−1) + F_(n−2) and the initial values F_0 = 0 and F_1 = 1, which determine the values in Table 1. Consider, following Grimaldi (2012, p. 25), the odd integer n = 11: when we consider its decompositions in terms of '1' and '2', each palindrome must contain an odd central term, because the digit '1' is the only possibility for a central term in the palindrome. Grimaldi (2012) observes that, for these palindromes, we can consider the set of compositions of the integer n = 5 on each side of the central term, which corresponds to the value F_6 = 8. Furthermore, for this set of 8 compositions, with the central term fixed at '1', we determine all the palindromes present in the decomposition of the integer when considering the digits '1' and '2'. From this particular case, Grimaldi (2012, p. 26) states that, "in general, if the positive integer n is odd, when we consider the set of compositions, we determine that F_((n+1)/2) corresponds precisely to the set of palindromes". On the other hand, when dealing with an even integer we have, for example, n = 12 and F_12 = 144. Let's take a palindrome whose center is the '+' sign, for which the correspondence F_7 = 13 holds, counting the compositions of n = 6 on each side.
However, there are also compositions (palindromes) whose central term is an even number (the digit '2'), in which case the central term could not be odd, and whose number is determined analogously. Before finishing this section, we once again refer to Grimaldi (2012), who establishes a way to determine the number of palindromes in a composition. Indeed, considering the positive integer n = 12, to determine the palindromes among its F_13 = 233 compositions into '1's and '2's, we consider the cases: (i) if the central term of a composition is a plus sign '+', we consider it in the form of a 'reflection in the mirror', examining the compositions of the integer n = 6, which are equivalent to the amount of F_7 = 13 palindromes; (ii) if a number occurs as the central term, Grimaldi (2012) states that it must be even (that is, the digit '2' must occur), and one must consider, in this case, the compositions of the integer n = 5, giving F_6 = 8 palindromes. Finally, by an additive principle, when considering the whole set of palindromes, we obtain 13 + 8 = 21 palindromes present in the compositions of the positive integer n = 12.

WAYS TO COMPLETE A BOARD AND SOME THEOREMS

In the previous section we pointed out some properties that, through combinatorial arguments, reveal properties intrinsically related to the Fibonacci Sequence. Preserving some of the previous arguments, we have Table 2, which relates the number of compositions of an integer n.

Table 2. Compositions c_n of a positive integer from the digits '1' and '2'.

In the interest of providing greater rigor and increasing detail in our discussion, we establish, from now on, the following definition: a board is a formation of squares, called cells or positions. These positions are enumerated, and this enumeration describes the position. Such a board will be called an n-board (Spreafico, 2014).
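The palindrome bookkeeping above can be verified directly. The following sketch (our own check, not from the paper) counts palindromic compositions of n = 12 into parts 1 and 2 and compares them with the Fibonacci numbers:

```python
def fib(n):
    # F_0 = 0, F_1 = 1, the convention used in the text
    a, b = 0, 1
    for _ in range(n):
        a, b = b, a + b
    return a

def comps12(n):
    """All compositions of n into parts 1 and 2 (there are F_{n+1} of them)."""
    if n == 0:
        return [()]
    out = [(1,) + c for c in comps12(n - 1)]
    if n >= 2:
        out += [(2,) + c for c in comps12(n - 2)]
    return out

n = 12
cs = comps12(n)
pal = [c for c in cs if c == c[::-1]]
print(len(cs))   # 233 = F_13 compositions of 12
print(len(pal))  # 21 = F_7 + F_6 = 13 + 8 palindromes
```

The totals match Grimaldi's two cases: 13 palindromes mirrored about a '+' and 8 mirrored about a central '2'.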
Next, we visualize the example of a 4-board. For its filling and possible configurations, the authors Benjamin and Quinn (2003) use only 1×1 squares (in the lighter color) and 1×2 dominoes (in the darker color). Easily, if we wish to determine all possible tilings, combining squares and dominoes, with the support of Figure 6 we can conclude that there is a total of F_5 = 5 possibilities (Figure 6, left). We could still choose just the set of tilings that have at least one domino (in the darker color) and, in this case, determine compositions of squares and dominoes; that is, we eliminate the first configuration, the one using only 1×1 squares. In Figure 7 we visualize the generalization of an n-board and then establish a relationship, in view of Theorem 1, with the Fibonacci sequence. Before proceeding, it is urgent to formalize certain heuristic and intuitive operations and arguments used just now, considering Theorem 1.

Theorem 1: The number of ways to cover a 1×n board with 1×1 squares and 1×2 dominoes is equal to F_(n+1) (Spivey, 2019).

Demonstration: We can define c_n as the number of ways to cover a 1×n board with 1×1 squares and 1×2 dominoes. For a 1×1 board we use only a 1×1 square, i.e., we have c_1 = 1. For a 1×2 board, we have two possible configurations: two squares or one 1×2 domino. In this case, note that c_2 = 2. Then, when we consider the number c_n of ways to cover a 1×n board, there are only two possibilities, namely: (i) a set of tilings whose first piece is a 1×1 square; (ii) a set of tilings whose first piece is a 1×2 domino. If (i) occurs, that is, the first piece is a square, the remaining n − 1 positions can be covered in c_(n−1) ways. With the same reasoning, if (ii) occurs, the remaining n − 2 positions can be covered in c_(n−2) ways.
To consider the set c_n as the total number of ways to cover a 1×n board, by an additive principle we add the two cases and obtain c_n = c_(n−1) + c_(n−2). In Figure 8, Grimaldi (2012) seeks to determine the number of tilings of a 2×3 board. The author notes that horizontal and vertical dominoes can be used (see Figure 7, right). The author explains that if we have a 2×2 board we have two ways of tiling: using two horizontal 1×2 dominoes or two vertical 2×1 dominoes. If we consider the 2×3 board (Figure 8a), the author suggests decomposing the previous figure and counting the tilings for the case of the 2×2 board, in which we have q_2 = 2 (Figure 8c).

Theorem 2: The number of ways q_n to cover a 2×n board with dominoes is equal to F_(n+1).

Demonstration: In the general case, for a 2×n board, we have two possibilities: (i) the first domino is vertical, so the remaining board will be of the type 2×(n−1) and we will cover it in q_(n−1) distinct ways; (ii) the board starts with two horizontally juxtaposed dominoes, so the remaining board will be of the type 2×(n−2) and we will cover it in q_(n−2) distinct ways. Considering both possibilities, by a combinatorial principle, Grimaldi (2012) establishes that q_n = q_(n−1) + q_(n−2), for n ≥ 3.
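Both theorems can be checked by direct recursion. The sketch below (our own illustration) counts the 1×n square-and-domino tilings of Theorem 1 and the 2×n domino tilings of Theorem 2:

```python
def tilings_1xn(n):
    """Tilings of a 1 x n board with 1x1 squares and 1x2 dominoes (Theorem 1)."""
    if n <= 1:
        return 1                     # empty board, or a single square
    return tilings_1xn(n - 1) + tilings_1xn(n - 2)  # first piece: square or domino

def tilings_2xn(n):
    """Tilings of a 2 x n board with dominoes only (Theorem 2)."""
    if n <= 1:
        return 1                     # n=0: empty; n=1: one vertical domino
    return tilings_2xn(n - 1) + tilings_2xn(n - 2)  # start: vertical, or two horizontals

print(tilings_1xn(4))                          # 5 tilings of the 4-board, as in Figure 6
print([tilings_2xn(k) for k in range(1, 7)])   # [1, 2, 3, 5, 8, 13]: F_2, ..., F_7
```

Both recurrences are the same Fibonacci recurrence; only the combinatorial reading of the two cases differs.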
COMBINATORY INTERPRETATION OF ELEMENTARY IDENTITIES

In the preceding sections, we used arguments and reasoning of an eminently combinatorial nature, aiming to show a character of arithmetic invariance of the recurrent sequence defined by F_n = F_(n−1) + F_(n−2), with the initial values F_0 = 0 and F_1 = 1, whose combinatorial properties are usually neglected in History of Mathematics textbooks. Now, let us recall some identities found in the literature of this area of knowledge, such as F_1 + F_2 + ... + F_n = F_(n+2) − 1, which, according to Koshy (2001), was demonstrated by the French mathematician François Édouard Anatole Lucas in 1876, and, similarly, related finite sums. Grimaldi (2012) comments that other mathematicians, such as Giovanni Domenico Cassini (1625-1712) and Robert Simson (1687-1768), also found several ways to verify such combinatorial identities. The author also explains that in 1901, Eugen Neto (1846-1919) studied, with a different method, the set of compositions of a positive integer n that avoid the occurrence of the digit '1', as we can see in Table 3. The author suggests observing the arithmetic relations and that, for these initial cases, we can establish a relation with the Fibonacci numbers. On the other hand, Grimaldi (2012) adds that the determination of these compositions can be calculated from a recurrence valid for n ≥ 3. Grimaldi (2012, p.
28) observes that, in the case where n is odd, "the central term of the sum will always be an odd number". For example, taking the compositions of n = 15 without the digit '1', the smallest central term to be used will be '3'. In this case, we consider the compositions corresponding to each side of the central term: each side must sum to 6, and the count for the right side is the same as for the left side. Therefore, there are F_5 = 5 palindromes in the compositions of n = 15 with central term equal to the digit '3'. If the central term is larger ('5', '7', and so on), we determine the corresponding counts in the same way. Consequently, the total number of palindromes present in the decompositions of the integer n = 15 (Table 4) follows, according to the above conditions, by adding these contributions. Before wrapping up this section, we examine a combinatorial interpretation for the Lucas-type identity F_1 + F_2 + ... + F_(n+1) = F_(n+3) − 1, supported by the arguments recorded by Benjamin and Quinn (2003). By Theorem 1, we know that the number of ways to cover a 1×n board with 1×1 squares and 1×2 dominoes is equal to F_(n+1). In Figure 8 we can immediately see that for an (n+2)-board there are F_(n+3) possible tilings; however, exactly one tiling will not contain any 1×2 domino (which in the figure is indicated in gray color). This is the case of the tiling with 1×1 squares only. It follows from this fact that the total quantity of tilings containing at least one domino corresponds to the number F_(n+3) − 1, where we exclude the possibility with squares only, similarly to what we did in Figure 6. Now, by examining Figure 8, we begin to consider the existence of dominoes and the position of the last 1×2 domino. Continuing with the previous steps, step by step, we can identify in Figure 9 the case in which the last domino is in position (1, 2). In that case there must be F_1 = 1 tilings, as there will be a first piece (the domino) and all the rest made up of 1×1 squares (Figure 9).
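The sum identity and its tiling reading are easy to spot-check numerically. This sketch (our own, not from the paper) verifies both for small n:

```python
def F(n):
    # Classical Fibonacci numbers: F_0 = 0, F_1 = 1
    a, b = 0, 1
    for _ in range(n):
        a, b = b, a + b
    return a

def tilings(m):
    """Tilings of a 1 x m board with squares and dominoes; equals F(m+1) by Theorem 1."""
    if m <= 1:
        return 1
    return tilings(m - 1) + tilings(m - 2)

for n in range(1, 10):
    # Lucas (1876): F_1 + F_2 + ... + F_n = F_{n+2} - 1
    assert sum(F(i) for i in range(1, n + 1)) == F(n + 2) - 1
    # Tiling reading: tilings of the (n+2)-board, minus the single all-squares tiling,
    # equals the sum over the position of the last domino.
    assert tilings(n + 2) - 1 == sum(F(i) for i in range(1, n + 2))
print("identities verified for n = 1..9")
```

The second assertion is exactly the "last domino" argument: each summand F_i counts the tilings to the left of the last domino.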
Figure 9. Representation via tiling for the identities.

Finally, summing all the contributions involving tilings, step by step, the tilings of the (n+2)-board, excluding the single tiling made only of 1×1 squares, yield exactly F_1 + F_2 + ... + F_(n+1) = F_(n+3) − 1. Still supported by Figure 7, Benjamin and Quinn (2003) provide an interpretation for a companion identity on boards of odd length. Immediately, as there is an odd number of positions, every tiling must have at least one 1×1 square. In Figure 7 (on the right), the authors Benjamin and Quinn (2003a; 2003b) consider the position of this square, which remains present in all tiling possibilities, since the length of the board is odd. In the previous sections we approached some elementary problems in combinatorics whose arguments and representations culminate, surprisingly, in relations with the Fibonacci Sequence that certain specialized History of Mathematics textbooks tend to neglect. The topic discussed in this article has the potential to instigate future research on the applications of these sequences in combinatorial number theory and, possibly, in other areas involving matrix algebra. In addition, we aim to broaden the study of other algebraic properties of the Fibonacci sequence in general, as well as the possibilities of applications of these sequences in teaching sessions focused on the initial training of mathematics teachers.

FINAL CONSIDERATIONS

In the previous sections we addressed some elementary problems in Combinatorics whose arguments and representations culminate, surprisingly, in relations with the Fibonacci Sequence which, in certain specialized textbooks of the History of Mathematics, are often not addressed and are rarely discussed in mathematics teacher training (De-Temple & Webb, 2014; Koshy, 2019; Spivey, 2019; Vorobiev, 2000).
In particular, we considered certain compositions of a positive integer n, with or without the presence of the digits '1' and '2', and found, for example, that palindromes make it possible to relate and determine sets and subsets that, from a numerical point of view, correspond precisely to the numerical values we indicated, in addition to the theorems involving tilings with squares and dominoes. In our works (Alves, 2017; 2022) we have indicated a non-static and evolutionary understanding of mathematical knowledge, from the birth stage of more primitive ideas, culminating in a specialized scenario in which researchers and mathematicians from different countries express an interest in the same mathematical problem. In this way, elementary identities like those above, which held the interest of professional mathematicians in the past (Stillwell, 2010), can be revisited through combinatorial arguments, expressing heuristic properties for numerous compositions involving Fibonacci numbers (Hoggatt Jr & Lind, 1969). Finally, the problems and approach we discussed in the preceding sections fall under what in Pure Mathematics we call "Combinatorics, which is often called Finite Mathematics, because it studies finite objects. But there are infinitely many finite objects, and it is sometimes convenient to reason about all the members of an infinite collection" (Stillwell, 2010, p. 554). In these terms, Combinatorics stimulates the Mathematics teacher's understanding, from an elementary stage to the realization of a modern research scenario that confirms the vigor of research around Fibonacci numbers.

Figure 3. Grimaldi (2012) discusses the compositions of the positive integer n = 5, showing an example of a palindrome with an odd central term.
Figure 5. Grimaldi (2012) suggests compositions of positive integers in which only the digits '1' and '2' occur.

Figure 7. Benjamin and Quinn (2003) provide a representation via 'board' related to the Fibonacci sequence.

Figure 8. Grimaldi (2012) presents a 2×3 board and describes a relation with the sequence.

Table 1. Description from the recurrence of the Fibonacci numbers.

Table 3. Compositions of a positive integer, except for the digit '1'.

Table 4. Determination of palindromes by the Eugen Neto method (central term and number of palindromes).
On maritime transport costs, evolution, and forecast

During recent years of economic euphoria, globalization, a motor of development, has been possible due to the existence of fast, efficient, and economic maritime transport. Technological development based on economies of scale, improvement of cargo-handling systems, and specialization has allowed putting vessels with very competitive costs on the market. In those years of a bright freight market, increasing costs seemed not to be of much concern, so the problem was not approached with the necessary firmness. Nowadays, things have changed: freight rates have sunk, prices of ships have fallen, and the crisis has reached ship owners. Costs, nevertheless, are still rising, with the exception of 2009, in which, for the first time in ten years, operating costs were sensibly reduced. In this paper, the evolution of the main components of the costs of maritime transport is analyzed, studying their current situation as well as the forecasts for the near future.
Introduction

The operating costs of ships have traditionally been a difficult topic to investigate for lack of reliable data. In fact, at the beginning of my professional life, many ship owners thought that it was too delicate a matter and were reluctant to furnish said data, even in relative terms. Fortunately, things have changed, and nowadays there are trustworthy sources providing valuable information with which to study the evolution of those costs. And this is a matter of maximum importance because, in a globalized market such as that of maritime transport, competitiveness is the main weapon that allows ship owners to attain comparative advantages over their competitors. Of course, it is quite necessary to have benchmarking data with regard to what is being done in other places, often very different, very distant, and with cultures and ways of life very diverse from ours. This way, one can get at least a base on which to build the business framework to develop the shipping activity. On the other hand, important changes that have taken place in nearly all economic sectors, but especially in the maritime sector, as a consequence of the world economic crisis, have produced significant modifications in the operating costs of ships. These have to be considered not only by the ship owners, who carry out the activity of maritime transport, but also by the rest of the participants of the maritime industries: shipyards, the auxiliary industry, ports, etc., whose managerial development is strongly tied to that of the shipping activity and, therefore, pressed by all things that affect it.
Namely, in the past years an important escalation of costs has taken place in the international maritime market, which has led a great number of ship owners to situations of difficult survival. Does this have something to do with the rest of the maritime industries mentioned above? Certainly yes: ship owners have ordered the building of new vessels in shipyards, and the continuity of that work is seriously threatened by the financial situation of the former; the auxiliary industry, ship suppliers, etc., are also creditors of the ship owners; and, of course, there are the banks, who granted credits for the construction of ships, credits that may be difficult to recover. It also matters for world trade, whose exports and imports depend mainly on the availability of maritime transport that is not only efficient but especially economic and reliable, among other qualities. This again places on the table the problem of the operating costs of ships, a matter of enormous importance for ship owners and for the whole international economy. This is why we decided to present this paper on the operating costs of maritime transport, their evolution throughout the past years, and the forecasts that experts in the sector currently make for the future evolution of these costs. The traditional subdivision of the operating costs of ships between fixed costs and voyage costs is well known.
In fact, according to the classical subdivision of maritime economics, costs can be classified by their relation to the production volume: costs are considered fixed when they are independent of that volume and variable when they depend on it. Nevertheless, it is difficult to find in maritime transport voyage costs that are really proportional to magnitudes related to production. Namely, and always with some restrictions, the bunker consumption could be considered proportional to the distance covered (or to the time employed to cover it at a certain speed), and certain costs relative to the loading/unloading of cargo proportional to the transported tonnage or, perhaps better, to the tonnage moved in every port. On the other hand, it is not at all clear what should be considered the production volume in maritime transport.

The cost structure of maritime transport

Thus, the costs of maritime transport can be classified as fixed costs and variable costs (in our case, voyage costs); within these, some components can be considered proportional to magnitudes more or less related to the level of activity. Independent from the previous ones, there are costs of sales, which are generally proportional to the earnings, or commissions on sales, but these are often considered a reduction of income, which is entered by the net amount.
From a strictly theoretical point of view, the fixed costs, independent of the activity developed, remain constant even when there is no activity at all; in other words, even when the ship remains idle. But, for the purposes of our classification, we will treat as fixed costs those whose objective is to maintain the ship in seaworthy condition to offer transport services, even though the vessel may be laid up. So the ship must have its crew on board, certificates in order, engines in operating condition, insurance policies in force, etc. (aspects to a certain extent relaxed when a ship is really laid up). Bearing in mind these issues, the fixed costs will really have such a character, and it will only be necessary to add the voyage costs to obtain the total costs.

The voyage costs, or variable costs, are a function of the activity the ship develops, and are incurred only when the vessel is in service. Unlike the fixed costs, they depend on each specific voyage and, especially, on the ports of call, the distance covered, the cargo handling operations, the possible need to pass through canals, etc.

Within the fixed costs it is necessary to distinguish the capital costs (CAPEX, costs derived from the ownership of the ship) and the running costs or fixed operating costs (OPEX, costs necessary to have the vessel ready for operation): by means of both types of costs the operator fulfils his basic aim, already indicated, of having the ship seaworthy to provide the transport service. Depreciation and financial costs are the capital costs; crew, insurance, maintenance and repairs, and administration are the running costs or operating costs.

Certainly, as soon as a voyage starts, the ship owner incurs voyage costs, which can be grouped into a single category of voyage costs or treated separately from the cargo-handling costs.
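The classification above can be written down as a minimal bookkeeping sketch in Python. All daily figures below are assumed round numbers for illustration, not data from the text:

```python
# Illustrative cost model following the paper's classification;
# every figure is a hypothetical round number, not taken from the text.
CAPEX = {"depreciation": 5000.0, "interest": 3000.0}                 # USD/day
OPEX = {"crew": 2500.0, "insurance": 600.0,
        "maintenance_and_repairs": 900.0, "administration": 400.0}   # USD/day


def fixed_cost_per_day():
    """Fixed costs = capital costs (CAPEX) + running costs (OPEX)."""
    return sum(CAPEX.values()) + sum(OPEX.values())


def voyage_cost(bunkers, port_costs, canal_tolls, cargo_handling=0.0):
    """Voyage (variable) costs, incurred only when the ship trades;
    cargo handling may be segregated or set to zero (e.g. FIOST terms)."""
    return bunkers + port_costs + canal_tolls + cargo_handling


def total_cost(voyage_days, bunkers, port_costs, canal_tolls, cargo_handling=0.0):
    """Total cost of a voyage: fixed costs accrue every day; voyage costs are added on top."""
    return voyage_days * fixed_cost_per_day() + voyage_cost(
        bunkers, port_costs, canal_tolls, cargo_handling)
```

With these assumed figures, a ten-day voyage with 150,000 USD of bunkers and 40,000 USD of port costs accumulates 124,000 USD of fixed costs plus 190,000 USD of voyage costs.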
The voyage costs, always dependent on the specific trip, are costs inherent in the activity of the operator; that is to say, they are costs that the ship owner or the time charterer must bear to develop the maritime activity of the voyage. Among the voyage costs, the following items are usually distinguished: bunker consumption, port costs, canal tolls, and cargo-handling costs. Ships in the open tramp market usually do not assume cargo handling costs, as they are chartered on FIOST conditions; but when ships are on regular service, the ship owner includes in the freight the cost of handling the goods for loading and unloading. In the regular services, most voyage costs behave in practice as fixed, since the ships repeat itineraries and ports of call; port costs and canal tolls, as well as bunker costs, thus become fixed, and the cargo handling costs are the only variable costs.

In any case, a brief analysis of the above cost structure shows that a very important part of the fixed costs is a direct function of the building cost of the ship (through depreciation, interest, and insurance costs), though the rest depends on multiple factors of diverse types. On the other hand, the main components of the voyage costs, which can be grouped in two big items (bunker consumption during the navigation time and costs produced during the stay in port), depend to a great extent on the speed of the ship, the price of fuel, and the time of stay in port.
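The dependence of the fuel bill on speed can be sketched with the usual cube-law rule of thumb. The exponent and the illustrative inputs below are assumptions, not figures from the text:

```python
def consumption_at_speed(base_tpd, base_speed_kn, speed_kn):
    """Cube-law rule of thumb: daily fuel consumption scales roughly
    with the cube of speed. An approximation, not an exact physical law."""
    return base_tpd * (speed_kn / base_speed_kn) ** 3


def voyage_fuel_bill(distance_nm, speed_kn, base_tpd, base_speed_kn, price_usd_per_t):
    """Fuel bill for one leg: days at sea times daily consumption times bunker price."""
    days = distance_nm / (speed_kn * 24.0)
    return days * consumption_at_speed(base_tpd, base_speed_kn, speed_kn) * price_usd_per_t
```

Since days at sea scale with 1/v and daily consumption roughly with v³, the fuel bill of a leg scales with v²: this is the arithmetic behind slow steaming, discussed later in the text.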
So the most important factors of maritime transport costs show something simple, but often forgotten: time of navigation, time in port, and cargo handling costs constitute the basic framework of costs of the shipping economy. All this rests on a few technical cost factors (speed, specific consumptions, general arrangement of the ship, and cargo handling systems, on board and in port, including infrastructure) and a few economic factors on which, once the ship is in operation, action is difficult, impossible, or in any case very limited (fixed costs of the ship, fuel prices, port costs, cargo handling costs). All of them outline the final cost structure of maritime transport. Schematically, these technical and economic factors are summarized in Table 2.

As far as the economic factors are concerned, it must be clear that ship owners have very few possibilities of modifying them to act on the evolution of their costs. The factors on which those costs depend are, fundamentally, the price of acquiring the ship, its financing conditions, and a series of factors completely beyond the ship owner's influence, given that they are imposed by the markets for crude oil and marine fuels, by the ports, etc. Only on their fixed costs of operation (crew, maintenance and repairs, insurance, etc.) can ship owners act, and even then with numerous limitations; the rest of the costs escape their action, and they can do nothing to control them.

And as for the technical factors, it is quite clear that, apart from eventual alterations in the general arrangement of the ship to adapt it better to the traffic, giving it some specific cost-saving system or a productivity improvement (such as the installation of new cranes to achieve a better performance of cargo handling operations), from a practical point of view the ship owner can only modify his costs by altering the service speed; in this respect, slow steaming is good proof of it.

In any case, it can be seen that probably the main specific characteristic of the costs of maritime transport is their remarkable inflexibility, which is one of the major difficulties in facing the traditional crises of the freight market.

During the last years, and up to the great economic crisis begun in the middle of 2007 and generalized the following year, affecting the whole international economy, the freight market, basis of globalization, grew to levels unseen for many years, with values of the indexes that had never been reached before. In this context, during these years of prosperity, the costs, nearly all the costs, grew very remarkably. And the ship owners, more attentive to the freight market than to their internal costs, did not fight against this problem, so important for them.

First of all, the rise of fuel prices is well known, and their evolution during recent years was really devastating for the shipping economy: the 380 cSt fuel oil, whose price in Rotterdam in 2003 was around 150 US dollars per metric ton, reached 720 US dollars per metric ton in 2008, when the price of a barrel of crude oil reached 146 US dollars. And though the pressure later diminished, we are now in a new upward stage, with fuel oil prices that at the beginning of January this year have surpassed 500 US dollars per metric ton; the bunker invoice thus continues to be very high, and the big container-ship lines have reduced their service speed, incorporating new vessels in their lines, while the oil tanker operators are also considering reducing their speeds and returning to slow steaming.

On the other hand, the evolution of the Euribor during the last years has become another important factor in the increase of costs, in this case the capital costs. In fact, the average of the one-year Euribor, which in 2003 was 2.34%, was 3.44% in 2006 and 4.45% in 2007, and in 2008 it went beyond 5.50%.
That led, between 2003 and 2008, to increases in interest costs close to 130%, which translated into increased cash-flow needs to service the debt of between 12 and 13%. Fortunately, the fall of interest rates since the generalization of the crisis has again reduced the capital costs, which are now even below those existing in 2003, in spite of the slight recovery of the Euribor during recent months. Anyhow, at the end of this year, or maybe during the following one, substantial increases of interest rates will be seen once more, given that as soon as the economies recover their production pace, the danger of inflation will have to be attacked by means of new interest rate rises.

But it has not been only a problem of fuel or financial costs. Other operating costs, mainly crew, maintenance and repairs, supplies, and insurance, have followed the same upward path. The year 2007 was particularly sensitive to these increases: according to Moore Stephens (7), the average increase of crew costs was over 10%, though for some types of ships the figures were above that percentage; namely, the container ships saw a 20% increase in this item. Costs of supplies also experienced important increases, over 16%, though below the figure of the previous year, which reached a 20% increase. As far as maintenance and repairs are concerned, the average increase in 2007 was 12%, though with differences among the different types of ships. And the insurance costs also rose, experiencing an average 7% increase. Globally, the average increase of the operating costs of the fleet through the year 2007 was 11.2%.

But it has not only been a matter of one year. Between 2003 and 2008, the years of the freight market boom, the running costs of operation of the ships (crew, maintenance and repairs, insurance, etc.)
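The order of magnitude of these figures can be checked with the Euribor averages quoted above and a deliberately simplified, hypothetical loan; the principal, the repayment schedule, and the lending margin below are all assumptions:

```python
EURIBOR_2003 = 0.0234   # average one-year Euribor quoted in the text
EURIBOR_2008 = 0.0550   # "beyond 5.50%" in 2008


def interest_cost_increase(margin=0.0):
    """Relative increase of interest cost between 2003 and 2008 for a loan
    priced at Euribor plus a lending margin (the margin is an assumption)."""
    return (EURIBOR_2008 + margin) / (EURIBOR_2003 + margin) - 1.0


def debt_service(rate, principal=60.0, repayment_years=10):
    """Hypothetical loan (60 M USD, 10 equal principal repayments): annual debt
    service as principal instalment plus interest on the average outstanding
    balance -- a deliberate simplification of a real amortization schedule."""
    return principal / repayment_years + principal / 2.0 * rate


cash_flow_increase = debt_service(EURIBOR_2008) / debt_service(EURIBOR_2003) - 1.0
```

With a zero margin the interest increase comes out at about 135% and the cash-flow impact around 14%, slightly above the "close to 130%" and "12 to 13%" quoted in the text; a modest positive margin over Euribor narrows both gaps.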
endured very important increases. The following table shows the representative indexes of operating costs of bulk carriers, oil tankers, and container ships between 2000 and 2008, as well as the year-on-year percentage variations (also according to Moore Stephens).

Evolution of costs

As noted, the total increase of operating costs during only eight years has been very important: namely 72% for bulk carriers and 84% for oil tankers. The container ships, whose statistics only cover six years, saw their costs grow by 73%. These percentages correspond to annual accumulative growth rates of between 7.0% and 9.6%, absolutely unbearable figures except in a freight market boom; otherwise, they can collapse the economies of a great majority of ship owners.

What is the reason for such an important rise in costs? Simply that, at the same pace as the growth of the economies of most countries, the prices of raw materials also rose in an immoderate way, so that not only the fuels, whose multiplier effect on prices is clear and important, but also aluminum, copper, nickel, silver, etc., that is to say, the main commodities of international trade, were multiplying their prices and their influence on the world economy. In particular, the evolution of the prices of coal, iron ore, and steel has had much relevance in this explosion of price increases.

Actually, the behaviour of the main commodities during these years was very much like that of the BDI (Baltic Dry Index), which is considered to be the most suitable measure of the evolution of freights for dry bulk goods on the world market. A remarkable correlation can be observed, according to Cotzias, between the BDI and the CRB Reuters-Jefferies Index, considered one of the most accurate indexes of raw materials on the international trade market.
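The annual accumulative rates quoted can be recovered from the total growth figures with the usual compound-growth formula:

```python
def annual_growth_rate(total_growth, years):
    """Annual accumulative (compound) growth rate implied by a total
    growth over a number of years: (1 + g_total)^(1/years) - 1."""
    return (1.0 + total_growth) ** (1.0 / years) - 1.0
```

Growth of 72% and 84% over eight years gives about 7.0% and 7.9% per year, and 73% over six years about 9.6%, matching the 7.0% to 9.6% range stated above.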
On the other hand, it is certainly interesting to observe the evolution of the total fixed costs of operation of ships (capital costs plus operating costs) throughout the last 20 years, according to Clarksons (information presented by Stopford at the end of 2010). These calculations were made with capital costs computed on a 20-year depreciation period, interest at the Libor rate, and all figures referred to the price of a newbuilding in every month of the year.

The information corresponding to three types of ships (VLCC oil tanker, Capesize bulk carrier, and 3,300 TEU container ship) is illustrated in the lower part of the graphs, which show in US$/day the total cost of each type of ship against the curve of the corresponding income (TCE), set aside for the moment. In these figures we can see that between 2003 and 2008, the years of the freight boom, the total costs per day of the ships grew between 100% (3,300 TEU container ship) and 200% (VLCC oil tanker and Capesize bulk carrier), to a great extent because, besides the rise of the operating costs, the capital costs (see Note 1) also suffered an important increase.

The costs of operating a ship, which, as explained above, can be divided into fixed costs and voyage costs, have experienced remarkable growth, especially important since 2003. Let us summarize their evolution.
The capital costs depend mainly, as explained before, on the purchase price of the ship. This is a parameter closely correlated with the evolution of the freight indexes. In the years of the last boom, astronomic prices were paid not only for new ships or ships still being built in the shipyard (between 50% and 100% above the shipyard price for a newbuilding); in addition, in the second-hand market, prices reached figures higher than those corresponding to newbuildings (bulk carriers 5 and 10 years old and some 5-year-old oil tankers), and even old ships of 20 years of age were sold at prices between 75% and 80% of the newbuilding prices of those dates. We refer to the first half of the year 2008. This was possible because the market was paying extraordinarily high freights which allowed a very quick recovery of the capital, in spite of the fact that the old ships required a much higher annual income than the new ones to meet the capital payments (repayment of principal and interest on the debt). The information included has been taken from Compass (3) and corresponds to October 2007, but figures were even higher in the first half of 2008.

Evidently, the fall of the market has been a hard blow for the ship owners, many of whom have not been able to withstand the new conditions; some have been forced to sell their ships at big losses and others have even disappeared. Banking has also suffered much from the problem, as the financial basis of the investments, and many financial institutions have been immersed in the crisis. Certainly, the fall of the market has also brought better ship prices, though their financing has become much more difficult to obtain and the number of operations has been reduced.
Anyhow, every ship is a particular case, and although the general evolution of prices has been as indicated, the fact is that every ship owner has acquired ships at different prices, at different moments, and with different financing conditions.

…gross freight [source: Charles R. Weber (2)], and the TCE is below 10,000 US$/day, which means that the ship does not cover its daily running costs. The fact is that the price of a metric ton of fuel oil already stands at 510 dollars (see Note 2).

Where are we going in the present circumstances, with increasing costs and an important reduction of freight rates? I think we are again going to live through something already experienced in other past crises, which led the maritime sector to situations of real distress, with significant economic losses, lay-up of vessels, scrapping, cancellation of building contracts, unemployment in shipyards, etc., circumstances that we have already been seeing for more than two years.

Fig. 1. Evolution of Index of Commodities and Baltic Dry Index
Fig. 6. Evolution of crude oil price and bunker price in Rotterdam
Table 1. Cost structure of maritime transport
Table 2. Technical and economic factors of maritime transport
Table 3. Evolution of running costs of operation of ships
Table 4. Newbuilding and second-hand prices of ships in October 2007

Notes:
1. Detailed calculations with standard international market data on capital costs for vessels with prices ranging from 40 to 120 million dollars (29 to 87 million euro), with external financing and straight-line depreciation to a residual value of 10% of the price of the ship after a useful life of 20 years, lead to the estimates of capital costs and payments shown schematically in the corresponding figure, which gives the average capital cost, the average capital payment, and the highest capital payment (corresponding to the first year after the purchase) in dollars per day for the different purchase prices. This note is taken from Reference 9.
2. One year later (first quarter of 2012), the price per metric ton was about 200 dollars higher, and the tendency continues.

Reference:
POLO. Capital Costs vs. Cash Flow: Keys to a better understanding of the freight market, Infomarine, No. 190, October 2011.
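The capital-cost calculation described in Note 1 can be sketched as follows. The treatment of interest on the average capital employed is a simplification of detailed amortization schedules, and the 5% rate in the example is an assumption:

```python
def capital_cost_per_day(price_musd, interest_rate,
                         life_years=20, residual_frac=0.10):
    """Sketch of the Note 1 calculation: straight-line depreciation to a 10%
    residual value over a 20-year useful life, plus interest charged on the
    average capital employed over that life (a simplification)."""
    depreciation = price_musd * (1.0 - residual_frac) / life_years   # M USD/year
    avg_capital = price_musd * (1.0 + residual_frac) / 2.0           # M USD
    annual_cost = depreciation + avg_capital * interest_rate         # M USD/year
    return annual_cost * 1e6 / 365.0                                 # USD/day
```

Under these assumptions, a 40 M USD ship financed at 5% carries roughly 7,900 USD/day of capital cost, and a 120 M USD ship roughly 23,800 USD/day; both figures scale linearly with the purchase price, which is why the purchase price dominates the capital costs.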
Somatosensory Stimulation With XNKQ Acupuncture Modulates Functional Connectivity of Motor Areas

Xingnao Kaiqiao (XNKQ) acupuncture is an acupuncture technique used for stroke patients. In 24 healthy volunteers, we applied this complex acupuncture intervention, which consists of manual needle-stimulation on five acupuncture points (DU26 unilaterally, PC6, and SP6 bilaterally). XNKQ was compared to three control conditions: (1) insertion of needles on the XNKQ acupuncture points without stimulation, (2) manual needle-stimulation on five nearby non-acupuncture points, and (3) insertion of needles on the non-acupuncture points without stimulation. In a within-subject design, we investigated functional connectivity changes in resting-state functional magnetic resonance imaging (fMRI) by means of the data-driven eigenvector centrality (EC) approach. With a 2 × 2 factorial within-subjects design with the two factors stimulation (stimulation vs. non-stimulation) and location (acupuncture points vs. non-acupuncture points), we found decreased EC in the precuneus after needle-stimulation (stimulation < non-stimulation), whereas the factor location showed no statistically significant EC differences. XNKQ acupuncture compared with needle-stimulation on non-acupuncture points showed decreased EC primarily in subcortical structures such as the caudate nucleus, subthalamic nucleus, and red nucleus. Post-hoc seed-based analysis revealed that the decrease in EC was mainly driven by reduced temporal correlation to primary sensorimotor cortices. The comparison of XNKQ acupuncture with the other two (non-stimulation) interventions showed no significant differences in EC. Our findings support the importance of the stimulation component of the acupuncture intervention and hint toward the modulation of functional connectivity by XNKQ acupuncture, especially in areas involved in motor function.
As a next step, similar mechanisms should be validated in stroke patients suffering from motor deficits.

ClinicalTrials.gov ID: NCT02453906
BACKGROUND

The acupuncture procedure consists of at least two components: stimulation and point location (Nierhaus et al., 2016; Langevin and Wayne, 2018). According to Traditional Chinese Medicine (TCM) theory, different types of stimulation provide different clinical effects. The stimulation of the needle is mostly accompanied by a needle sensation that is called "deqi" in Chinese Medicine (Kong et al., 2007; Pach et al., 2011). Stimulation that elicited deqi was shown in a PET study to increase blood flow in the hypothalamus, insula, and subcortical structures compared with minimal or non-stimulation after needle insertion (Hsieh et al., 2001). Another study showed that acupuncture impacted selective attention networks, enhancing the efficiency of the alerting and executive control networks, and that acupuncture had a significantly greater effect on the alerting network compared to painful stimulation (Liu et al., 2013). Therefore, the subjective quality and the intensity of the stimulation seem to have an impact on the observed changes in brain activity (Hui et al., 2005, 2009; Huang et al., 2012). The role of location or point specificity in acupuncture is still controversial (Choi et al., 2012; Langevin and Wayne, 2018) and might depend on whether this role is evaluated (i) from the perspective of acupuncture with its concept of meridians and extra points or (ii) from the perspective of modern anatomy and physiology, which can take into account dermal, muscular, and neural components as well as connective tissue and chemical aspects (Nierhaus et al., 2016). In a previous trial of our group, point-specific cerebral responses were shown for one acupuncture point (ST36) in comparison to two control locations (Nierhaus et al., 2015b; Long et al., 2016).
Numerous neuroimaging studies that evaluated the impact of acupuncture on the brain hinted toward specific brain activity and functional connectivity changes due to acupuncture (Dhond et al., 2007;Huang et al., 2012;Chae et al., 2013). Most of these studies either investigate manual acupuncture only on one single point or investigate the effects of electroacupuncture (Huang et al., 2012). The latter applies an electrical current between acupuncture needles inserted into the skin and is therefore easier to evaluate due to its better potential for blinding, standardization, and easier simultaneous multipoint application. However, in most clinical settings (in China and the West), manual needle-stimulation on multiple acupuncture points is applied. Xingnao Kaiqiao (XNKQ) acupuncture is a semi-standardized manual acupuncture technique developed in Tianjin, China, by Professor Shi Xuemin using a specific set of acupuncture points and strong needle-stimulation for different neuropathological conditions such as acute and chronic stroke symptoms (Shi, 2013) and multiple sclerosis. In a clinical setting, it was shown that stroke patients suffering from motor deficits react especially well to XNKQ acupuncture (杜蓉 et al., 2015). This suggests an impact of XNKQ on central mechanisms. However, modulation of brain activity following XNKQ acupuncture has not yet been fully investigated. In the present study, we aimed to evaluate the impact of XNKQ acupuncture (and its components "stimulation" and "point location") on resting-state functional MRI connectivity. For this, we developed a 2 × 2 design that varied the stimulation component (stimulation vs. non-stimulation) and the location component of acupuncture (acupuncture points vs. non-acupuncture points). 
We applied data-driven eigenvector centrality mapping (ECM) to evaluate functional connectivity differences for the stimulation component (comparison of stimulated and non-stimulated acupuncture conditions) as well as for the point location component (comparison of conditions with acupuncture points and non-acupuncture points). We assumed that it would be possible to detect a difference between stimulated and non-stimulated acupuncture conditions, as well as between acupuncture conditions that used the established acupuncture point locations according to Chinese medicine and conditions that used non-acupuncture point locations. Moreover, we hypothesized that XNKQ acupuncture would differ from the three control conditions.

Subjects

We studied 24 healthy volunteers between 18 and 36 years of age (26.1 ± 4.3 years (SD); 12 females). They gave written informed consent to participate in the experiment according to the Declaration of Helsinki. The ethics committee of Charité - Universitätsmedizin Berlin approved the study (Ethics No EA1/338/14) and the study was registered (ClinicalTrials.gov NCT02453906). Prior to participation, all volunteers underwent a clinical neurological examination and confirmed that they were not taking any medications for acute or chronic diseases.

Design

In a 2 × 2 factorial within-subject design, we evaluated four different interventions in four different sessions (four different days with at least 24 h between each) in a random order: (a) XNKQ acupuncture as manual needle-stimulation on five acupuncture points (DU26 unilaterally, PC6 and SP6 bilaterally); (b) insertion of needles on the five XNKQ acupuncture points without stimulation; (c) manual needle-stimulation on five nearby non-acupuncture points; and (d) insertion of needles on five non-acupuncture points without stimulation. Each intervention had a duration of 5 min and was applied in the MRI scanner room.
Resting-state fMRI was acquired before and after each intervention to compare changes (post minus pre) in functional connectivity between interventions. The break between pre- and post-resting-state scans was about 10 min. Before informed consent, subjects were informed that they would receive acupuncture with five needles at the lower leg, the forearm, and above the upper lip on four separate days, on two days with stimulation of the needles and on the other two without. Subjects were blinded regarding the point specificity (acupuncture points vs. non-acupuncture points).

Point Locations

The following acupuncture points were used: PC6 (nei guan, bilateral), DU26 (ren zhong, unilateral), and SP6 (san yin jiao, bilateral) (Shi, 2002, 2013), see Figure 1. DU26 is located at the junction of the upper 1/3 and middle 1/3 of the philtrum. PC6 is located 2 cun above the transverse crease of the wrist, between the tendons of the radial wrist flexor and palmaris longus. SP6 is located 3 cun above the tip of the medial malleolus, behind the posterior border of the medial aspect of the tibia (Shi, 2002). The control points have been used in an earlier study on XNKQ acupuncture (李筱媛 and 李军, 2009) and were chosen after a discussion process with acupuncture experts. They are not located on a meridian or above a main nerve and are within a radius of 2.5 cm of the respective acupuncture point (Figure 1). Control point 1 (control of PC6) is located laterally to PC6, between the lung meridian of hand-taiyin and the pericardium meridian of hand-jueyin (李筱媛 and 李军, 2009). Control point 2 (control of DU26) is located on the vertical line of the mouth, left-horizontal to DU26 (李筱媛 and 李军, 2009). Control point 3 (control of SP6) is located six cun above the tip of the medial malleolus, between the spleen meridian of foot-taiyin and the liver meridian of foot-jueyin.
It is three cun above the tip of the medial malleolus, in front of the inner border of the tibia, 1.25 cun in front of SP6 (李筱媛 and 李军, 2009).

Acupuncture Procedures

The acupuncture was performed while the subjects lay in a supine position on the scanner bed, by an acupuncturist from Tianjin University of Traditional Chinese Medicine (YC) trained for 12 years and with 8 years of clinical experience. The acupuncturist was trained directly by the developer of XNKQ acupuncture, had long-time experience in applying XNKQ acupuncture to patients in China, and was also familiar with German study settings. For the acupuncture, sterile, single-use, individually wrapped acupuncture needles (0.20 × 30 mm; titanium, DongBang Acupuncture, Inc., Boryeong, Korea) were used. For the XNKQ acupuncture, PC6 was punctured bilaterally to a depth of 0.5-1.0 cun and stimulated with the reducing method by lifting and thrusting with simultaneous twirling manipulation for 1 min (twirling anticlockwise with the left hand and clockwise with the right hand). After this, DU26 was punctured obliquely toward the nasal septum to a depth of ∼0.3-0.5 cun with bird-pecking needling until the eyes became wet or developed tears. Subsequently, SP6 was punctured on both sides obliquely along the medial border of the tibia to a depth of ∼0.5-1.0 cun, with lifting and thrusting reinforcing manipulation (thrusting with heavy strength and lifting with gentle strength) for 1 min. The needles were removed directly after stimulation [the "quick needles" technique (Shi, 2013)]. Control condition 1 consisted of the insertion of needles on the same five acupuncture points in the same order as used for XNKQ acupuncture (PC6 bilaterally, DU26 unilaterally, SP6 bilaterally), but without needle-stimulation.
Control condition 2 consisted of manual needle-stimulation on the five nearby non-acupuncture points (control point 1 bilaterally, control point 2 unilaterally, control point 3 bilaterally), identical to the XNKQ needle-stimulation. Control condition 3 consisted of the insertion of needles on the five non-acupuncture points (control point 1 bilaterally, control point 2 unilaterally, control point 3 bilaterally) without manual needle-stimulation. Needle sensation, as a proxy for deqi and pain sensation, was measured after each session with the Massachusetts General Hospital Acupuncture Sensation Scale [MASS, (Kong et al., 2007)].

MRI Data Acquisition

Before all measurements, participants were instructed to keep their eyes open and to stay relaxed. Data were acquired using a 3T Tim Trio Siemens MRI System (Siemens Medical, Erlangen, Germany) equipped with a 12-channel head coil. For resting-state fMRI images, we used a T2*-weighted echo planar imaging (EPI) sequence (37 axial slices, in-plane resolution = 3 × 3 mm, slice thickness = 3 mm, flip angle = 70°, gap = 0.3 mm, repetition time = 2,000 ms, echo time = 30 ms). A structural image was acquired for each participant using a T1-weighted MPRAGE sequence (repetition time = 1,900 ms, inversion time = 900 ms, echo time = 2.52 ms, flip angle = 9°, voxel size = 1 × 1 × 1 mm). Subjects' heads were immobilized by cushioned supports, and they wore earplugs to protect against MRI gradient noise throughout the experiment.

Resting-State fMRI Data Analysis

We removed the first ten volumes of each resting-state scan (RS_pre and RS_post for each subject and all four interventions) to account for adaptation of the participant to scanner noise and environment. We performed slice-time correction, head motion correction, and spatial normalization to MNI152 space with SPM12 (www.fil.ion.ucl.ac.uk/spm/). The REST toolbox (www.restfmri.net) was used for temporal band-pass filtering (0.01-0.08 Hz).
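The band-pass step can be illustrated with a minimal FFT-based filter. This is only an analogous sketch; the REST toolbox implements this step differently:

```python
import numpy as np


def bandpass(ts, tr=2.0, low=0.01, high=0.08):
    """Crude brick-wall band-pass: zero out all frequency bins outside
    [low, high] Hz. `ts` is a (T,) or (T, V) array of time courses sampled
    every `tr` seconds (tr = 2.0 s matches the EPI repetition time above)."""
    ts = np.asarray(ts, dtype=float)
    freqs = np.fft.rfftfreq(ts.shape[0], d=tr)   # bin frequencies in Hz
    spec = np.fft.rfft(ts, axis=0)
    spec[(freqs < low) | (freqs > high)] = 0.0   # discard out-of-band bins
    return np.fft.irfft(spec, n=ts.shape[0], axis=0)
```

A pure 0.05 Hz oscillation (inside the pass band) survives unchanged, while a constant offset (0 Hz) is removed entirely.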
We did not regress out the global mean signal, since this step might affect the correlation between time courses (Buckner et al., 2009; Lohmann et al., 2010; Fransson et al., 2011; Taubert et al., 2011). The anatomical T1-images were normalized to MNI152 space and then segmented into gray matter, white matter, and cerebrospinal fluid (CSF). Average masks were generated for gray matter, white matter, and CSF derived from the segmented T1 images of all subjects. Principal component analysis (CompCor) was done with the DPABI toolbox (toolbox for Data Processing & Analysis of Brain Imaging, http://becs.aalto.fi/~eglerean/bramila.html) within the CSF/white matter mask on the resting-state data (Behzadi et al., 2007). The first five principal components and six head motion parameters were used as nuisance signals to regress out associated variance. We did not apply spatial smoothing before the centrality analysis, as this could generate artificially high correlation coefficients (Zuo et al., 2012).

To compare differences in head motion across resting-state scans, we calculated the frame-wise displacement (FD) using BRAMILA tools (Power et al., 2012) (http://becs.aalto.fi/~eglerean/bramila.html). The average FD for all scans was examined with two two-factorial ANOVAs, including (a) the factors "session" (1-4) and "time" (pre-post) and (b) the factors "condition" and "time."

We used the data-driven ECM approach to characterize whole-brain functional connectivity without prior assumptions (Nierhaus et al., 2015a; Long et al., 2016; Antonenko et al., 2018). This graph-theoretical network approach quantifies the correlation of each voxel with all other voxels in the brain, aiming to identify how "central" (or prominent) a region is within the whole-brain network (Lohmann et al., 2010).
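Conceptually, eigenvector centrality assigns each voxel its entry in the dominant eigenvector of a shifted, non-negative voxel-by-voxel correlation matrix. The following matrix-free power iteration is a minimal sketch in the spirit of fastECM; the (C + 1)/2 shift and the iteration count are common choices assumed for illustration, not details taken from this paper:

```python
import numpy as np

def fast_ecm(data, n_iter=50):
    """Eigenvector centrality of the voxel-wise correlation matrix,
    computed matrix-free (in the spirit of fastECM, Wink et al., 2012).

    data : (n_voxels, n_timepoints) array of voxel time series.
    Uses the shifted matrix M = (C + 1) / 2, where C is the Pearson
    correlation matrix, so all entries are non-negative and the
    Perron-Frobenius eigenvector is well defined.  M @ v is evaluated
    as Z @ (Z.T @ v) plus a rank-one term, never forming C explicitly.
    """
    n_vox, n_t = data.shape
    # z-score each voxel's time series so that z @ z.T equals C
    z = data - data.mean(axis=1, keepdims=True)
    z /= z.std(axis=1, ddof=1, keepdims=True)
    z /= np.sqrt(n_t - 1)
    v = np.ones(n_vox) / np.sqrt(n_vox)
    for _ in range(n_iter):
        # (C + 1) v / 2  =  (Z (Z^T v) + sum(v) * ones) / 2
        v = 0.5 * (z @ (z.T @ v) + v.sum())
        v /= np.linalg.norm(v)
    return v
```

Avoiding the explicit n_voxels × n_voxels correlation matrix is what makes the approach feasible at whole-brain resolution, where that matrix would not fit in memory.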
For each individual resting-state scan, the EC map was generated within the gray matter mask using fastECM, which provides a more efficient way to perform the centrality analysis without calculating the voxel-wise correlation matrix (Wink et al., 2012). Z-standardization (i.e., for each voxel, subtracting the mean value of the whole brain and then dividing by the standard deviation of the whole brain) and 6 mm FWHM smoothing were performed on the individual ECM maps (Zuo et al., 2012; Yan et al., 2013). To evaluate the impact of the four different acupuncture conditions on EC, we analyzed the difference EC-maps (post minus pre) in a 2 × 2 flexible factorial design [correlated repeated measures (Gläscher and Gitelman, 2008)] with the factors "stimulation" (stimulation vs. non-stimulation of the needles) and "location" (acupuncture points vs. non-acupuncture points) within SPM12, with age, gender, and MASS index as covariates. For statistical analysis, we determined cluster-extent thresholds with Monte Carlo simulation (AlphaSim procedure) as implemented in Neuroelf version 1.1 (http://neuroelf.net), using a family-wise error (FWE) cluster-level correction of pFWE < 0.05.

The ECM analysis identifies brain regions with altered overall (whole-brain) connectivity; however, it does not show to which specific brain areas the connectivity has changed. A complementary seed-based functional connectivity analysis can be applied to characterize the "origin" of observed EC changes. The comparison of XNKQ with stimulation on non-acupuncture points revealed EC changes in subthalamic brain regions that are known to be involved in motor control, such as the subthalamic nucleus and red nucleus. To explore whether this result might be connected to cortical motor areas, we performed a complementary seed-based analysis.
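A seed-based map of this kind is simply the Pearson correlation of one region's mean time series with every gray-matter voxel. A minimal sketch (the function name is mine; a Fisher z-transform is often applied before group statistics, but that is a common follow-up step, not something taken from this paper):

```python
import numpy as np

def seed_connectivity(seed_ts, voxel_ts):
    """Seed-based functional connectivity map.

    seed_ts  : (n_timepoints,) mean time series of the seed region
               (e.g., a red nucleus or subthalamic nucleus mask).
    voxel_ts : (n_voxels, n_timepoints) gray-matter voxel time series.
    Returns the Pearson correlation of every voxel with the seed.
    """
    s = (seed_ts - seed_ts.mean()) / seed_ts.std()
    v = (voxel_ts - voxel_ts.mean(axis=1, keepdims=True)) \
        / voxel_ts.std(axis=1, keepdims=True)
    return v @ s / len(seed_ts)          # one r value per voxel
```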
As two examples, we chose the red nucleus and subthalamic nucleus as seed regions: for all resting-state scans, we calculated the temporal correlation between the time series of the seed region and all other voxels within the gray matter mask. The differences in the resulting correlation maps were analyzed in a 2 × 2 flexible factorial design with the factors "stimulation" and "location", with age, gender, and MASS index as covariates (similar to the ECM analysis). It should be noted that no formal statistics were applied in this step (which would constitute double dipping); rather, it was used to identify the most affected connections, which contributed most to the statistically significant effect of the ECM analysis. For visualization of the most prominent clusters, we used voxel-wise whole-brain FWE correction with peak-level p < 0.05.

Head Motion
There was no significant difference in head motion (mean FD) across all resting-state scans (4 pre- and 4 post-acupuncture scans), for either the comparison between the four sessions or the comparison between the four acupuncture conditions (all p > 0.46). Over all 8 scans, the mean FD was 0.14 ± 0.06 mm [mean ± std] and the average percentage of volumes exceeding an FD-threshold of 0.5 mm was 1.9 ± 2.8% [mean ± std].

Further evaluation of the four different conditions showed that the decreased eigenvector centrality in the precuneus was driven by the stimulation of non-acupuncture points (non-points with stimulation, npws). It was significant for the stimulation of non-acupuncture points compared to the two non-stimulation conditions (npws vs. points non-stimulation, pns, and npws vs. non-points non-stimulation, npns), and it was not significant when XNKQ was compared to the two non-stimulation conditions (XNKQ vs. pns and XNKQ vs. npns). However, the comparison of the two stimulation conditions (XNKQ vs. npws) also showed no significant difference in the precuneus.
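The FD values summarized above follow the Power et al. (2012) definition: the sum of the absolute frame-to-frame differences of the six rigid-body realignment parameters, with the three rotations converted to millimetres as arc length on a 50 mm sphere. A sketch (the column ordering and the radians-for-rotations convention are assumptions about the realignment output, not stated in the text):

```python
import numpy as np

def framewise_displacement(params, radius=50.0):
    """Frame-wise displacement (Power et al., 2012) from 6 motion parameters.

    params : (n_volumes, 6) array; columns 0-2 are translations in mm,
             columns 3-5 rotations in radians (assumed ordering).
    Rotations are converted to mm as arc length on a sphere of
    `radius` mm (50 mm by convention).
    """
    diffs = np.abs(np.diff(params, axis=0))
    diffs[:, 3:] *= radius                       # radians -> mm on the sphere
    return np.concatenate([[0.0], diffs.sum(axis=1)])   # first volume: FD = 0
```

The fraction of volumes exceeding the 0.5 mm threshold then follows directly, e.g. `(fd > 0.5).mean() * 100`.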
The factor location showed no statistically significant difference in eigenvector centrality. Additionally, no effect was found for the comparison of the non-stimulation conditions (pns vs. npns).

DISCUSSION
To elucidate cerebral effects of manual acupuncture using more than one acupuncture point, we applied XNKQ acupuncture and three control conditions in a neuroimaging study in healthy subjects. In a 2 × 2 factorial within-subject design, we investigated the impact of the factors stimulation and location, and of XNKQ acupuncture specifically, on resting-state functional connectivity. While the factor location appears to have no significant effect on centrality, we found decreased eigenvector centrality in the precuneus for the factor stimulation. This result was driven by the stimulation of non-acupuncture points, as the comparison of XNKQ acupuncture with the two non-stimulation interventions showed no significant differences. However, when comparing XNKQ acupuncture with manual needle-stimulation on non-acupuncture points, we found significantly decreased functional connectivity for areas involved in motor function. Our results support the assumptions that (1) needle-stimulation drives the cerebral effects, (2) point location only impacts connectivity when the acupuncture points are stimulated, and (3) XNKQ acupuncture, as a complex form of acupuncture, modulates functional connectivity in motor areas minutes after the acupuncture.

Our study design has strengths and weaknesses that should be considered when interpreting the results. With our factorial within-subject design, including a relatively large number of healthy subjects measured on four separate days, we were able to separate the factors stimulation and location, as well as to reduce variance and carry-over effects. The design we chose aimed at a study question relevant for the understanding of clinical acupuncture, which primarily uses manual needle-stimulation on more than one point.
So far, numerous imaging studies have evaluated only one-point acupuncture and/or applied electro-acupuncture (Huang et al., 2012; Chae et al., 2013). Such a setting might be better suited for standardization and blinding but does not represent clinical acupuncture in usual care settings. Although in our study design only five locations were acupunctured, for a relatively short time interval (hence still not fully representing the clinical setting), we were able to observe cerebral changes that illustrate the impact of a complex acupuncture, such as XNKQ acupuncture, on the brain.

Because the duration of the sustained effect of acupuncture is not known, we do not know whether pausing the intervention for at least 24 h is sufficient to avoid carry-over effects. However, the order of interventions was randomized to minimize the risk of a systematic impact. We chose a design that can evaluate rapid effects on resting-state functional connectivity, which are observable after the intervention, but not the instant evoked responses of the different acupuncture conditions. This design decision may decrease the sensitivity to identify differences between the conditions. For the evaluation of instant effects, an event-related design with needling during the scanning phase would have been necessary. However, this would be much more difficult to achieve, especially when evaluating manually stimulated acupuncture on multiple acupuncture points.

Although we included a relatively large number of subjects, the sample size might be too low to show robust effects, and the level of statistical significance we chose was liberal. For future studies with a similar design, an even larger sample size might be recommended, especially for the evaluation of effects in patients. Based on our findings, it is now possible to evaluate a hypothesis-driven approach, which might create more robust results in contrast to the data-driven approach we chose as the primary analysis.
In our study, we included only healthy subjects, for an easy-to-standardize setting in which to understand the neurophysiology of the different acupuncture conditions. However, XNKQ acupuncture is usually only applied in a clinical setting, for patients with neurological deficits such as multiple sclerosis or stroke, as part of a multi-component intervention that also includes physiotherapy. Therefore, it is possible that effects in healthy subjects differ from effects expected in patients. However, a study on patients would have created more variance and would have been more prone to bias, which is not ideal as a first step.

Only the subjects were blinded to the applied acupuncture conditions, as were the researchers analyzing the data during the first stages of the analyses. The acupuncturist applying the manual acupuncture could not be blinded to the different conditions, and this might have had an effect on needle-stimulation. However, we measured needle sensation as a proxy for stimulation strength and included it in our statistical model. The choice of control points for an acupuncture study is very challenging, because it is still not clear what constitutes an acupuncture point, and it is difficult to combine the traditional concept of acupuncture with modern anatomy (Nierhaus et al., 2016; Langevin and Wayne, 2018). Therefore, it is possible that the control points chosen for our study were not inert, either from the perspective of acupuncture or from the perspective of anatomy.

To our surprise, we found no significant differences between XNKQ and the two non-stimulated acupuncture conditions. This means that the main effect that we found in the precuneus for the factor "stimulation" is driven by the needle-stimulation on non-acupuncture points. However, the comparison between the two stimulated acupuncture conditions (XNKQ vs. stimulation on non-acupuncture points) revealed a significant difference, mainly in subcortical regions, that is not observed in the other comparisons.
It seems that the stimulation of acupuncture points (XNKQ) induces subcortical connectivity changes (decreased centrality) that are opposite to the connectivity changes induced by needle-stimulation of "neutral" non-acupuncture points. This result supports the view that both "stimulation" and "point location" contribute to the acupuncture effect. Other studies have also shown that acupuncture can affect functional connectivity of brain networks such as the default mode network (DMN) or sensorimotor network in pain, stroke, or mental conditions (Dhond et al., 2008; Bai et al., 2009; Hui et al., 2009; Chae et al., 2013; Napadow et al., 2013; Liang et al., 2014; Zhao et al., 2014; Deng et al., 2016). Numerous studies have shown that the precuneus (as part of the DMN) is frequently affected by acupuncture (Chae et al., 2013; Nierhaus et al., 2015b). So far, the specific role of the precuneus is not fully understood; for pain, however, it might be involved in the assessment and integration of painful stimuli (Goffaux et al., 2014). The reduced centrality that we found for the precuneus in resting state after needle-stimulation might hint toward such cerebral processing induced by the strong and (sometimes) painful stimulation.

Functional connectivity is regularly affected by stroke (Grefkes and Fink, 2011; Rehme and Grefkes, 2013; Baldassarre et al., 2016; Almeida et al., 2017), and brain imaging studies have revealed functional brain reorganization in relation to recovery (Schaechter, 2004; Almeida et al., 2017). In a stroke mouse model, it could be shown that multisensory input can improve functional recovery and resting-state functional connectivity after stroke (Hakon et al., 2018). Acupuncture can be regarded as a complex somatosensory input with needle-stimulation. According to a study by Li et al., both acupuncture and somatosensory stimuli to the contralesional side produce hyperactivation in the ipsilesional primary sensorimotor cortex and SII (Dhond et al., 2007).
A study by Schaechter et al. (2007) revealed that after acupuncture intervention (verum or sham), patients exhibited changes in motor cortex activity associated with the stroke-affected hand that were positively correlated with changes in somatosensory-motor function of the affected upper limb. There was a trend toward greater increases in motor cortex activity in patients treated with verum acupuncture than with sham acupuncture (Dhond et al., 2007). XNKQ is an acupuncture technique specially designed for different neuropathological conditions, such as acute and chronic stroke symptoms (Shi, 2013), and moreover seems to have an impact on patients suffering from motor deficits. Therefore, our results are well in line with the existing literature and support the assumption that XNKQ affects the motor system.

Our data-driven analysis (ECM) showed that XNKQ acupuncture affects functional connectivity of subcortical areas (e.g., the red nucleus and subthalamic nucleus) that are known to be involved in motor function (Milardi et al., 2016). This is supported by our complementary seed-based analysis, which showed reduced functional connectivity between the seed regions and primary sensorimotor areas after XNKQ acupuncture. This reduced functional connectivity in the motor system may allow for a better reorganization during recovery from motor deficits in stroke. Of course, it remains to be proven whether this can be translated to stroke patients.

CONCLUSION
Our findings support the importance of the stimulation component of the acupuncture intervention and hint toward the modulation of functional connectivity by XNKQ acupuncture, especially in areas involved in motor function. As a next step, similar mechanisms should be validated in stroke patients suffering from motor deficits.

DATA AVAILABILITY
The datasets generated for this study are available on request to the corresponding author.

AUTHOR CONTRIBUTIONS
TN, YC, CW, and DP conceived and designed the experiments.
TN, YC, BL, and DP performed the trial. TN and DP analyzed the data. TN, DP, YC, and CW wrote the first draft of the paper. TN, YC, BL, XS, MY, CW, and DP discussed the data, revised the paper, and approved the final version.
VIP-2 at LNGS: An experiment on the validity of the Pauli Exclusion Principle for electrons

We are experimentally investigating possible violations of standard quantum mechanics predictions in the Gran Sasso underground laboratory in Italy. We test with high precision the Pauli Exclusion Principle and the collapse of the wave function (collapse models). We present our method of searching for possible small violations of the Pauli Exclusion Principle (PEP) for electrons, through the search for anomalous X-ray transitions in copper atoms. These transitions would be produced by new electrons (brought inside the copper bar by a circulating current) which could undergo a Pauli-forbidden transition to the 1s level already occupied by two electrons. We describe the VIP2 (VIolation of the Pauli Exclusion Principle) experimental data taking at the Gran Sasso underground laboratories. The goal of VIP2 is to test the PEP for electrons in agreement with the Messiah-Greenberg superselection rule with unprecedented accuracy, down to a limit on the probability that the PEP is violated at the level of 10^-31. We show preliminary experimental results and discuss implications of a possible violation.

1. Introduction
W. Pauli discovered the famous Exclusion Principle named after him, which explained the periodic table of the elements [1,2] (Nobel Prize in 1945). The Pauli Exclusion Principle is one of the most important rules of nature, and it has many consequences related not only to the periodic system of the elements but also to the stability of matter, the existence/stability of neutron stars, and many other phenomena. We know that the Pauli Exclusion Principle (PEP) is itself a consequence of the spin-statistics theorem, which divides nature into fermionic and bosonic systems.
In spite of all efforts, no simple intuitive explanation for the PEP could be given; however, several proofs of the spin-statistics relation (Pauli Exclusion Principle) based on complicated arguments can be found in the literature [3,4]. The proof by Lüders and Zumino [4] is based on a clear set of assumptions:
- Invariance with respect to the proper inhomogeneous Lorentz group
- Two operators of the same field at points separated by a spacelike interval either commute or anticommute (locality)
- The vacuum is the state of lowest energy
- The metric of the Hilbert space is positive definite
- The vacuum is not identically annihilated by a field
If at least one of these assumptions is invalid, then a violation of the Pauli Principle would be possible. There are also theoretical attempts to accommodate PEP violations; some recent theoretical studies can be found in refs. [5,6].

Experimental tests of the Exclusion Principle
We know the PEP seems to be fulfilled to a high degree, since no violations have been found up to now. However, due to the outstanding importance of the PEP in physics, experimental investigations have been performed on many different systems: atomic transitions, nuclear transitions, nuclear reactions, anomalous atomic structure, anomalous nuclear structure, statistics of neutrinos, astrophysics, and cosmology. The different experimental approaches of PEP tests are based on various assumptions. According to S. Elliott [7], these experiments need to be distinguished in relation to the Messiah-Greenberg superselection rule [8]. This rule states that the exchange symmetry of a steady state is constant in time. As a consequence, the symmetry of a quantum state can only change if a particle which is new to the system interacts with the state. Some experiments investigating Pauli violation in stable states resulted in remarkable upper bounds for violation [9,10,11,12].
However, there is the caveat that in these experimental cases the Messiah-Greenberg superselection rule is not obeyed, meaning that one is testing another fundamental rule, i.e. the stability of particles (e.g. electron decay [13]). A pioneering experiment was performed by Ramberg and Snow [14], who searched for Pauli-forbidden X-ray transitions in copper after introducing "new" electrons to the system. The concept is based on the assumption that an electric current running through a copper conductor represents a source of electrons which are "new" to the system of copper atoms of the conductor. Thus one can search for Pauli-forbidden transitions in the copper atoms (see fig. 1). The transition energy of the PEP-violating transition is shifted due to the shielding by the "extra" electron in the 1s state. These shifted transition energies can be calculated using a multiconfiguration Dirac-Fock approach, taking the relevant corrections (e.g. relativistic corrections) into account [15,16]. Ramberg and Snow conducted the experiment in the basement of Fermilab and obtained an upper limit on the violation probability. The quantity β²/2 stands for the probability of a Pauli-violating atomic transition and is the de-facto standard in the literature.

VIP at LNGS
A much improved experiment, VIP [17,18], following the concept of Ramberg and Snow, was set up in the underground laboratory LNGS in Gran Sasso, Italy. VIP used charge coupled devices (CCDs) [19] as X-ray detectors, with very good energy resolution, large area, and high intrinsic efficiency. The CCDs had previously been successfully employed in an experiment on kaonic atoms at LNF Frascati [20,21]. The CCDs were positioned around a pure copper cylinder operated without and with up to 40 A current. The cosmic background in the LNGS laboratory is strongly suppressed (∼10^-6) due to the rock coverage. Additionally, the setup was covered by massive lead shielding (see fig. 2).

[Figure: X-ray detector system of the VIP2 experiment.]
The 3-cell SDD detector is cooled by liquid argon to about 100 K and read out via the readout board. Compared to the result of Ramberg and Snow, the VIP result is an improvement of nearly three orders of magnitude.

VIP2 at LNGS
As a next step, the experiment VIP2, with SDDs (Silicon Drift Detectors) as X-ray detectors, was built and installed in LNGS. The experiment is designed for higher sensitivity by providing a larger X-ray detector solid angle, higher current, and active shielding by plastic scintillators read out by silicon photomultipliers as background-sensitive detectors. Due to the timing capability of SDDs, the timing information of the SDD detectors and plastic scintillator signals can be used to additionally suppress background events.

Recent Results
The progress of the VIP2 experiment has been reported in [24,25,26,27]. In 2016 we collected data over a period of ∼70 days without current and ∼40 days with 100 A current. The Monte Carlo generated X-ray energy spectrum in the range 7-9.5 keV around the region of interest (marked in red) and the corresponding measured energy spectrum are displayed in the figure. In order to compare our preliminary result, we used the analysis technique of Ramberg and Snow [14]. The analysis of this data set leads to a preliminary upper limit on the probability that the PEP is violated for electrons in copper. It has to be emphasized that this preliminary result already represents the most stringent test of the PEP with no violation of the Messiah-Greenberg superselection rule.

Summary and Outlook
The experimental program for testing a possible PEP violation for electrons made great progress in 2016. The use of a new type of SDDs as X-ray detectors can further enhance the sensitivity by providing a larger sensitive area. Furthermore, the cooling can be simplified by changing from liquid argon to Peltier cooling. Concerning the reduction of the X-ray background, we will install a passive shielding of Teflon, lead, and copper.
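To get a feel for the statistics behind a Ramberg-and-Snow-type analysis, the number of "new" electrons injected during the stimulated run can be estimated from the current and running time alone. This is a rough, illustrative order-of-magnitude estimate: the 100 A and ~40 days come from the text above, everything else is elementary charge counting.

```python
# Rough scale estimate (illustrative only): how many "new" electrons
# pass through the copper target during the 2016 stimulated run.
E_CHARGE = 1.602176634e-19       # elementary charge in coulombs

current_a = 100.0                # DC current through the copper target (A)
days = 40.0                      # days of data taking with current
seconds = days * 86400.0

n_new_electrons = current_a * seconds / E_CHARGE
print(f"{n_new_electrons:.2e}")  # prints 2.16e+27
```

Only a small fraction of these electrons is captured by copper atoms and cascades down to the 1s level, which is why the resulting limits on β²/2 can be pushed to such small values.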
Given a running time of 3 years and alternating measurements with and without current, we expect either to lower the upper limit of PEP violation by about two orders of magnitude compared to the former VIP experiment, or to discover a violation.
Treatment and outcomes of crisis resolution teams: a prospective multicentre study

Background: Crisis resolution teams (CRTs) aim to help patients in acute mental health crises without admitting them to hospital. The aims of this study were to investigate the content of treatment, service practice, and outcomes of crises of CRTs in Norway.

Methods: The study had a multicentre prospective design, examining routine data for 680 patients and 62 staff members of eight CRTs. The clinical staff collected data on demographic, clinical, and content-of-treatment variables. The service practices of the staff were assessed with the Community Program Practice Scale. Information on each CRT was recorded by the team leaders. The outcomes of crises were measured by the changes in Global Assessment of Functioning scale scores and the total scores on the Health of the Nation Outcome Scales between admission and discharge. Regression analysis was used to predict favourable outcomes.

Results: The mean length of treatment was 19 days for the total sample (N = 680) and 29 days for the 455 patients with more than one consultation; 7.4% of the patients had had more than twice-weekly consultations with any member of the clinical staff of the CRTs. A doctor or psychologist participated in 55.5% of the treatment episodes. The CRTs collaborated with other mental health services in 71.5% of cases and with families/networks in 51.5% of cases. The overall outcomes of the crises were positive, with a small to medium effect size. Patients with depression received the longest treatments and showed the most improvement of crisis. Patients with psychotic symptoms and substance abuse problems received the shortest treatments, showed the least improvement, and were most often referred to other parts of the mental health services. Length of treatment, being male and single, and a team focus on out-of-office contact were predictors of favourable outcomes of crises in the adjusted model.
Conclusions: Our study indicates that, compared with the UK, the Norwegian CRTs provided less intensive and less out-of-office care. The Norwegian CRTs worked more with depression and suicidal crises than with psychoses. To be an alternative to hospital admission, the Norwegian CRTs need to intensify their treatment and meet more patients outside the office.

Background
The crisis resolution team (CRT) model of treating acute mental health crises outside in-patient wards has been implemented in several Western countries in the past decade [1,2]. Because this implementation is part of national policy in the UK and Norway, it is important to evaluate the outcomes of crises after CRT care in ordinary clinical settings [3]. Guidelines or recommendations have been developed for the implementation of CRTs [4-6]. The teams should offer rapid assessment, intensive short-term home treatment, specialist multidisciplinary team interventions, reduced use of coercion, and collaboration with the wider mental health care system and families/networks, and should have gate-keeping functions for acute wards to a greater extent than outpatient clinics or in-patient wards. These key features of the CRT model are more a framework for delivering care and treatment than a specific type of treatment or therapy [1].

Apart from these findings, there is currently no clear evidence of any further clinical or social benefits of CRT care compared with standard care. In a Cochrane review, none of the studies found any differences in symptom outcomes, although none exclusively investigated crisis intervention, and the studies mainly ranged from the 1960s to the 1980s [19].
In the randomized controlled trial of CRT and standard care by Johnson et al., symptoms, quality of life, social functioning, and adverse incidents, such as violence and self-harm, were similar between CRT and standard care after six months of follow-up [8]. Another quasi-experimental study found no clear differences in symptoms, social functioning, or quality of life before and after the introduction of a CRT [9]. Barker et al. reported that carers said the patients got better after CRT input, but that study had a low response rate (29%) [13].

Nor have most studies attributed any disadvantages to CRT care. The Cochrane review showed that treatment by a CRT was as safe as standard hospital care in terms of suicide, that home care reduced the family burden, and that there was no difference in the incidence of death [19]. Keown et al. reported that the number of suicides remained constant [11]. Bookle and Webber found that people of black ethnic origin used home treatment to the same extent as other ethnic groups in mental health crises [20]. However, Kingsford and Webber found that people from more socially deprived areas, older people, and those referred by enhanced community mental health teams had poorer outcomes after a CRT intervention [21].

In terms of admissions under the Mental Health Act in the UK, Keown et al. found that detentions under sections 2 and 3 of the Mental Health Act 1983 increased, whereas those under sections 5(2) and 5(4) declined, following the introduction of crisis resolution and assertive outreach teams [11]. Barker et al. found a reduction in admissions under the Mental Health Act 1983 after CRTs began operating in Edinburgh [13]. These discrepancies indicate the need for further studies of the impact of CRTs on Mental Health Act admissions and on socially deprived people before we can draw any clear conclusions.
In an implementation study of the crisis resolution team model in Norway, it was found that the CRT model had been implemented without a rapid response, a gate-keeping function, or 24/7 availability [22]. The aim of the present study was to investigate and compare patients and CRTs with respect to: 1) content of treatment and service practices; 2) outcomes of crises; 3) predictors of favourable outcomes; and 4) where possible, comparison of Norwegian data with data from the UK.

Study design
This study had a naturalistic prospective pre-post multicentre design. The study was part of the Multicentre Study on Acute Psychiatry (MAP) in Norway. The multicentre study was planned and implemented by a national network to evaluate acute psychiatric services.

Setting
Norway has a total population of 4.9 million people. The country is characterized by more rural areas and a lower population density than many other countries. The standard of living is generally high. Mental health service provision for adults consists of primary care and specialist mental health services. The primary health care services run by the 430 municipalities consist of general practitioners (GPs) and primary care mental health teams, usually staffed by psychiatric nurses, social workers, and occupational therapists. Many municipalities have residential services, day centres for people with mental health problems, and ambulatory care. The specialized mental health services run by 20 health authorities include 75 community mental health centres (CMHCs), hospitals with acute psychiatric wards and some specialized wards, and psychiatrists/psychologists in private practice. The CMHCs usually consist of outpatient clinics, in-patient wards, day care, and one or more specialized teams (case management teams, early intervention teams for first-episode psychoses, CRTs, and assertive community treatment teams).
Specialized services for substance abuse are usually organized as part of the specialized mental health services in the health authorities. In 2005, the national health authorities of Norway decided to implement the CRT model at all CMHCs, inspired by the implementation of CRTs in the UK. Establishing CRTs was given national policy priority, to improve the accessibility of specialized mental health services to people in mental health crisis and to offer these patients a rapid, intensive, and ambulatory alternative to admission to an acute psychiatric ward.

In a telephone survey of CRTs in Norway, 51 of the 76 CMHCs had established a CRT by 2010. Thirty of these only operated during office hours and one had 24/7 availability. When asked about their collaboration with families, 38 replied that they did collaborate, and 31 replied that they most frequently met the patients at home. This indicates that the way the CRTs are organized and operate has not changed significantly since our data collection in 2005-2006, and that our data are still representative of these teams, although there are some indications of somewhat more home treatment in 2010 than in 2005-2006 [23].

In 2005, there were nine CRTs for adults in Norway, and eight of these teams participated in this study. The last CRT did not participate because it was undertaking a study of its own [24]. The target group of the CRTs was intended to be patients with mental health problems so severe and acute that, without the involvement of a CRT, acute admission would usually be necessary [5]. The CRTs in this study were from all parts of Norway, varying from urban to rural areas, with catchment areas ranging from 65,000 to 115,000 inhabitants. They consisted of 4-19 team members, and the teams were multidisciplinary (mainly psychiatrists, psychologists, psychiatric nurses, and social workers). Three had a psychiatrist and six had a psychologist as a full-time member of the team.
The intended response time was 12-48 hours and the intended length of treatment by these teams was between five consultations and eight weeks. The CRTs were similar in that they were not available 24/7, played no gate-keeping role for acute psychiatric wards, and treated patients who were not considered for hospital admission. There were variations between the CRTs in their opening hours, their authority to admit patients to acute in-patient wards, and their ability to facilitate early discharge from acute wards. The most usual referral routes to the CRTs were self-referral, and referral by GPs, CMHCs, primary care mental health teams, and casualty departments. Sample In this multicentre study, the sample consisted of 680 patients and 62 staff members of eight CRTs. All patients referred during a three-month period, aged 18 years or more, and having face-to-face consultations with the CRTs were included in the study. There were no exclusion criteria. Further patient and team characteristics have been presented in a previous paper [22]. Data collection The CRTs contributed to the planning of the study through their participation in semi-annual workshops in 2003-2005 in preparation for the study. The data were collected in 2005-2006. The CRTs included all patients referred during a three-month period, or longer if necessary to include 60 patients. The inclusion period started at different time points for different CRTs. The number of 60 patients was chosen to include a reasonable sample of patients from each team for a comparative data analysis. For patients seen for more than two months, the end of acute treatment was defined as being at two months, and the discharge assessment was performed at this point for these patients. A registration form was designed to record information about the patients and the content of their treatments from admission to discharge. The form was piloted at two of the sites before its final revision. 
The data were collected by the clinicians in each CRT. Measures At admission, socio-demographic characteristics and suicidal risk were assessed by the clinicians. Suicidal risk was coded as (i) no suicidal thoughts or plans, (ii) passive death wishes or suicidal thoughts without concrete plans, (iii) concrete suicidal plans or self-injury but no death intention, and (iv) self-injury and death intention. This suicidal scale was designed in collaboration with the National Centre for the Prevention of Suicide [25]. At discharge, a diagnosis according to the International Statistical Classification of Diseases and Related Health Problems, 10th Revision (ICD-10) [26], the content of treatment, and the reason for discharge were recorded. The content of treatment included variables such as length of treatment, frequency of and participants in consultations, collaboration with other services, unwanted incidents, and pharmacological treatments. Symptom severity and level of functioning were assessed at both admission and discharge using the Health of the Nation Outcome Scales (HoNOS) and Global Assessment of Functioning scale, split version (GAF) [27,28]. The patients who had one consultation were only rated once. The HoNOS consists of 12 subscales, each of which rates problems from 0 (no problem) to 4 (severe to very severe problem). In this study, the sums of scales 1-8 and 9-12 on HoNOS were calculated to give an overall measure of symptom severity and social problems, respectively. The subscales of HoNOS for overactive, aggressive, or disruptive behaviour, non-accidental self-injury, problems with drinking or drug-taking, problems with hallucinations and delusions, and problems with depressed mood were also included as the clinical scales most relevant to this study.
The clinicians were trained in rating HoNOS in the half-day training seminar used in the UK, and all the clinicians had experience in rating GAF as a routine measure required for all treatment episodes in the mental health services. An earlier study, which used the same training for the clinicians, had shown acceptable inter-rater reliability (intra-class correlation coefficient [ICC] of 0.60-0.89) for the HoNOS subscales used in this paper [29]. The Community Program Practice Scale (CPPS) [30] was completed by each clinician. The CPPS is a questionnaire that measures the practice and program climate of non-residential service models and consists of a 45-item scale on a five-point Likert scale (from 1 = strongly disagree to 5 = strongly agree) with 13 subscales. For this study, the following six subscales were chosen as the most clinically relevant: case management, out-of-office contact, medication emphasis, team model, family orientation, and involvement. The case management sub-scale measures whether the staff provide practical help to the patients; the out-of-office contact sub-scale measures to what degree the staff work outside the office; the medication emphasis sub-scale measures how much emphasis the team puts on medication as part of the treatment; the team model sub-scale measures whether more than one team member meets the patients; the family orientation sub-scale measures whether the team provides information or counselling for clients' families; and the involvement sub-scale measures whether the staff members find their work interesting and challenging. The HoNOS, GAF, and CPPS scales have shown satisfactory reliability and validity [30][31][32]. Several studies have indicated moderately high internal consistency and low item redundancy for the HoNOS sum score, and therefore support the instrument's use as a meaningful measure of symptom severity [31].
Söderberg found that when staff use patients' GAF scores to measure changes and outcomes, it might be necessary to use several raters for an individual patient for the GAF scales' reliability and validity to be satisfactory [33]. In this study, two or more raters filled in the registration form, including the GAF assessment score, for each patient. A questionnaire completed by the team leaders assessed treatment approaches: response time, length of treatment, whether the CRT had a team approach with shared responsibility for the patient, collaboration with the wider mental health care system and families/networks, use of home treatment, and whether the CRT wanted to see the patient several times a week. Approval from authorities and contributions from user groups The study was approved by the Regional Ethical Committee for Research in Health and by the Norwegian Data Inspectorate. The Directorate of Health and Social Affairs consented to the use of information from the health services. The data were collected from all patients without their written consent, because the Regional Ethical Committee for Research in Health had agreed to this insofar as it was important to include information on all patients. Representatives for the user organizations Mental Health Norway and the National Association of Relatives in Mental Health participated as a reference group and in the workshops to plan and prepare the study. Data analysis HoNOS scales with missing values (average 5.5% across scales) were set to 0, because this was considered to be the most probable rating based on the skewed distribution with most patients rated 0, and on the assumption that clinicians most easily forgot to mark the rating when there was no indication of problems. This was also chosen in favour of imputation because it was the most conservative way to measure the severity of the patients' mental health problems. 
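As a minimal sketch of the HoNOS scoring and missing-value handling described above (illustrative ratings, not study data; the function name `honos_sums` is ours), missing subscale ratings are set to 0 and the sums of scales 1-8 and 9-12 give overall measures of symptom severity and social problems:

```python
# Hypothetical illustration of the HoNOS handling described in the text:
# missing subscale ratings are set to 0 (the most conservative choice,
# given the skewed distribution with most patients rated 0), and the
# sums of scales 1-8 and 9-12 measure symptom severity and social
# problems, respectively.

def honos_sums(ratings):
    """ratings: dict mapping subscale number (1-12) to a 0-4 rating,
    or None where the clinician left the rating blank."""
    filled = {k: (ratings.get(k) or 0) for k in range(1, 13)}
    severity = sum(filled[k] for k in range(1, 9))   # scales 1-8
    social = sum(filled[k] for k in range(9, 13))    # scales 9-12
    return severity, social

patient = {1: 2, 2: 1, 3: None, 4: 0, 5: 3, 6: 1, 7: 0, 8: 2,
           9: 1, 10: None, 11: 0, 12: 2}
print(honos_sums(patient))  # missing scales 3 and 10 count as 0 -> (9, 3)
```

Zero-filling rather than model-based imputation mirrors the conservative choice stated in the text; it can only understate, never overstate, the severity of problems.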
Diagnoses were missing for 53.5% and 17.4% of the patients in two teams and for 3.4%-10.4% in the other six teams. In Norway, only physicians/psychiatrists and psychologists are authorized to make ICD-10 diagnoses. The teams with the most missing values on the diagnosis variable operated without a physician/psychiatrist or psychologist as a regular member of the team and with nurses and social workers constituting the majority of their staff. In these teams, diagnoses were made by physicians who were not a part of the team. For this reason, the HoNOS scales were used instead of diagnoses in the analysis of the type and severity of the psychiatric problem. One of the CRTs did not register the length of treatments (n = 46). An imputation of missing values was performed with a regression model. We identified the socio-demographic and clinical variables that predicted length of treatment. For each of these patients, we calculated the length of treatment based on the estimated coefficients of these predictor variables. Descriptive and test statistics were assessed on all baseline variables according to whether the variables were categorical or continuous. Variations between the CRTs were also computed. In the analysis of treatment outcomes, we included only those patients who had received more than one consultation (n = 455). A paired-samples t test was used to evaluate the impact of the CRT interventions on the patients' clinical conditions by comparing the means of the pre-post test scores for the HoNOS total scores and the GAF scales. The calculation of the effect sizes was based on Cohen's d, defined as the difference between two means (pre- and post-treatment) divided by the standard deviation at admission [34]. A multilevel regression analysis was performed with the difference score for GAF symptoms as the dependent outcome variable. The ICC was 2.75% (ICC multiplied by 100), which indicated that the team level only contributed slightly to the explained variance.
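The effect-size definition quoted above (Cohen's d as the difference between the pre- and post-treatment means divided by the standard deviation at admission [34]) can be sketched directly; the numbers below are illustrative, not study data:

```python
from statistics import mean, stdev

def cohens_d(pre_scores, post_scores):
    # Difference between the two means divided by the SD at admission,
    # as defined in the text (pre-treatment SD in the denominator).
    return (mean(pre_scores) - mean(post_scores)) / stdev(pre_scores)

pre = [14, 10, 16, 12, 8, 13, 11, 15]   # e.g. HoNOS totals at admission
post = [11, 9, 13, 10, 7, 12, 9, 12]    # the same patients at discharge
print(round(cohens_d(pre, post), 2))    # -> 0.75
```

Note that using the admission SD, rather than a pooled SD, is the specific variant stated in the text; higher values here mean larger improvement when scales score problems as high values.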
For this reason, a linear regression analysis was performed, with a stepwise backwards variable selection procedure. Potential predictors of a favourable outcome were chosen based on the guidelines for the implementation of CRTs, both in relation to the target group and in clinical practice. The predictor variables selected were age, sex, being single, current employment, HoNOS scales 1-3 and 6-7 at admission, previous contact with mental health services, self-referral, length of treatment, intensity of consultations, doctor/psychologist participation in the consultations, collaboration with other mental health services and families/networks, pharmacological treatment, and the six selected subscales of the CPPS. The CPPS variables were used as team-level variables. Pairwise interaction tests were performed on all significant predictors. SPSS software version 15 for Windows (SPSS Inc., Chicago, IL) was used for most of the data analysis. Multilevel regression analysis was performed using the software SAS 9.2. A significance level of 0.05 was used. Results As shown in Table 1, the 680 patients had a mean age of 40 years, 60% were female, 60% were single, and 25% were employed. The median number of patients per team was 80 (range, 46-147). The clinicians assessed patients to be at risk of suicide in about 60% of cases, and the mean GAF scores were 48.4 on the symptom scale and 49.6 on the functioning scale. The clinicians at the CRTs (n = 62) characterized themselves as focusing most often on involvement and least often on out-of-office contact. The analysis of the CRTs showed significant differences in the patients' characteristics and the service practices of staff members on most variables. Content of treatment As shown in Table 2, the mean length of treatment for the total sample was 19 days (SD = 24.4, range 0-97 days).
Two hundred and twenty-five patients had a single consultation for CRT care/assessment, and the remaining 455 received treatment with a mean length of 29 days. We found no significant difference between these two groups in the severity of their mental health illnesses. The mean length of treatment differed significantly between the CRTs (range 7-30 days). Patients with depressive problems received significantly longer periods of treatment (21 days, SD = 22) than those with psychotic problems. In 7.4% of cases, the clinicians in the CRTs met the patient more than twice a week, and doctors and psychologists participated in 55% of the treatment episodes. The CRTs collaborated with other parts of the mental health services in 72% of cases and with families/networks in 52% of cases. Pharmacological treatment was given to 42% of the patients. Few structured diagnostic interviews were used by the CRTs. Eight patients were under compulsory treatment. With regard to the treatments, 75% of patients concluded them as planned. Outcomes of crises Of the 455 patients who had more than one consultation, 262 had positive changes in the HoNOS total score and 256 in the GAF symptom score. As shown in Table 3, the mean HoNOS total scores were 12.1 at admission and 10.02 at discharge. The corresponding figures for the GAF symptoms were 49.2 and 54.3, respectively. This indicates a significant improvement between admission and discharge, with the largest effect size on the GAF symptoms (d = -0.45). The effect sizes across the GAF and HoNOS total scores (d = 0.15-0.45) indicated a small or medium improvement after CRT care [34]. A comparison of the effect sizes of the CRTs showed that the effect sizes of the HoNOS and GAF total scores for the CRTs differed (d = 0.19-0.45).
Table 4 shows the numbers of patients with scores of ≥ 2 on the clinically relevant HoNOS subscales at admission and discharge. These scores decreased most on the depression scale (19.4) and least on the psychosis scale (3.0) and the substance abuse scale (2.7). Table 5 shows a linear multiple regression analysis of the significant predictors of favourable treatment outcomes, both unadjusted and adjusted for other variables. With adjustment for other variables, the length of treatment (p < 0.001), being male (p = 0.002), being single (p = 0.013), CRT focusing on out-of-office contact (p = 0.016), and having a problem with non-accidental self-injury (p = 0.017) were associated with a favourable outcome. A high degree of involvement of the team members (CPPS subscale) was negatively associated with outcome (p = 0.006). Current employment, having received consultations more than twice a week, and the participation of a doctor/psychologist in the consultations were significant predictive variables before we adjusted for other variables, but were not significant in the final multiple regression model. Predictors of favourable outcomes of crises The pairwise interaction tests of all the significant predictors showed that a favourable outcome depended on the length of treatment: interaction effects p ≤ 0.001. The regression model explained 13.7% of the variance. Discussion The pattern of contact of the Norwegian CRTs was not characterized by intensive care, and there was an emphasis on depression and suicidal problems rather than on psychosis or substance abuse problems. The CRTs collaborated with other parts of the mental health system and with families/networks, but they had limited out-of-office and multidisciplinary contact. Content of treatment Providing intensive home-based care is a key element of the CRT approach [1][2][3][4]. Half the CRTs in this study claimed to have focused on home treatment.
Only one team claimed that they wanted to see patients several times a week, and only 7.4% of the patients had had more than twice-weekly consultations with any member of the clinical staff of the CRTs. A team focus on out-of-office contact was a predictor of a favourable outcome in the adjusted regression model. Compared with the UK, where home treatment programmes and frequent visits (usually at least daily) are considered key components of CRT care, the Norwegian treatment by CRTs can be characterized as short-term interventions with less intensive care, and with more outpatient care than home-based care. There might have been some changes related to home treatment since this study; the telephone survey mentioned in the Setting section of this paper indicated more home treatments occurring in the Norwegian CRTs [23]. We suggest that future studies include measurement of actual home-treatment frequency. It has also been emphasized in this model that CRTs should be specialist multidisciplinary teams consisting of psychiatrists, psychologists, psychiatric nurses, social workers, and other social care professionals [1][2][3][4]. In this study, five of the CRTs lacked a full-time psychiatrist as part of the team. A national survey of CRTs in England also found a lack of full-time consultant psychiatrists (45% of teams had input from a psychiatrist at a mean of 0.5 full-time positions) [35]. A significant proportion of the patients (about 45%) in our study did not meet a doctor or psychologist in a CRT during the treatment episode. This lack of consultant psychiatrists and psychologists is also reflected in the fact that many of the patients were not diagnosed by the CRTs during the treatment episode. In the unadjusted regression analysis, patients who saw a physician/psychologist during the consultations had better treatment outcomes. This lack of specialized professionals can restrict the CRTs' ability to provide comprehensive, multidisciplinary care.
A significant number of patients received only a single consultation for CRT assessment or care. Most of them were referred to other parts of the mental health services. This probably reflects the role of the CRTs as a kind of "triage" in the mental health system for patients with acute mental health problems. A key question is whether this screening process should be a function of outpatient clinics. The remaining group of patients received about four weeks of CRT care, with small to medium improvement. The size of the effect was not surprising given the brief period of the crisis intervention. Conversely, CRT care is part of a treatment chain in the mental health system. The clinical benefit of CRT care might be delayed, and may appear in another part of the mental health service. We hypothesized that collaboration with other mental health services and families/networks would predict favourable outcomes, but it did not. In Norway, there has been particular emphasis on this part of CRT care. In the review by Winness et al. and the study by Hopkins and Niemiec of service users' experiences with CRTs, the inclusion of family members as part of the treatment and the staff's communication with other services were appreciated [15,16]. However, based on our study, we know little about the content of the contact with other parts of the mental health system or with families/networks, only that there had been some form of contact (consultations, meetings, by phone, etc.). Outcomes of crises This study indicates that patients may benefit from CRT care. However, patients with severe mental health illnesses were not common in our sample compared with studies in the UK. In studies of home-care acute psychiatric treatment based on data collected before the government proposed the establishment of nationwide CRTs in the UK, it was found that 53-62% of the patients had psychotic disorders [36][37][38][39].
In Johnson's two samples from 2005, 37% and 40% had a psychotic disorder [8,9]. But the evidence is not wholly consistent: in a study by Barker et al. from Edinburgh, 17% of the patients had psychotic symptoms [13], and Tacchi found 13.5% with psychosis in a home treatment emergency response service in Newcastle [40]. With the lack of a randomized control group in this study, we cannot tell whether the patients would have progressed without CRT care (see the "Strengths and limitations" section below). The staff of these CRTs may also have overestimated the patients' improvement. Our measurement of the outcomes of crises was not based on patients' reports, but on the clinical staff's evaluations. By having the clinicians from the CRTs collect the data, there is a risk of observer bias, especially with respect to rating the HoNOS and GAF scales at initial assessment and discharge. Staff members from these teams were participating in the development of a new service in Norway, catering for people experiencing a mental health crisis. This might have increased the enthusiasm of the staff for their work, which may again have caused the staff to rate the patients' conditions better than they really were. Patients with depressive symptoms showed the best outcomes from their crises, and non-accidental self-injury was also related to favourable outcomes. Patients with psychotic symptoms received shorter treatments, showed less improvement, and were most frequently referred to other parts of the mental health services. Our study indicates that because of the way in which Norwegian CRTs operate, they predominantly reach patients with depression and at risk of suicide. The length of treatment was a highly significant predictor of favourable outcomes of crises, and an interaction effect showed that favourable treatment outcomes depended on the length of treatment.
Although the interventions of the CRTs are meant to be brief, this finding indicates that these teams should provide intensive treatments for patients experiencing acute mental health crises rather than referring them to other parts of the mental health system or for rapid discharge. Then again, this finding may also indicate that people improve with time, regardless of any CRT care (see the "Strengths and limitations" section below). In addition to the length of treatment, a team focus on out-of-office contact and suicidal problems, being male, and being single predicted favourable outcomes in the adjusted model. There were no significant differences between the sexes in the total severity of their symptoms or their social problems. The impact of CRT care may be greater for patients with little support from a social network. The regression model in this study explained only a small part of the variance (13.7%). Despite the statistically significant results for several independent variables, it is clear that other unknown variables influenced the outcomes of these crises. CRT care is a complex intervention involving many factors. Given the variations in clinical practice and the significant variations in the social and clinical functioning of the patients in this study, it was likely that we would be unable to identify all the critical components required for favourable outcomes of these crises. The possible random distribution attributed to the unreliability of the GAF scale may also have reduced the amount of variance explained [33]. There were differences between the CRTs in the lengths of treatment and the outcomes of crises, insofar as the CRTs with the best staffing provided the longest treatment episodes and had the best outcomes. However, the resources of the local mental health services in the catchment areas of the CRTs may have been intermediate variables that varied between the CRTs.
The proportions of compulsory treatments were low in these CRTs, but this is probably attributable to the small proportions of patients with severe mental health illnesses. It is hard to interpret the finding that a high degree of involvement by the team members was negatively associated with the treatment outcomes. This might be a random finding. However, this sub-scale measures whether the staff members find their work interesting and challenging and whether they are involved in their work. The implementation of the CRT model is a new way of treating patients experiencing mental health crises. Most staff members at the CRTs were enthusiastic and devoted to this new way of working. In their meetings with patients, this enthusiasm may have led to their over-involvement and excessive zeal, which may have caused negative outcomes of treatment. Strengths and limitations The major strength of our design was its good external validity, because all patients treated at the CRTs were included and the data were obtained in routine clinical services, with no exclusion criteria. The lack of a control group and of randomization were the most important limitations. Randomized controlled trials (RCTs) are generally considered the gold standard evidence for treatment effectiveness in medicine, although it has been argued that the complexity of interventions and the many factors that may cause outcomes to vary between settings may limit the usefulness of RCTs in mental health services research [41]. Because our study was an uncontrolled naturalistic study, the positive outcomes of crises after CRT care may have resulted from factors other than the CRT intervention. The patients in this study were included because they were experiencing an acute mental health crisis. Their improvements may have been spontaneous recoveries or the natural fluctuations that often characterize mental health problems.
Conclusions Our study shows that Norwegian CRTs provide less intensive care and less out-of-office contact than UK CRTs, and that they concentrate on depression and suicidal crises rather than psychoses. In the future implementation of CRT care in Norway, there should be an emphasis on improving the intensity of contact and ambulatory work, and an expansion of the target group to include patients with psychosis.
Obesity and high waist circumference are associated with low circulating pentraxin-3 in acute coronary syndrome Background Long pentraxin 3 (PTX3) is a component of the pentraxin superfamily and a potential marker of vascular damage and inflammation, associated with negative outcome in patients with acute coronary syndromes (ACS). Obesity is a risk factor for cardiovascular disease and PTX3 production is reported in abdominal adipose tissue. Low PTX3 is however reported in the obese population, and obesity per se may be associated with less negative ACS outcome. Methods We investigated the potential impact of obesity and high waist circumference (reflecting abdominal fat accumulation) on plasma PTX3 concentration in ACS patients (n = 72, 20 obese) compared to age-, sex- and BMI-matched non-ACS individuals. Results Both obese and non-obese ACS patients had higher PTX3 than matched non-ACS counterparts, but PTX3 was lower in obese than non-obese individuals in both groups (all P < 0.05). PTX3 was also lower in ACS subjects with high than in those with normal waist circumference (WC). Plasma PTX3 was accordingly associated negatively with BMI and WC, independently of age and plasma creatinine. No associations were observed between PTX3 and plasma insulin, glucose or the short pentraxin and validated inflammation marker C-reactive protein, that was higher in ACS than in non-ACS individuals independently of BMI or WC. Conclusions Obesity is associated with low circulating PTX3 in ACS. This association is also observed in the presence of abdominal fat accumulation as reflected by elevated waist circumference. Low PTX3 is a novel potential modulator of tissue damage and outcome in obese ACS patients. Introduction The pentraxin superfamily includes short and long components [1,2]. C-reactive protein is a liver-synthesized short pentraxin and a strongly validated marker of systemic inflammation [1][2][3]. 
Long pentraxins are however synthesized by various cell types and may differentially modulate the inflammatory response under different clinical conditions [1][2][3]. In particular, long pentraxin 3 (PTX3) may be secreted by adipocytes under proinflammatory stimuli and it has been proposed as a clinical marker of vascular damage [1,2]. Plasma PTX3 was accordingly reported to be elevated in patients with arterial stiffness [4] and subclinical [5] or unstable atherosclerotic lesions [6], and high circulating PTX3 is observed in acute coronary syndromes (ACS) [7,8]. In ACS patients, higher PTX3 was also remarkably associated with negative outcome in terms of subsequent events and overall survival [9,10]. Despite its clinical relevance, factors modulating circulating PTX3 in ACS remain, however, incompletely defined. Obesity is an independent risk factor for coronary artery disease and ACS, but obesity per se has been paradoxically associated with improved prognosis in ACS patients [11,12]. In the general population [4,13] and in disease states including chronic kidney failure [14,15] and insulin resistance or metabolic syndrome [16][17][18][19], low plasma PTX3 was found in most reports in obese individuals and in subjects with high waist circumference, despite high PTX3 expression in abdominal fat [20,21]. The potential interactions between obesity, abdominal fat accumulation and ACS in modulating plasma PTX3 remain to be defined. In the current study we therefore investigated the impact of obesity and waist circumference on plasma PTX3 in non-obese and obese ACS patients and in sex-, age-, and BMI-matched non-ACS control subjects. We hypothesized that obesity has a negative impact on circulating PTX3 in ACS, and that similar interactions are also observed between PTX3 and high waist circumference, a surrogate marker of abdominal fat accumulation.
Finally, we tested the hypothesis that changes in PTX3 are unrelated to those of the short pentraxin and inflammation marker CRP in ACS patients. Subjects and experimental protocol The study conforms to the principles outlined in the Declaration of Helsinki and was approved by the institutional Ethics Committee. All patients were given detailed information on the study aims and risks and gave written consent before enrolment. In all participants, clinical history and a complete physical examination, including measurements of blood pressure, body mass index (BMI), and waist circumference, were collected. Obesity was defined as BMI > 30 kg/m2, while high waist circumference was defined based on Adult Treatment Panel III diagnostic criteria for metabolic syndrome (>102 or >88 cm for male and female subjects, respectively). Diagnosis of hypertension was based on blood pressure measurement (>135/85 mmHg) or antihypertensive medications; diagnosis of dyslipidemia was based on plasma triglycerides (>150 mg/dl) and HDL cholesterol (<50 or <40 mg/dl for females and males, respectively) or triglyceride-lowering medications; diagnosis of type 2 diabetes was based on HbA1c >6.5% or antidiabetic medications. Exclusion criteria were clinical or laboratory evidence of liver failure or disease, renal failure (plasma creatinine above 1.5 mg/dl), cancer, and chronic autoimmune and thyroid disease. Females taking hormonal estrogen therapy were also excluded from the study. No subject in either group had a history or clinical or laboratory signs of systemic inflammatory disease. ACS 72 consecutive patients with acute coronary syndrome (50 non-obese, 22 obese) were recruited in the Coronary Care Unit of the Cardiovascular Department of the Azienda Ospedaliero-Universitaria "Ospedali Riuniti" in Trieste. ACS was diagnosed based on WHO criteria in the presence of two of the following criteria: ischemic chest pain, serial ECG modifications, and troponin I elevation with subsequent reduction.
For all patients, one overnight-fasted blood sample was collected within 24 hours of admission. No differences in timing of sample collection occurred between non-obese and obese patients. After separation, plasma was stored at −80 °C until biochemical and hormonal measurements were performed. Non-ACS 52 control subjects with no clinical history of coronary artery disease, based on detailed history and clinical examination, were also studied (33 non-obese, 19 obese). These subjects were matched to the ACS groups for age, sex, BMI, and waist circumference. Overnight-fasted blood samples were also collected for the control subjects. Part of the study results in a smaller study population, pertaining to the associations between obesity, ACS, insulin resistance, and adipose tissue hormones, have been reported elsewhere [22]. Plasma analyses Plasma glucose, HDL cholesterol and plasma triglycerides were measured using standard methods. Plasma insulin was measured by ELISA (Insulin Human Ultrasensitive ELISA; DRG Instruments, Marburg, Germany). Insulin sensitivity was assessed by the validated HOMA index using the following formula: HOMA = (FPG*FPI)/22.5, where FPG and FPI are fasting plasma glucose (mmol/l) and fasting plasma insulin (μU/ml), respectively [20]. Plasma pentraxin-3 (PTX3) (Human Pentraxin3/TSG-14 ELISA System; Perseus Proteomics Inc., Tokyo, Japan) and C-reactive protein (CRP) (High-sensitivity C-reactive protein; Diagnostics Biochem Canada Inc., London, Ontario, Canada) were measured using commercially available ELISA kits. Statistical analysis The StatView software (SAS Institute, Cary, NC, USA) was used for statistical analyses. Normality tests were run to assess data distribution. Comparisons between ACS patients and non-ACS control subjects were made by unpaired Student's t-test, or by Wilcoxon test for variables with non-normal distribution (PTX3, HOMA-IR and CRP).
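The HOMA formula quoted above can be written directly as code; the example values are illustrative, not patient data:

```python
def homa_ir(fpg_mmol_l, fpi_uu_ml):
    # HOMA = (FPG * FPI) / 22.5, with fasting plasma glucose (FPG) in
    # mmol/l and fasting plasma insulin (FPI) in uU/ml [20].
    return (fpg_mmol_l * fpi_uu_ml) / 22.5

print(homa_ir(5.0, 9.0))  # -> 2.0
```

Note the unit convention: the 22.5 normalizing constant assumes glucose in mmol/l; glucose reported in mg/dl must first be divided by 18 to convert.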
To assess differences between obese and non-obese ACS and non-ACS patients, ANOVA or the Kruskal-Wallis test for non-parametric variables was used. Linear regression analysis was used to determine associations between PTX3 and different study variables that are potentially involved in its regulation. Multiple regression analysis was then used to investigate potential independent associations between groups of statistically related variables. Due to non-normal distribution, log-transformed values for PTX3, HOMA-IR and CRP were used for regression analyses, and log-transformed PTX3 was used as the dependent variable in multiple regression analyses. All data are reported as mean ± standard deviation and range, unless stated otherwise. P values of less than 0.05 were considered statistically significant. Results Clinical characteristics, metabolic and hormonal profile ACS and non-ACS patients were comparable for sex, age, BMI, waist circumference, prevalence of hypertension and dyslipidemia, type 2 diabetes, blood pressure, lipid profile, plasma glucose and HOMA insulin resistance index. Plasma C-reactive protein and PTX3 were higher in the whole ACS group than in non-ACS patients (Table 1). Linear regression analysis between PTX3 and anthropometric and biochemical variables in all ACS subjects In all ACS subjects (n = 72) PTX3 was associated positively with age and plasma creatinine. Consistent with the impact of obesity and waist circumference on PTX3, plasma PTX3 was associated negatively with BMI and WC (Figure 2), and both associations were independent of age and plasma creatinine in multiple regression analysis (Tables 2, 3). No statistically significant associations were instead observed between PTX3 and total and HDL-cholesterol, triglycerides, blood pressure, plasma glucose, insulin and HOMA insulin resistance index (Table 2), despite higher insulin and HOMA index in obese compared to non-obese subjects in both ACS and control groups (P < 0.05).
Similar associations of PTX3 were observed in the control group alone (BMI: r = −0.32, P = 0.04, WC: r = −0.22, P = 0.08). When all patients and control subjects were considered together, however, the correlations were no longer significant (BMI: r = −0.15, P = 0.11, WC: r = −0.04, P > 0.2), likely due to different PTX3 plasma concentrations in the two groups at any given BMI level. Obesity, waist circumference and CRP At variance with plasma PTX3, CRP was not lower in obese than in non-obese ACS patients (P > 0.2). Comparable CRP plasma concentrations were also observed when patients were stratified according to waist circumference (P > 0.2). Plasma CRP was accordingly not associated with BMI or WC in linear regression analysis (P > 0.2; Figure 3). Discussion In the current study we provide novel information on the impact of obesity or waist circumference on plasma PTX3 in ACS. Results demonstrate that: 1) ACS leads to plasma PTX3 elevation in both non-obese and obese patients; 2) obesity is however associated with lower PTX3 in people with and without ACS; 3) lower PTX3 is also observed in patients with high compared to those with normal waist circumference, a marker of abdominal fat accumulation. PTX3 is a component of the pentraxin superfamily reportedly involved in the modulation of vascular inflammation and damage [1,2]. Although the majority of available studies indicate a negative impact of obesity on plasma PTX3 in the general population and various disease states [11][12][13][14][15][16][17][18], obesity is a strong risk factor for cardiovascular events and PTX3 is commonly elevated in ACS [7][8][9][10]. The current data confirm that ACS enhances circulating PTX3, but they further demonstrate that the negative impact of obesity on plasma PTX3 extends from non-ACS to ACS individuals.
The lack of statistically significant associations indicates that changes in plasma lipid profile, glucose metabolism or systemic inflammation were unlikely to contribute to lower PTX3 in obese ACS patients. Since abdominal adipose tissue is a potentially relevant source of PTX3 [20,21], the impact of waist circumference on plasma PTX3 was also directly investigated, and a negative association was also observed in ACS between waist circumference and PTX3. Low PTX3 production in abdominal adipose tissue could therefore be, at least in part, paradoxically responsible for lower PTX3 plasma concentration in ACS patients with high waist circumference. As an alternative explanation, obesity and abdominal fat accumulation could lower PTX3 production in other cell types through yet unidentified signalling and mechanisms, which should be investigated in future studies. Higher PTX3 is associated with negative outcome in ACS, and the current results therefore suggest that a less pronounced PTX3 elevation may contribute to positively modulate outcome and survival in obese ACS patients [7][8][9][10]. The association between PTX3 and negative outcome had been originally proposed to involve direct negative effects of PTX3 in cardiac and vascular tissues [7]. Strong emerging evidence however indicates that PTX3 elevation may represent an adaptive, anti-inflammatory response to pre-existing vascular damage [23,24], and this concept is also supported by the differential changes of the pro-inflammatory short pentraxin CRP and of PTX3 in ACS in the current study. More pronounced tissue damage, rather than PTX3 elevation per se, could therefore be directly responsible for negative outcome in ACS patients with highest PTX3. Potential BMI-dependent characteristics of cardiovascular lesions should be directly investigated in obese ACS patients, along with their potential impact on PTX3. Limitations of the present study should be acknowledged.
First, factors regulating PTX3 production and plasma concentration remain largely unknown, and the current cross-sectional study design in vivo could not directly address potential mechanisms underlying altered circulating PTX3, which should be investigated in experimental models. The potential interaction between obesity, PTX3 and ACS outcome and survival will also need to be investigated and confirmed in future studies. Finally, we elected to base the diagnosis of diabetes on HbA1c levels, since plasma glucose could have been acutely affected by metabolic changes induced by ACS per se. The current findings however indicate a novel link between obesity and plasma PTX3 in ACS, and understanding the underlying mechanisms will likely lead to novel potential prevention and treatment strategies to improve ACS prognosis in both obese and non-obese patients. Conclusion In conclusion, we demonstrated a negative impact of obesity on circulating PTX3 in ACS patients. A similar negative impact was also observed for elevated waist circumference, a surrogate marker of abdominal fat accumulation. These effects do not extend to the short pentraxin and validated inflammation marker CRP, whose plasma concentrations were not reduced in obese ACS patients. Low PTX3 is a novel potential modulator of tissue damage and outcome in obese ACS patients.
Urban vegetation cooling potential during heatwaves depends on background climate The capacity of vegetation to mitigate excessive urban heat has been well documented. However, the cooling potential provided by urban vegetation during heatwaves is less known, even though heatwaves are projected to become more severe with climate change. Across 24 global metropolises, we combine 30 m resolution satellite observations with a theoretical leaf energy balance model to quantify the change of the leaf-to-air temperature difference and stomatal conductance during heatwaves from 2000 to 2020. We found the responses of urban vegetation to heatwaves differ significantly across cities and are mediated by climate forcing and human management. During heatwaves, vegetation in Mediterranean and midlatitude humid cities shows a significant decrease in cooling potential in most cases due to large stomatal closure, while vegetation in arid cities shows a cooling enhancement with an unmodified stomatal opening, likely in response to intense irrigation. In comparison, the cooling potential of vegetation in high-latitude humid cities does not show significant changes. These responses have implications for future urban vegetation management strategies and urban planning. Introduction Heatwaves have been observed to increase both in frequency and severity due to climate change (Skinner et al 2018, Perkins-Kirkpatrick and Lewis 2020). Nearly one-third of the world's population is currently exposed to deadly heat for at least 20 d per year and an increasing number of people are likely to experience such climatic conditions in the future under all emission scenarios (Mora et al 2017). For people living in urban areas, the threat of this impending heat stress is greater due to localized synergies of heatwaves and urban heat islands (UHIs) in different climates and urban development patterns (Li and Bou-Zeid 2013, Wouters et al 2017, Khan et al 2020, He et al 2021).
The case in metropolises could be even worse because of the positive relationship between the level of heat stress and population density (Luo and Lau 2018, Zander et al 2018). An often-proposed mitigation strategy to counteract extreme urban heat is the increase of vegetation in cities (Gaffin et al 2012, Wong et al 2021). It has been well documented that vegetation reduces the local temperature in multiple ways, such as by providing shade and transpiration, which replaces sensible heat with latent heat (Gunawardena et al 2017, Lai et al 2019, Meili et al 2021). However, how urban vegetation behaves during heatwaves, when its cooling is most needed, is uncertain but crucial for vegetation survival and its ecological benefits in cities. During heatwaves, raised temperature and vapor pressure deficit (VPD) typically increase transpirative demand but induce stomatal closure to avoid water loss, which in turn restrains transpirative cooling (Grossiord et al 2020, Kimm et al 2020). This is even more the case when the heatwave coincides with a drought that largely decreases soil moisture (He et al 2022). However, for urban vegetation, irrigation can potentially keep evapotranspiration elevated and therefore enhance the cooling potential (Santamouris 2019, Gao et al 2020).

Figure 1. Example simulation of (a) the relationships between Tv and Ta for one single leaf receiving different levels of radiation (Qa) and (b) how Tv − Ta changes with VPD under different levels of water stress, where β = 0% indicates the leaf is well-watered and β = 50% and 80% represent the level of stomatal closure when compared to a well-watered leaf. When not specified, Qa = 1000 W m−2, Ta = 25 °C, relative humidity = 65%, wind speed = 1.5 m s−1, atmospheric CO2 concentration = 400 ppm and atmospheric pressure = 1013 hPa. The dashed black line in (a) is the 1:1 standard line.
An opposite behavior was also observed in some species, showing that plants open or retain their stomatal opening if they receive irrigation or can access water with deep roots (Drake et al 2018, Aparecido et al 2020). A glasshouse experiment in Australasia indicates that this is the case for plants with inherently low stomatal conductance (gs) which typically experience droughts, because they are likely to be at a higher risk of heat-related leaf damage and, therefore, need transpiration to reduce their leaf temperature (Marchin et al 2022). In the US, satellite-based evidence also shows a prevalent enhanced cooling capacity of urban trees during heatwaves (Wang et al 2019) but the magnitude of such enhancements varies across cities. Thus, whether plants provide more or less cooling during heatwaves and the magnitude of cooling they provide are uncertain, and they will depend on the environmental conditions (soil moisture, VPD, wind speed, etc) and the extent to which plants adjust their stomatal opening. All these factors vary geographically and therefore require a more extensive investigation across cities in a wide range of climates. Vegetation regulates its leaf canopy temperature (Tv) through transpiration to maintain functional biochemical and physiological processes (Teskey et al 2015, Muller et al 2021). Depending on leaf traits and the leaf-atmosphere coupling, Tv can be higher or lower than air temperature (Ta) (Leuzinger et al 2010, Feng and Zou 2019). The leaf-to-air temperature difference (Tv − Ta) is a key variable describing vegetation cooling potential that affects the sensible heat flux between plant surfaces and the air (Moran et al 1994, Muller et al 2021) and changes with meteorological conditions. Tv − Ta can be computed by concurrently solving leaf photosynthesis, gs, and the energy balance for a given leaf or a canopy.
In the following, we show an example of how climatic and environmental conditions such as available radiation, VPD and water availability can influence the relationship between Tv and Ta using a mechanistic modeling approach (Bonan 2019) (figure 1). In the given example, a well-watered leaf receiving a moderate amount of radiation (Qa = 1000 W m−2), where Qa is the sum of net shortwave radiation and incoming longwave radiation, has a theoretically higher Tv than Ta when Ta is lower than 25 °C, above which Tv becomes lower than Ta and the leaf starts cooling the ambient air (figure 1(a)). However, when Qa increases to 1200 W m−2, the transpirative cooling cannot offset the radiative warming and a higher Tv than Ta is observed across all analyzed Ta values. The opposite occurs at Qa = 800 W m−2. All else being equal, Tv − Ta significantly decreases (more cooling) as VPD increases, but the rate of the decrease can be notably suppressed by water stress which limits gs (figure 1(b)). Although theoretically these mechanisms are well described and can be used to evaluate vegetation's cooling potential (i.e. Tv − Ta in this study), measuring them in real conditions is challenging, and little is known about how heatwaves modify the Tv − Ta of urban plants in different cities and climates, especially considering contrasting stomatal responses. To quantitatively derive the cooling potential and stomatal behavior of urban vegetation, we used 30 m satellite-retrieved land surface temperatures (LSTs) and meteorological variables from the ERA5-Land reanalysis product to compute Tv − Ta during heatwave periods since 2000 across 24 global cities located in climates ranging from humid to semi-arid to arid (figure S1).
The value of Tv − Ta is used to infer gs by inverting a theoretical formulation of the canopy energy budget and to answer the questions: do urban plants close or open their stomata during heatwaves in comparison to normal climatic conditions? Is there a correlation between stomatal behavior and city background climate? Addressing these questions can shed light on the potential of vegetation to provide cooling during heatwaves, when it is likely most needed, and concurrently guide urban greening strategies. Data and methods Heatwave days in this study are defined as five or more consecutive days with a maximum Ta above its 90th percentile in the city during summer (from June to early September) for the period 2000-2020 (figure S2). Normal summer days are defined as the remaining summer days during this period. Days with daily precipitation of more than 2 mm were excluded to remove effects associated with recent rainfall and interception. To analyze the response of urban vegetation to heatwaves, we selected several typical urban green spaces (UGSs) that are present from 2000 to 2020 and are fully covered by vegetation in each city. Most UGSs are fully covered by trees, while UGSs in a few arid cities are covered by grasses mixed with trees. The boundaries of the UGSs are determined using multiple datasets including the 10 m European Space Agency (ESA) WorldCover, the 30 m National Land Cover Database (Yang et al 2018), the long-term global land-cover product with fine classification system at 30 m (GLC_FCS30) (Zhang et al 2021), and high-resolution Google historical images.
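The heatwave-day definition above can be sketched as a simple scan over a daily summer series. This is an illustrative helper only (the function name and the ordering of the rainfall screen relative to run detection are our assumptions, not the paper's code):

```python
def heatwave_days(tmax, precip, pct90, min_run=5, rain_mm=2.0):
    """Return indices of heatwave days: runs of at least `min_run`
    consecutive days whose daily maximum air temperature exceeds the
    summer 90th percentile `pct90`. Days with more than `rain_mm` of
    precipitation are screened out afterwards, mirroring the rainfall
    exclusion described in the text."""
    hot = [t > pct90 for t in tmax]
    days, run = [], []
    for i, h in enumerate(hot + [False]):  # trailing sentinel flushes the last run
        if h:
            run.append(i)
        else:
            if len(run) >= min_run:
                days.extend(run)
            run = []
    return [i for i in days if precip[i] <= rain_mm]
```

For example, with a 35 °C threshold, a 5-day run of 36 °C days qualifies as a heatwave while an isolated 2-day hot spell does not.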
The 30 m Landsat thermal images (from Landsat 5, Landsat 7 and Landsat 8) were used to retrieve the LST (Tv) of these UGSs and the 9 km ERA5-Land hourly reanalysis product (globally available at hourly scale since 1981) (https://cds.climate.copernicus.eu/#!/home) was used to extract the 2 m air temperature (Ta) that is spatially and temporally matched to each of the Tv observations to calculate Tv − Ta. Other meteorological variables used in this study are also extracted from the ERA5-Land hourly product. More details on the selection criteria of UGSs and description of the Tv and Ta data are provided in supporting information S1. Note, while there is a mismatch in the spatial resolution between the two data sets, the spatial heterogeneity of Ta in urban areas is generally much smaller than that of Tv (Eliasson 1990). For example, data from urban weather stations in Shenzhen show that within-city daytime LSTs ranging from 25 °C to 40 °C only result in a Ta spatial variability of 30 °C-31.5 °C (Cao et al 2021). A comparison of the ERA5-Land hourly Ta values with a high-resolution (100 m) simulation product of Ta generated for European cities with the urban climate model UrbClim (Hooyberghs et al 2019) shows that the maximum difference of Ta at 11:00 a.m. local time, which is close to the Landsat overpass times (∼10:30 a.m. local time), on one typical summer day in Paris is less than 1 °C (figure S3). Furthermore, throughout July 2015, there is a high correlation between Ta at different resolutions in vegetated areas (R² = 0.93, figure S4) for a humid city (Paris) and a semi-arid city (Madrid), selected as examples. A single leaf absorbs incoming solar radiation R↓sw and long-wave radiation L↓ from the atmosphere and surrounding surfaces and emits long-wave radiation L↑ as a function of its leaf temperature.
The net absorbed radiation is then partitioned into sensible heat H and latent heat λE according to the energy balance:

R↓sw + L↓ − L↑ = H + λE,   (1)

with H = ρa Cp (Tv − Ta)/(rb + ra) and λE = (ρa Cp/γ) [esat(Tv) − ea]/(rs + rb + ra), where ρa is the air density, Cp is the specific heat capacity of the air, rb is the leaf boundary layer resistance, ra is the aerodynamic resistance, γ ≈ 67 Pa K−1 is the psychrometric constant, rs is the stomatal resistance, esat is the saturation vapor pressure and ea is the actual vapor pressure. Writing L↑ = εσTv^4, equation (1) can be rewritten in conductance form as

Qa − εσTv^4 = ρa Cp gh (Tv − Ta) + (ρa Cp/γ) gw [esat(Tv) − ea],   (2)

where Qa is the total available energy for the leaf, i.e. the sum of absorbed solar radiation and incoming long-wave radiation, gb = 1/rb is the boundary layer conductance, ga = 1/ra is the aerodynamic conductance, and gs = 1/rs is the stomatal conductance. The total conductances for heat (gh) and water vapor (gw) combine these in series:

gh = (1/gb + 1/ga)^−1,   gw = (1/gs + 1/gb + 1/ga)^−1.   (3)

Here gb and ga follow the parameterization used in the Urban Tethys-Chloris (UT&C) model (Meili et al 2020). A more detailed description of the calculation of gb and ga is provided in supporting information S2. Since remotely sensed Tv measures the vegetation canopy temperature rather than the temperature of a single leaf, it is necessary to upscale gb and gs from the leaf to the canopy scale. This is done by simply multiplying the conductances gb and gs by the leaf area index (LAI), which was calculated by an artificial neural network trained on a radiative transfer model (PROSAIL) inversion that predicts LAI from the Landsat surface reflectances (Martínez-Ferrer et al 2022). Then, with the canopy-scale conductances (gb → LAI·gb, gs → LAI·gs), equation (2) becomes

Qa − εσTv^4 = ρa Cp gh (Tv − Ta) + (ρa Cp/γ) gw [esat(Tv) − ea].   (4)

Equation (4) is non-linear in Tv. To obtain an analytical solution, we use a first-order Taylor expansion to approximate Tv around Ta,

εσTv^4 ≈ εσTa^4 + 4εσTa^3 (Tv − Ta),   esat(Tv) ≈ esat(Ta) + Δ (Tv − Ta),   (5)

where Δ is the slope of the saturation vapor pressure curve at Ta, which gives an analytical expression for Tv − Ta:

Tv − Ta = [Qa − εσTa^4 − (ρa Cp/γ) gw VPD] / [ρa Cp gh + 4εσTa^3 + (ρa Cp/γ) gw Δ],   (6)

where VPD = esat(Ta) − ea is the vapor pressure deficit. For each observation of Tv − Ta, we numerically solve equation (6) to obtain the value of gs that satisfies it. All other terms in equation (6) are either measured (Tv, Ta, Qa, VPD, etc) or calculated (gb, ga, etc) based on aerodynamic considerations.
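To make the inversion concrete, the following is a minimal numerical sketch of solving the linearized leaf-to-air temperature expression (equation (6)) for gs. The constants, the Tetens-type saturation vapor pressure, and all function names are illustrative assumptions of ours, not the paper's UT&C implementation:

```python
import math

# Illustrative constants (assumed, not the paper's exact values)
RHO_A = 1.2      # air density, kg m^-3
CP = 1005.0      # specific heat of air, J kg^-1 K^-1
GAMMA = 67.0     # psychrometric constant, Pa K^-1
EPS = 0.97       # leaf emissivity
SIGMA = 5.67e-8  # Stefan-Boltzmann constant, W m^-2 K^-4

def e_sat(t_c):
    """Saturation vapor pressure (Pa), Tetens-type formula."""
    return 610.8 * math.exp(17.27 * t_c / (t_c + 237.3))

def desat_dt(t_c):
    """Slope of the saturation vapor pressure curve (Pa K^-1)."""
    return e_sat(t_c) * 17.27 * 237.3 / (t_c + 237.3) ** 2

def tv_minus_ta(g_s, ta_c, q_a, vpd, g_b, g_a):
    """Linearized leaf-to-air temperature difference (K), equation-(6)-style.
    Conductances in m s^-1, q_a in W m^-2, vpd in Pa, ta_c in deg C."""
    ta_k = ta_c + 273.15
    g_h = 1.0 / (1.0 / g_b + 1.0 / g_a)              # heat: boundary + aerodynamic
    g_w = 1.0 / (1.0 / g_s + 1.0 / g_b + 1.0 / g_a)  # vapor: + stomata in series
    num = q_a - EPS * SIGMA * ta_k ** 4 - RHO_A * CP / GAMMA * g_w * vpd
    den = (RHO_A * CP * g_h + 4 * EPS * SIGMA * ta_k ** 3
           + RHO_A * CP / GAMMA * g_w * desat_dt(ta_c))
    return num / den

def invert_gs(dt_obs, ta_c, q_a, vpd, g_b, g_a, lo=1e-4, hi=2.0, tol=1e-8):
    """Bisection on g_s so the modeled Tv - Ta matches an observed value.
    Returns None if the observation is outside the reproducible range."""
    f = lambda g: tv_minus_ta(g, ta_c, q_a, vpd, g_b, g_a) - dt_obs
    if f(lo) * f(hi) > 0:
        return None
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if f(lo) * f(mid) <= 0:
            hi = mid
        else:
            lo = mid
        if hi - lo < tol:
            break
    return 0.5 * (lo + hi)
```

Bisection is well suited here because the modeled Tv − Ta decreases monotonically as gs increases (more transpiration, more cooling), so equation (6) has at most one root in gs for a given observation.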
In this way, we approximate gs during heatwaves and normal summer days over UGSs in the analyzed cities. All variables/parameters and their units used in this study are listed in table S1. Results We first compared the changes in meteorological variables during heatwaves and normal climatic conditions in all 24 selected cities (figure 2). By definition, Ta is statistically larger during heatwaves (figure 2(b)). We also found that VPD significantly (p < 0.05, t-test) increased on heatwave days compared to normal summer days in most cities (figure 2(d)). Additionally, half of the cities also showed significantly stronger solar radiation during heatwaves (figure 2(a)). Ta can exceed 40 °C for some arid cities such as Baghdad, Dubai and Phoenix with VPD exceeding 6 kPa. Seven cities showed a significantly decreased wind speed during heatwaves (figure 2(c)). Since heatwave durations in most cities are relatively short (less than 10 d, figure S5), UGSs only showed a slightly decreased greenness (indicated by normalized difference vegetation index, NDVI) and the decreases are only significant in Houston and Denver (figure 2(e)). A few cities showed a significant increase of NDVI (e.g. Wuhan and Shanghai), which might be related to modified radiative conditions. Despite the relatively consistent changes in meteorological variables, we observed distinct patterns of the leaf-to-air temperature difference Tv − Ta (figure 2(f)), which in most conditions is a positive number, i.e. vegetation is warmer than the surrounding air. For several high-latitude humid cities including Baltimore, Moscow, London, Berlin, Oslo and Chicago, Tv − Ta showed almost no change between heatwave and normal summer days, while for Mediterranean cities such as Madrid, Rome, Barcelona and Lisbon, Tv − Ta increased (although not always statistically significantly) on average by 0.8 °C-2.6 °C, when compared to Tv − Ta on normal summer days.
Figure 2. Comparison of meteorological variables, NDVI and Tv − Ta between normal summer days (green box) and heatwave days (yellow box). The asterisk indicates that a significant (p < 0.05, t-test) difference exists between the mean values of the two groups. According to the Köppen-Geiger climate classification (figure S1), we group these cities into high-latitude humid cities (in purple), predominantly midlatitude humid cities (in dark green), Mediterranean cities (in light green), arid cities (in tan), and cities in other climates (in black).

The increasing Tv − Ta indicates that the cooling potential of plants in these cities decreased during heatwaves as sensible heat increases driven by the positive Tv − Ta. Some midlatitude humid cities such as Wuhan, Paris and Shanghai also showed an increased Tv − Ta. In contrast, the average Tv − Ta decreased by 1.4 °C-4.8 °C in six arid cities (Baghdad, Dubai, Phoenix, Las Vegas, Abu Dhabi and Denver). Within a given city, the response of urban vegetation to heatwaves is spatially consistent across UGSs (figure 3). For example, the Tv − Ta of UGSs in Paris universally increased during heatwaves, however with different magnitudes in various UGSs (figures 3(a)-(c)). Small UGSs showed a higher increase in Tv − Ta compared to larger UGSs and the Tv of small UGSs can be up to 10 °C higher than Ta on heatwave days in Paris (figure 3(c)). The effect of heatwaves was more pronounced in Madrid, where Tv − Ta increased up to 20 °C for most grasslands, which are likely wilted or highly water-stressed and almost completely lose their cooling function (figures 3(d)-(f)). However, tree-covered UGSs in downtown Madrid were less affected by the heatwaves compared to the grasslands. This discrepancy is likely related to different rooting depths and also irrigation practices.
In contrast to Paris and Madrid, all UGSs in Phoenix, which are mostly grasslands and irrigated, showed a consistently decreased Tv − Ta during heatwaves (figures 3(g)-(i)). To explain the different responses of Tv − Ta to heatwaves across cities, we numerically computed the gs for each observation of Tv − Ta (figure 4). For the same value of gs, Tv − Ta is commonly lower during heatwaves than during normal summer days, indicating a higher cooling potential of vegetation in hotter and higher VPD conditions if plants are not water stressed, as is the case where plants are intensively irrigated in arid cities. In this case, results suggest that vegetation keeps its stomata relatively open during heatwaves (Baghdad, Abu Dhabi, Dubai, Las Vegas, Phoenix and Denver). However, for Mediterranean cities experiencing a hot and dry summer such as Lisbon, Rome, Barcelona and Madrid, Tv on normal summer days was prevalently higher than Ta, resulting in a positive Tv − Ta larger than 3 °C. This indicates that plants in these cities are already under water stress on normal summer days (figure 1(b)). Their gs is generally between 0.2 and 0.3 mol H2O m−2 s−1 (figure 5(a)). During heatwaves, the plants further close their stomata and therefore show a decreased gs of around 0.15 mol H2O m−2 s−1 (figure 5(b)) and a raised Tv − Ta, which approaches 6 °C in some cities (figure 4). For humid cities, plants also commonly closed their stomata and show a decreased gs (although not always statistically significantly) during heatwave days (figure 4). However, Tv − Ta of high-latitude cities (London, Oslo, Berlin, Moscow, Chicago and Baltimore) showed no significant difference compared to normal summer days even when gs significantly decreased (figures 2(f) and 4). Most midlatitude humid cities including Paris, Wuhan, Shanghai and Houston showed increases of Tv − Ta ranging from 1.1 °C to 2.7 °C.
We also found that Tv − Ta is positively related to heatwave intensity (defined as the degree-hours, i.e. the sum of hours with Ta above the 90th percentile of daily maximum Ta in the city during summer, weighted by the departure of hourly Ta from the 90th percentile) in a few cities (e.g. Moscow, Chicago, Wuhan, Houston, Atlanta, Baghdad, Las Vegas, Phoenix; figure S6) and thus an increase in Tv − Ta is observed with increasing heatwave intensity. Meanwhile, gs mostly follows the change of Tv − Ta and shows a negative relationship with heatwave intensity in some cities, leading to larger stomatal closure with higher intensity (figure S7).

Figure 4. Dots indicate the observation of Tv − Ta and the corresponding computed gs (green ones for normal summer days and purple ones for heatwave days). Tv − Ta and gs of the two groups are statistically compared in the form of the box-and-whisker plot and the asterisk indicates that a significant (p < 0.05, t-test) difference exists between the mean gs of the two groups. To avoid overlap over individual dots, these boxes were moved down by nine units (°C) and readers can check the secondary y-axis for specific values. According to the Köppen-Geiger climate classification (figure S1), we group these cities into high-latitude humid cities (in purple), predominantly midlatitude humid cities (in dark green), Mediterranean cities (in light green), arid cities (in tan), and cities in other climates (in black). The unit of gs is converted from m s−1 to mol H2O m−2 s−1 to be comparable with plant physiological literature (supporting information S3).

Discussion and conclusion We combined 30 m resolution satellite observations with a theoretical leaf energy balance model to analyze how urban vegetation responds to heatwave conditions across 24 global metropolises.
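The degree-hour intensity metric defined above reduces to a one-line computation. A sketch (the helper name is ours; inputs are an hourly Ta series and the 90th percentile of summer daily maximum Ta):

```python
def heatwave_intensity(ta_hourly, pct90):
    """Degree-hours: sum over hours with Ta above the 90th-percentile
    threshold, each weighted by the departure of hourly Ta from it."""
    return sum(t - pct90 for t in ta_hourly if t > pct90)

# Four hours at 34, 36, 38, 35 degC against a 35 degC threshold:
# only the 36 and 38 degC hours count, giving (1 + 3) = 4 degC-hours.
```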
Unavoidable uncertainties in LST from remote sensing observations (Cook 2014, Laraby 2017) and in Ta from reanalysis (Hersbach et al 2020, Araújo et al 2022) are filtered by using a large number of time steps for each city and by fitting a theoretical Tv − Ta vs. gs model in each time step. Although Ta in cities experiencing strong UHIs is likely underestimated in the used dataset, the canopy Ta UHI is on average much smaller than the surface temperature UHI (Venter et al 2021), especially during daytime, and it is unlikely to generate any major effect on the results. To quantify the potential impact of variations or uncertainty in Ta, we also tested the sensitivity of the gs results to the variability in Ta by adding an error term to Ta for each observation. Even with a large error term of ε ∼ N(3 °C, 1 °C), which assumes a large uncertainty in Ta, results show that the Tv − Ta and gs responses to heatwaves (i.e. how Tv − Ta and gs change during heatwaves) in each city remain the same (figures 4 and S8). Hence, the changes of Tv − Ta and gs are robust at the city scale even without accounting for the within-city spatial heterogeneity of Ta. Our results highlight that the response of urban vegetation has a geographical divergence which is largely related to background climate forcing and to vegetation management, i.e. irrigation. In high-latitude humid cities, there are no significant changes in the leaf-to-air temperature difference (Tv − Ta) (figures 2(f) and 4), which indicates that the vegetation cooling potential of these cities was not affected by the heatwave conditions. This is because of (a) similar gs during normal summer days and heatwave days in some cities such as London (figure 4) and (b) relatively higher gs (figure 5(a)) and overall less sensitivity of Tv − Ta to higher gs than to lower gs (see the curvature change of the theoretical relationship between Tv − Ta and gs in figure 4).
For instance, a significantly decreased gs does not change Tv − Ta in Moscow, as lower gs is likely compensated by changes in meteorological conditions (figures 2 and 4). However, in midlatitude humid cities such as Wuhan, Shanghai and Houston, we found significant stomatal closure which led to increased Tv − Ta during heatwaves. Due to generally abundant summer rainfall in these cities, urban vegetation is mainly rainfed and expected to still provide substantial cooling even during heatwaves or extended dry periods when cooling is likely most needed. However, our results do not support this assumption and suggest that urban vegetation in such cities, with the current management strategies, provides less rather than more cooling under extreme heat conditions. This effect is even exacerbated in vegetation with lower canopy heights, which showed higher Tv − Ta than that of vegetation with higher canopy heights for similar gs (see an example in Houston, figure S9). This suggests that increased irrigation is likely needed to fulfill the water demand of urban vegetation and to maintain its cooling potential during heatwaves. Vegetation in the Mediterranean is widely known for experiencing and being adapted to low water availability (Galmés et al 2007, Rana et al 2020). We found that heatwaves exacerbated plant water stress, as stomatal closure is more pronounced than during normal conditions and such closure limits transpiration, which is already at low levels (figure 5). During heatwaves, as plants close stomata further, we observe a much higher Tv than Ta, indicating that Mediterranean urban plants are actually unlikely to considerably cool urban air. However, plant Tv could still be lower than that of impervious surfaces, and their exact cooling potential during heatwaves requires further investigation.
UGSs in arid cities, which are mostly covered by grasses, were found to retain stomatal opening during heatwaves at a similar level to that during normal days. Given the higher atmospheric water demand, Tv − Ta for a similar gs shows lower values during heatwaves than on normal summer days in arid climates (figure 4). This contrasts with the reduced green roof cooling potential during heatwave/drought conditions found in other studies (Speak et al 2013, Zhang et al 2020). However, by keeping stomata relatively open, these plants have a considerably enhanced cooling potential, which, given the extremely high VPD, is counterintuitive and not captured by most stomatal conductance models (Leuning 1995, Damour et al 2010, Medlyn et al 2011, Meili et al 2021) that would predict stomatal closure at such VPD levels independently of water availability (Yang et al 2019, Meili et al 2021). This result sheds light on the importance of watering vegetation during extreme weather conditions (Zhang et al 2020). Regardless of heatwave conditions, Tv − Ta generally follows the gradient of vegetation NDVI (figures 2(e) and (f)), i.e. dense and healthy UGSs lead to higher cooling potential. However, all our results combined show the critical role of stomatal behavior and background climate in the responses of the cooling potential to heatwaves (figure 4), especially given the remarkable difference of such responses, for example, in Phoenix and Barcelona, both having a decreased NDVI (figures 2(e) and (f)). A large decrease of gs and stomatal closure during heatwaves can directly suppress plant transpiration and increase leaf temperature (Tv), which might prevent any transpirative cooling and puts plants at risk of lethal overheating if they fail to keep Tv below the leaf critical temperature, generally 46 °C-49 °C (Hüve et al 2011, O'Sullivan et al 2017).
Nevertheless, even in the dry Mediterranean cities experiencing severe water stress, where plants close their stomata, these critical temperatures are likely not reached even during heatwaves (figure 2(b)), or the herbaceous vegetation is already wilted. Conversely, when plants have enough water available (in humid and irrigated cities), they keep stomata open even when atmospheric water demand is significant. Our analysis has unavoidable limitations in terms of data and methodological choices. The rare occurrence of heatwaves and cloud contamination in some cities results in a limited number of observations during heatwave days and induces uncertainty in the estimation of cooling potential and g s . The use of the first-order Taylor expansion can also cause some bias in estimating g s , especially when leaf temperature strongly deviates from air temperature. Beyond this, even though most UGSs chosen in this study are fully covered by trees, in a few arid cities (i.e. Abu Dhabi, Dubai, Las Vegas and Phoenix) UGSs are mainly covered by grasses and contain only a few trees, in which case our modeled g s could have larger uncertainties. Irrigation of urban grasslands can strongly change soil moisture and can lead to evaporation from water intercepted on the grass leaves or from the soil underneath, which can make T v − T a not representative of leaf temperature alone. Compared to the low-frequency Landsat observations used here, future studies could focus on daily observations to quantify the change of stomatal behavior throughout a whole heatwave and any associated drought period. In summary, we explore, in one of the first analyses of its kind, how T v − T a and g s , which are good proxies for the cooling potential of urban vegetation, respond to current heatwaves in different climates and cities. The results highlight the crucial role of human intervention through irrigation in all those climates where vegetation might undergo partial or substantial water stress.
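The first-order Taylor expansion mentioned in the limitations is typically used to linearize the saturation vapour pressure curve around air temperature when inverting a leaf energy balance for g s . The sketch below shows such an inversion in a Penman–Monteith-style form; all constants, input values and the function itself are illustrative assumptions, not the retrieval actually used in the study:

```python
import math

def esat(T):
    """Saturation vapour pressure (kPa) at temperature T (degC), Tetens formula."""
    return 0.6108 * math.exp(17.27 * T / (T + 237.3))

def invert_gs(Tv, Ta, Rn, rh, ra=50.0):
    """Invert a linearized leaf energy balance for surface conductance (m s-1).

    First-order Taylor expansion: esat(Tv) ~ esat(Ta) + s * (Tv - Ta), so the
    leaf-to-air vapour deficit needs only air-temperature quantities.
    Tv, Ta in degC; Rn net radiation (W m-2); rh relative humidity (0-1);
    ra aerodynamic resistance (s m-1). All parameter values are illustrative.
    """
    rho_cp = 1.2 * 1010.0                      # air density x heat capacity (J m-3 K-1)
    gamma = 0.066                              # psychrometric constant (kPa K-1)
    s = esat(Ta + 0.5) - esat(Ta - 0.5)        # slope of esat at Ta (kPa K-1)
    H = rho_cp * (Tv - Ta) / ra                # sensible heat flux (W m-2)
    lam_E = Rn - H                             # latent heat flux from energy balance
    vpd = esat(Ta) * (1 - rh) + s * (Tv - Ta)  # linearized leaf-to-air deficit (kPa)
    rs = rho_cp * vpd / (gamma * lam_E) - ra   # surface resistance (s m-1)
    return 1.0 / rs

# A warmer canopy (larger Tv - Ta) implies a lower retrieved conductance:
gs_mild = invert_gs(Tv=33.0, Ta=30.0, Rn=400.0, rh=0.4)
gs_hot = invert_gs(Tv=36.0, Ta=30.0, Rn=400.0, rh=0.4)
```

The bias noted in the text arises because the linearization of esat is only accurate near Ta; when Tv deviates strongly from Ta, the true curvature of esat makes the retrieved g s systematically off.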
Mediterranean and midlatitude humid cities are shown to experience a significantly suppressed cooling potential, as plants are mostly rainfed and largely decrease g s exactly when cooling (transpiration) is most needed, i.e. during heatwaves. However, cooling enhancement is observed in arid cities because irrigation largely shifts stomatal behavior, leading to g s equal to or higher than on normal summer days. As a result, while urban greening is a desirable strategy to achieve multiple ecosystem services (Haase et al 2014, Richards et al 2022), its capability to reduce temperature during heatwaves might not match expectations derived from normal summer days and cannot be considered separately from strategies to supply the necessary water requirements. This is becoming even more relevant in a rapidly changing climate.

Data availability statement

All data that support the findings of this study are included within the article (and any supplementary files).
Nitrogen restricts future sub-arctic treeline advance in an individual-based dynamic vegetation model

Abstract. Arctic environmental change induces shifts in high-latitude plant community composition and stature with implications for Arctic carbon cycling and energy exchange. Two major components of change in high-latitude ecosystems are the advancement of trees into tundra and the increased abundance and size of shrubs. How future changes in key climatic and environmental drivers will affect distributions of major ecosystem types is an active area of research. Dynamic vegetation models (DVMs) offer a way to investigate multiple and interacting drivers of vegetation distribution and ecosystem function. We employed the LPJ-GUESS tree-individual-based DVM over the Torneträsk area, a subarctic landscape in northern Sweden. Using a highly resolved climate dataset to downscale CMIP5 climate data from three global climate models and two 21st-century future scenarios (RCP2.6 and RCP8.5), we investigated future impacts of climate change on these ecosystems. We also performed model experiments in which we factorially varied drivers (climate, nitrogen deposition and [CO2]) to disentangle the effects of each on ecosystem properties and functions. Our model predicted that treelines could advance by between 45 and 195 elevational metres by 2100, depending on the scenario. Temperature was a strong driver of vegetation change, with nitrogen availability identified as an important modulator of treeline advance. While increased CO2 fertilisation drove productivity increases, it did not result in range shifts of trees. Treeline advance was realistically simulated without any temperature dependence on growth, but biomass was overestimated. Our finding that nitrogen cycling could modulate treeline advance underlines the importance of representing plant–soil interactions in models to project future Arctic vegetation change.
Introduction

In recent decades, the Arctic has been observed becoming greener (Epstein et al., 2012; Bhatt et al., 2010).
Causes include an increased growth and abundance of shrubs (Myers-Smith et al., 2011; Elmendorf et al., 2012; Forbes et al., 2010), increased vegetation stature associated with a longer growing season, and poleward advance of the Arctic treeline (Bjorkman et al., 2018). Shrubs protruding through the snow and treeline advance alter surface albedo and energy exchange, with potential feedback to the climate system (Chapin et al., 2005; Sturm, 2005; Serreze and Barry, 2011; Zhang et al., 2013, 2018). Warming and associated changes in high-latitude ecosystems have implications for carbon cycling through increased plant productivity, species shifts (Chapin et al., 2005; Zhang et al., 2014) and increased soil organic matter (SOM) decomposition with subsequent loss of carbon to the atmosphere. Studies of the Arctic carbon balance have shown that the region has been a weak sink in the past (McGuire et al., 2009, 2012; Bruhwiler et al., 2021; Virkkala et al., 2021), although uncertainty is substantial, and it is difficult to determine accurately the strength of this sink. How climate and environmental changes will affect the relative balance between the carbon uptake, i.e. photosynthesis, and release processes, i.e. autotrophic and heterotrophic respiration, will determine whether the Arctic will be a source or a sink of carbon in the future. Forest-tundra ecotones constitute vast transition zones where abrupt changes in ecosystem functioning occur. While a generally accepted theory of what drives treeline advance is currently lacking, several alternative explanations exist. Firstly, direct effects of rising temperatures have been thoroughly discussed (e.g. Rees et al., 2020; Hofgaard et al., 2019; Körner, 2015; Chapin, 1983). On the global scale, treelines have been found to correlate well with a 6–7 °C mean growing season ground temperature (Körner and Paulsen, 2004) and could thus be expected to follow isotherm movement as temperatures rise.
A global study of alpine treeline advance in response to warming since 1900 showed that 52 % of treelines had advanced while nearly half were stationary (47 %), with only occasional instances of retreat (1 %) (Harsch et al., 2009). Similar patterns have been observed on the circumarctic scale, although latitudinal treelines might be expected to shift more slowly than elevational treelines due to dispersal constraints (Rees et al., 2020). As trees close to the treeline often show ample storage of non-structural carbohydrates (Hoch and Körner, 2012), it has been suggested that a minimum temperature requirement for wood formation, rather than productivity, might constrain treeline position (Körner, 2003, 2015; Körner et al., 2016). Secondly, it has been hypothesised that indirect effects of warming might be as important as or more important than direct effects (Sullivan et al., 2015; Chapin, 1983). For example, rising air and soil temperatures might induce increased mineralisation and plant availability of nitrogen in the litter layer and soil (Chapin, 1983). Increased nitrogen uptake could in turn enhance plant productivity and growth (Dusenge et al., 2019). Increased nitrogen uptake as a consequence of increased soil temperatures or nitrogen fertilisation has been shown to increase winter survival among seedlings of mountain birch (Betula pubescens ssp. tortuosa), the main treeline species in Scandinavia (Weih and Karlsson, 1999; Karlsson and Weih, 1996). Thirdly, experiments exposing plants and ecosystems to elevated CO2 often show increased plant productivity and biomass increase, especially in trees (Ainsworth and Long, 2005). Terrestrial biosphere models generally emulate the same response (Hickler et al., 2008; Piao et al., 2013). Although difficult to measure in field experiments, the treeline position seems unresponsive to increased [CO2] alone (Holtmeier and Broll, 2007).
Whether treelines are responsive to increased productivity through CO2 fertilisation might yield insights into whether treelines are limited by their productivity, i.e. photosynthesis, versus their ability to utilise assimilated carbon, i.e. wood formation. However, the extent to which increased [CO2] drives long-term tree and shrub encroachment and growth remains poorly studied. For treeline migration to occur, not only the growth and increased stature of established trees but also the recruitment and survival of new individuals beyond the existing treeline are important (Holtmeier and Broll, 2007). Seedlings of treeline species are sometimes observed above the treeline, especially in sheltered microhabitats (Hofgaard et al., 2009; Sundqvist et al., 2008). However, these individuals often display stunted growth and can be decades old, although age declines with elevation (Hofgaard et al., 2009). The suitability of the tundra environment for trees to establish and grow taller will thus be an important factor for the rate of treeline advance (Cairns and Moen, 2004). Interspecific competition and herbivory are known to be important modulators of range shifts of trees (Cairns and Moen, 2004; Van Bogaert et al., 2011; Grau et al., 2012). For instance, the presence of shrubs has been shown to limit tree seedling growth (Weih and Karlsson, 1999; Grau et al., 2012), likely as a consequence of competition with tree seedlings for nitrogen. Comparisons of a model incorporating only bioclimatic limits to species distributions with more ecologically complex models have also suggested interspecific plant competition to be important for range shifts of trees (Epstein et al., 2007; Scherrer et al., 2020). Thus, as a fourth factor, shrub-tree interactions could be important when predicting range shifts such as changing treeline positions under future climates.
Rising temperatures have been suggested as the dominant driver of increased shrub growth, especially where soil moisture is not limiting (Myers-Smith et al., 2015, 2018). Furthermore, a changed precipitation regime, especially increased winter snowfall, might promote establishment of trees and shrubs through the insulating effects of snow cover, with subsequent increases in seedling winter survival (Hallinger et al., 2010). A narrow focus on a single variable, e.g. summer temperature, or a few driving variables may lead to overestimation of treeline advance in future projections (Hofgaard et al., 2019). Dynamic vegetation models (DVMs) offer a way to investigate the influence of multiple and interacting drivers on vegetation and ecosystem processes. Model predictions may be compared with observations of local treelines and ecotones to validate assumptions embedded in the models and to interpret causality in observed dynamics and patterns. DVMs also offer a way to extrapolate observable local phenomena to broader scales, such as that of circumarctic shifts in the forest-tundra ecotone and the responsible drivers. Here, we examine a sub-arctic forest-tundra ecotone that has undergone spatial shifts over recent decades (Callaghan et al., 2013), previously attributed to climate warming. Adopting an individual-based DVM incorporating a detailed description of vegetation composition and stature and nitrogen cycle dynamics, we apply the model at a high spatial resolution to compare observed and predicted recent treeline dynamics, and we project future vegetation change and its implications for carbon balance and biogeophysical vegetation–atmosphere feedbacks. In addition, we conduct three model experiments to separate and interpret the impact of driving factors (climate, nitrogen deposition, [CO2]) on vegetation in a forest-tundra ecotone in Sweden's sub-arctic north.
Study site

Abisko Scientific Research Station (ANS; 68°21′ N, 18°49′ E), situated in the mountain-fringed Abisko Valley near Lake Torneträsk in northern Sweden, has a long record of ecological and climate research. The climate record dates back to 1913 and is still ongoing. The area is situated in a rain shadow and is thus relatively dry despite its proximity to the ocean (Callaghan et al., 2013). The forests in the lower parts of the valley consist mostly of mountain birch Betula pubescens ssp. czerepanovii, which is also dominant at the treeline. Treeline elevation in the Abisko Valley ranges between 600 and 800 m above sea level (a.s.l.) (Callaghan et al., 2013). Other tree types in lower parts of the valley are Sorbus aucuparia and Populus tremula, along with small populations of Pinus sylvestris, which are assumed to be refugia species from warmer periods during the Holocene (Berglund et al., 1996). Soils consist of glaciofluvial till and sediments. An extensive summary of previous studies and the environment around Lake Torneträsk can be found in Callaghan et al. (2013). Our study domain covers an area of approximately 85 km2 and extends from Mount Nuolja in the west to the mountain Nissončorru in the east (see Fig. 2). The northern part of our study domain is bounded by Lake Torneträsk. The mean annual temperature was −0.5 ± 0.9 °C for the 30-year period 1971–2000 (Fig. 1, Table 2), with January being the coldest month (−10.2 ± 3.5 °C) and July the warmest (11.3 ± 1.4 °C). Mean annual precipitation was 323 ± 66 mm for the same reference period. This reference period was chosen as it is the last one in the dataset by Yang et al. (2011).

Ecosystem model

We used the LPJ-GUESS DVM as the main tool for our study (Smith et al., 2001; Miller and Smith, 2012).
LPJ-GUESS is one of the most ecologically detailed models of its class, suitable for regional- and global-scale studies of climate impacts on vegetation, employing an individual- and patch-based representation of vegetation composition and structure. It simulates the dynamics of plant populations and ecosystem carbon, nitrogen and water exchanges in response to external climate forcing. Biogeophysical processes (e.g. soil hydrology and evapotranspiration) and plant physiological processes (e.g. photosynthesis, respiration, carbon allocation) are interlinked and represented mechanistically. Canopy fluxes of carbon dioxide and water vapour are calculated by a coupled photosynthesis and stomatal conductance scheme based on the approach of BIOME3 (Haxeltine and Prentice, 1996). Photosynthesis is a function of air temperature, incoming short-wave or photosynthetically active radiation, [CO2], and water and nutrient availability. Autotrophic respiration has three components: maintenance, growth and leaf respiration. Tissue maintenance respiration is dependent on soil and air temperature for root and aboveground respiration, respectively, along with a dependency on tissue C : N stoichiometry. All assimilated carbon that is not consumed by autotrophic respiration, less a 10 % flux to reproductive organs, is allocated to leaves; fine roots; and, for woody plant functional types (PFTs), sapwood, following a set of prescribed allometric relationships for each PFT, resulting in biomass, height and diameter growth (Sitch et al., 2003).

[Figure 1: Historic (1971–2000) and projected (2071–2100) temperature (left) and precipitation (right) variability at the Abisko study area. The shaded areas (temperature) and narrow bars (precipitation) mark ±1 standard deviation uncertainty in the three CMIP5 multi-model means for RCP2.6 and RCP8.5.]
Consequently, an individual in the model is assumed to be carbon limited when autotrophic respiration equals or exceeds the amount of carbon assimilated by photosynthesis. A chronically negative carbon balance at the individual level eventually results in plant death. The model assumes the presence of seeds in all grid cells, meaning that simulated PFTs can establish once the climate is favourable, as defined by each PFT's predefined bioclimatic limits. The competition between neighbouring plant individuals for light, water and nutrients, affecting establishment, growth and mortality, is modelled explicitly. Competition for light and nutrients is assumed to be asymmetric; i.e. individuals with taller canopies or larger root systems will be advantaged in the capture of resources under scarcity. Water uptake is divided equally among individuals according to the water availability and the fraction of each PFT's roots occupying each soil layer. Individuals of the same age co-occurring in a local neighbourhood or patch and belonging to the same PFT (see below) are assumed to be identical to each other. Decomposition of plant litter and cycling of soil nutrients are represented by a CENTURY-based soil biogeochemistry module, applied at the patch scale. Biological N fixation is represented by an empirical relationship between annual evapotranspiration and nitrogen fixation (Cleveland et al., 1999). LPJ-GUESS does not currently incorporate PFT-specific nitrogen fixation, which for instance may be associated with species that form root nodules, such as Alnus spp. Additional inputs of nitrogen to the system occur through nitrogen deposition or fertilisation. Nitrogen is lost from the system through leaching, gaseous emissions from soils or wildfires. For this study we employed LPJ-GUESS version 4.0, enhanced with arctic-specific features (Miller and Smith, 2012; Wania et al., 2009).
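The annual carbon bookkeeping described above (assimilation minus autotrophic respiration, a 10 % flux to reproductive organs, and death after a chronically negative carbon balance) can be sketched as follows. This is a schematic, not LPJ-GUESS code; the three-year death threshold and the function name are illustrative assumptions:

```python
def annual_carbon_update(gpp, resp_auto, stress_years, max_stress_years=3):
    """One year of schematic individual-level carbon bookkeeping.

    gpp, resp_auto: annual assimilation and autotrophic respiration (gC m-2).
    stress_years: consecutive years with a non-positive carbon balance so far.
    Returns (carbon_for_growth, updated_stress_years, alive). The
    max_stress_years threshold for death is an illustrative assumption.
    """
    npp = gpp - resp_auto            # assimilation minus autotrophic respiration
    if npp <= 0:                     # carbon-limited year: no growth
        stress_years += 1
        alive = stress_years < max_stress_years  # chronic deficit -> death
        return 0.0, stress_years, alive
    growth = 0.9 * npp               # 10 % flux to reproductive organs
    return growth, 0, True           # a good year resets the stress counter
```

The carbon returned for growth would then be split among leaves, fine roots and sapwood according to the PFT's allometric relationships.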
The combined model incorporates an updated set of arctic PFTs (described below), improved soil physics and a multi-layered dynamic snow scheme, allowing for simulation of permafrost and frozen ground. The soil scheme includes 15 equally distributed soil layers constituting a total soil depth of 1.5 m. Vegetation in the model is represented by cohorts of individuals interacting in local communities or patches and belonging to a number of PFTs that are distinguished by growth form (tree, shrub, herbaceous), life history strategies (shade tolerant or intolerant) and phenology class (evergreen or summergreen). Herbaceous PFTs are represented as a dynamic, aggregate cover of ground layer vegetation in each patch. In this study 11 PFTs were implemented (see Table S2.1 in the Supplement for a description of included PFTs; see Table S2.2 in the Supplement for parameter values associated with each PFT). Out of these, three were tree PFTs: boreal needle-leaved evergreen tree (BNE), boreal shade-intolerant evergreen tree (BINE) and boreal shade-intolerant broad-leaved summergreen tree (IBS). Corresponding tree species present in the Torneträsk region include Picea abies (BNE), Pinus sylvestris (BINE), Betula pubescens ssp. czerepanovii, Populus tremula and Sorbus aucuparia (IBS). Following Wolf et al. (2008), shrub PFTs with different statures were implemented as follows: tall summergreen shrub (HSS) and tall evergreen shrub (HSE), corresponding to Salix spp. (HSS) and Juniperus communis (HSE), and low summergreen shrub (LSS) and low evergreen shrub (LSE) such as Betula nana (LSS) and Empetrum nigrum (LSE). We also included prostrate shrubs and two herbaceous PFTs. Grid cell vegetation and biogeophysical properties are calculated by averaging over a number of replicate patches, each nominally 0.1 ha in area and subject to the same climate forcing. 
No assumptions are made about how the patches are distributed within a grid cell; they are a statistical sample of equally possible disturbance/demographic histories across the landscape of a grid cell. Within each patch, the establishment, growth and mortality of tree or shrub cohorts comprising individuals of equal age (and dynamic size/form) are modelled annually (Smith et al., 2001). Establishment and mortality have both an abiotic (bioclimatic) and a biotic (competition-mediated) component. Vegetation dynamics, i.e. changes in the distribution and abundance of different PFTs in grid cells over time, are an emergent outcome of the competition for resources between PFT cohorts at the patch level within an overall climate envelope determined by bioclimatic limits for establishment and survival. The bioclimatic envelope is a hard limit to vegetation distribution, intended to represent the physiological niche of a PFT. Furthermore, the climate envelope is a proxy not only for physiological processes such as meristem activity that may set species ranges but also for climatic stressors such as tissue freezing. The parameters are intended to capture broader climatic properties of each grid cell. A detailed description of each bioclimatic parameter and its respective values can be found in Table S2.2 in the Supplement. Disturbance is accounted for by the occasional removal of all vegetation within a patch, with an annual probability corresponding to an expected disturbance return interval of 300 years, representing random events such as storms, avalanches, insect outbreaks and windthrow. The study used three replicate patches within each 50 × 50 m grid cell. We judged this number sufficient to obtain a stable representation of vegetation dynamics given the relative area of each grid cell and replicate patches (0.1 ha).
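The patch-level disturbance regime described above, stand-replacing events with an expected return interval of 300 years, amounts to an independent Bernoulli draw per patch and year. A minimal sketch (function name and the fixed seed are illustrative):

```python
import random

def simulate_disturbances(n_patches, n_years, return_interval=300.0, seed=42):
    """Count stand-replacing disturbances per patch.

    Each patch, in each year, is cleared with probability 1/return_interval,
    so over `return_interval` years each patch expects about one event.
    """
    rng = random.Random(seed)
    p = 1.0 / return_interval
    counts = [0] * n_patches
    for patch in range(n_patches):
        for _ in range(n_years):
            if rng.random() < p:
                counts[patch] += 1
    return counts
```

Averaging model output over many such independently disturbed patches is what yields a stable grid-cell-level representation of vegetation structure.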
For summergreen PFTs we slightly modified the assumption of a fixed growing degree day (GDD) requirement for establishment, using thawing degree days (TDDs, i.e. degree days with a 0 °C basis; see Table S2.2) to capture the thermal sum requirement for the establishment of new individuals.

Forcing data

The input variables used as forcing in LPJ-GUESS simulations are monthly 2 m air temperature (°C), precipitation (mm) and incoming short-wave radiation (W m−2) as well as annual atmospheric [CO2] (ppm), soil texture (mineral fractions only) and nitrogen deposition (kgN per hectare per month). Monthly air temperature and short-wave radiation are interpolated to a daily time step, while precipitation is randomly distributed over the month using monthly wet days.

Historic period

A highly resolved (50 × 50 m) temperature and radiation dataset using field measurements and a digital elevation model (DEM) by Yang et al. (2011) provided climate input to the model simulations for the historic period. The field measurements were conducted in the form of transects that captured mesoscale climatic variations, i.e. lapse rates. In addition, the transects were placed to capture microclimatic effects of the nearby Lake Torneträsk and aspect effects on radiation influx. The temperature in the lower parts of the Abisko Valley in the resulting dataset was influenced by the lake, with milder winters and less yearly variability. At higher elevation, the temperature was more variable over the year and the local-scale variations were more dependent on the different solar angles between seasons and on aspect (Yang et al., 2011, 2012) (see Fig. S1.1 in the Supplement). For a full description of how this dataset was constructed we refer to Yang et al. (2011, 2012). Monthly precipitation input was obtained from the Abisko Scientific Research Station weather records.
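The thawing-degree-day criterion introduced above can be written directly: TDD is the annual sum of daily mean temperatures above a 0 °C basis, compared against a PFT's thermal sum requirement. The 500-degree-day threshold below is an arbitrary illustrative number, not a parameter from Table S2.2:

```python
def thawing_degree_days(daily_temps):
    """Thawing degree days: sum of daily mean temperatures above 0 degC."""
    return sum(t for t in daily_temps if t > 0)

def can_establish(daily_temps, tdd_requirement=500.0):
    """Schematic establishment test against a PFT's thermal sum requirement.

    The 500 degree-day default is illustrative only.
    """
    return thawing_degree_days(daily_temps) >= tdd_requirement
```

Unlike a GDD sum with a higher base temperature, the 0 °C basis credits every above-freezing day, which better reflects the thaw-driven constraints on establishment at these latitudes.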
Precipitation was randomly distributed over each month using the number of wet days from the CRUNCEP v.7 dataset (Wei et al., 2014). We assumed that local differences in precipitation can be neglected for our study domain, and thus the raw station data were used as input to LPJ-GUESS for the historic period. Nitrogen deposition data for the historic and future simulations were extracted from the grid cell including Abisko in the dataset of Lamarque et al. (2013). Nitrogen deposition was assumed to be distributed equally over the study domain. Soil texture was extracted from the WISE soil dataset (Batjes, 2005) for the Abisko area and assumed to be uniform across the study domain. Callaghan et al. (2013) report that the soils around the Torneträsk area are mainly glaciofluvial till and sediments. Clay and silt fractions vary between 20 % and 50 % (Josefsson, 1990), with higher fractions of clay and silt in the birch forest and a larger sand content in the heaths. In the absence of spatial information on particle size distributions, the soil was prescribed as a sandy loam with 43 % sand and approximately equal fractions of silt and clay.

Future simulations

Future estimates of vegetation change were simulated for one low-emission (RCP2.6) and one high-emission (RCP8.5) scenario. For each scenario, climate change projections from three global climate models (GCMs) from the CMIP5 GCM ensemble (Taylor et al., 2012) were used to investigate climate effects on vegetation dynamics. The chosen GCMs (MIROC-ESM-CHEM, HadGEM2-AO, GFDL-ESM2M) were selected to represent the largest spread, i.e. the highest, the lowest and near average, in modelled mean annual temperature for the reference period 2071–2100. Only models with available simulations for both RCP2.6 and RCP8.5 were used in the selection. Monthly climate data for input to LPJ-GUESS (temperature, total precipitation and short-wave radiation) were extracted for the grid cell including Abisko for each GCM.
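The wet-day disaggregation of monthly precipitation described above can be sketched as below. This version splits the monthly total evenly across randomly chosen wet days, which is an assumption; the model's actual daily weighting may differ:

```python
import random

def distribute_precip(monthly_total, n_days, n_wet_days, seed=0):
    """Spread a monthly precipitation total over randomly chosen wet days.

    monthly_total: precipitation for the month (mm); n_days: days in the
    month; n_wet_days: wet-day count (e.g. from CRUNCEP). Returns a daily
    series; dry days receive zero, wet days an even share (schematic).
    """
    rng = random.Random(seed)
    wet = set(rng.sample(range(n_days), n_wet_days))
    per_day = monthly_total / n_wet_days
    return [per_day if d in wet else 0.0 for d in range(n_days)]
```

The daily series conserves the monthly total by construction, which is the essential property for forcing the soil hydrology.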
The number of wet days per month was assumed not to change in the future scenario simulations, so we used the 1971–2000 climatology for this period. The historic climate dataset by Yang et al. (2011) was extended into the projection period (2001–2100) using the delta change approach as follows. For each grid cell, monthly differences were calculated between the projection climate and the dataset by Yang et al. (2011) for the last 30-year reference period in our historic dataset. For temperature, the arithmetic difference was extracted, while for precipitation and incoming short-wave radiation, relative (i.e. geometric) differences between the two datasets were extracted. The resulting monthly anomalies were then either added (temperature) to the GCM outputs or used to multiply (precipitation, radiation) the GCM outputs from 2001–2100, for each of the climate scenarios used. Forcing data of atmospheric [CO2] for the two scenarios were obtained from the CMIP5 project.

Model experiments

To investigate the possible drivers of future vegetation change, we performed three model experiments. The model was forced with changes to one category of input (driver) variables (climate, [CO2], nitrogen deposition) at a time for a projection period between the years 2001–2100. A full list of simulations can be found in Table S3 (Supplement). A control scenario with no climate trend (and with [CO2] and nitrogen deposition held at their respective year 2000 values) was also created. We estimated the effect of the transient climate change, [CO2] or nitrogen deposition scenarios by subtracting model results for the last decade (2090–2100) in the no-trend scenario from those for the last decade (2090–2100) of the respective transient scenario.
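The delta change extension just described, additive anomalies for temperature and relative (multiplicative) anomalies for precipitation and radiation, can be sketched per grid cell as follows (a schematic; the study applies this per grid cell and scenario with 30-year reference climatologies):

```python
def delta_change(gcm_monthly, obs_ref_monthly, gcm_ref_monthly, additive=True):
    """Delta-change downscaling of one year of monthly GCM output.

    gcm_monthly: 12 projected monthly values; obs_ref_monthly and
    gcm_ref_monthly: 12-value reference-period climatologies of the observed
    dataset and the GCM. additive=True for temperature (arithmetic anomaly),
    additive=False for precipitation/radiation (relative anomaly).
    """
    out = []
    for m in range(12):
        if additive:                                        # temperature
            anomaly = obs_ref_monthly[m] - gcm_ref_monthly[m]
            out.append(gcm_monthly[m] + anomaly)
        else:                                               # precip, radiation
            ratio = obs_ref_monthly[m] / gcm_ref_monthly[m]
            out.append(gcm_monthly[m] * ratio)
    return out
```

Using relative anomalies for precipitation and radiation keeps the corrected series non-negative, which an additive correction would not guarantee.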
To estimate how sensitive the model was to different factors, we performed a Spearman rank correlation for each PFT in 50 m elevational bands over the forest-tundra ecotone. We chose Spearman rank over Pearson since not all correlations were linear.

Climate change

To estimate the sensitivity to climate change, the same scenarios as those used for the future simulations (Sect. 2.3.2) were used while [CO2] and nitrogen deposition were held constant at their year 2000 values. Climate anomalies without any trend were created by randomly sampling full years in the last decade (1990–2000) from the climate station data. The climate dataset was then extended using these data. The resulting climate scenario had the same interannual variability as the historic dataset and no trend for the years 2001–2100. This scenario was used to investigate any lag effects on vegetation change. This scenario also provided climate input for the nitrogen and [CO2] sensitivity tests described below.

Nitrogen deposition

Scenarios of nitrogen deposition were obtained from the Lamarque et al. (2013) dataset. Since this dataset assumes a decrease in nitrogen deposition after the year 2000, we also added four scenarios in which nitrogen deposition was increased by factors of 2, 5, 7.5 and 10 relative to the year 2000. These four scenarios were created to isolate the single-factor effect of nitrogen increase without any climate or [CO2] change. The resulting additional loads of nitrogen after the year 2000 in these scenarios were 0.38, 0.97, 1.46 and 1.9 gN m−2 yr−1, respectively.

Model evaluation

We evaluated the model against available observations in the Abisko area. Measurements of ecosystem productivity from an eddy covariance (EC) tower were obtained for 6 non-consecutive years. Biomass and biomass change estimates were used to evaluate simulated biomass in the birch forest.
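The Spearman rank correlation used above (chosen because not all relationships were linear) is simply the Pearson correlation of rank-transformed values; it therefore captures any monotonic, not just linear, association. A self-contained sketch:

```python
def rank(values):
    """Average 1-based ranks; tied values share the mean of their ranks."""
    order = sorted(range(len(values)), key=lambda i: values[i])
    ranks = [0.0] * len(values)
    i = 0
    while i < len(order):
        j = i
        while j + 1 < len(order) and values[order[j + 1]] == values[order[i]]:
            j += 1                       # extend over a run of tied values
        mean_rank = (i + j) / 2 + 1
        for k in range(i, j + 1):
            ranks[order[k]] = mean_rank
        i = j + 1
    return ranks

def spearman(x, y):
    """Spearman rank correlation: Pearson correlation of the ranks."""
    rx, ry = rank(x), rank(y)
    n = len(x)
    mx, my = sum(rx) / n, sum(ry) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    sx = sum((a - mx) ** 2 for a in rx) ** 0.5
    sy = sum((b - my) ** 2 for b in ry) ** 0.5
    return cov / (sx * sy)
```

A strictly monotonic but nonlinear driver-response relation (e.g. a quadratic) yields a Spearman coefficient of exactly 1, whereas Pearson would report less than 1.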
Surveys of historic vegetation change above the treeline were obtained from Rundqvist et al. (2011). Leaf area index (LAI) and evapotranspiration estimates were obtained from Ovhed and Holmgren (1996). The studies by Hedenås et al. (2011) and Rundqvist et al. (2011) were used to evaluate model outputs around the observation year 2010. To compare biomass and vegetation change with these studies, we extracted 5-year multi-model averages for 2008–2012 from our projection simulations (Sect. 2.3.2). These means were used to calculate modelled change in biomass and vegetation in our historic dataset and to compare the modelled output to the observational data. To determine the local rates of treeline migration, several transects were defined within our study domain (Fig. S1.2 in the Supplement). These transects were chosen to represent a large spread in heterogeneity with regard to slope and aspect in the landscape. A subsample of the selected transects were placed close to the transects used by Van Bogaert et al. (2011) and used to evaluate model performance. Results from the model evaluation are summarised in Tables 1 and S1.1.

Determination of domains in the forest-tundra ecotone

In our analysis we distinguished between forest, treeline and shrub tundra, defined as follows. Any grid cell containing 30 % fractional projective cover or more of trees was classified as forest; this cover threshold was used to determine the birch forest boundary. The treeline was then determined by first selecting grid cells classified as forest. Any grid cell with four or more neighbours fulfilling the 30 % cover criterion was classified as belonging to the forest. The perimeter of the forest was then determined by sorting out grid cells with four or five neighbours classified as forest. Grid cells with fewer or more neighbours were regarded as tundra or forest, respectively. Grid cells below the treeline were classified as forest in the analysis, and grid cells above the treeline were classified as tundra.
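The neighbour-counting rules above translate into a short grid procedure. The sketch below uses an eight-neighbour (Moore) window, which is an assumption since the text does not specify the neighbourhood, and the function name is illustrative:

```python
def classify_forest(tree_cover, threshold=0.30):
    """Classify grid cells into forest and treeline from fractional tree cover.

    Cells with >= 30 % cover are candidate forest; a candidate is kept as
    forest if at least four of its eight neighbours also pass the threshold.
    Forest cells with four or five forest neighbours form the perimeter,
    i.e. the treeline (schematic version of the rules in the text).
    """
    rows, cols = len(tree_cover), len(tree_cover[0])
    dense = [[tree_cover[r][c] >= threshold for c in range(cols)]
             for r in range(rows)]

    def neighbours(mask, r, c):
        # Count True cells in the 8-cell Moore neighbourhood, excluding (r, c).
        return sum(mask[rr][cc]
                   for rr in range(max(0, r - 1), min(rows, r + 2))
                   for cc in range(max(0, c - 1), min(cols, c + 2))
                   if (rr, cc) != (r, c))

    forest = [[dense[r][c] and neighbours(dense, r, c) >= 4
               for c in range(cols)] for r in range(rows)]
    treeline = [[forest[r][c] and neighbours(forest, r, c) in (4, 5)
                 for c in range(cols)] for r in range(rows)]
    return forest, treeline
```

Requiring several forest neighbours suppresses isolated high-cover cells, so the extracted treeline traces the edge of the contiguous birch forest rather than scattered tree groups.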
Presentation of results

We present seasonal values for soil and air temperature. These are averages of the 3-month periods DJF, MAM, JJA and SON, referred to as winter, spring, summer and autumn below. For the RCPs, average values are presented with the ranges of the different scenarios within each RCP given in parentheses. We report values of both gross primary production (GPP), which we benchmark the model against, and net primary productivity (NPP), as the latter is of relevance for the carbon limitation discussion.

Historic vegetation shifts

The dominant PFT in the forest and at the treeline was IBS, which constituted 90 % of the total LAI (Figs. 2a-3a). The only other tree PFT present in the forest was BINE, which comprised a minor fraction of the total LAI. However, in the lower (warmer) parts of the landscape, BINE comprised up to 20 % of the total LAI in a few grid cells. The forest understorey was mixed but consisted mostly of tall and low evergreen shrubs and grasses. Shrub tundra vegetation above the treeline was more mixed, but LSE dominated with 51 % of the total LAI. Grasses comprised an additional 25 % of the total LAI, and IBS was present close to the treeline, where it comprised up to 5 % of the LAI in some grid cells. NPP for IBS in the forest increased from 96 to 180 gC m−2 yr−1 over our historic period. Corresponding values at the treeline did not increase but saturated at around 60 gC m−2 yr−1. Above the treeline, IBS showed very low NPP values (<15 gC m−2 yr−1), while NPP for the dominant shrub (LSE) doubled from 20 gC m−2 yr−1 at the treeline to 40 gC m−2 yr−1 in the tundra. Between the start and end of our historic simulation the treeline shifted upwards by 67 elevational metres on average, corresponding to a rate of 0.83 m yr−1.
However, during the 20th century both a period with more rapid warming (0.8 °C) and a faster tree migration rate (1.23 m yr−1) and a period with a cooling trend (−0.3 °C) and a stationary treeline occurred (Fig. 5). Between 1913 and 2000, the lower boundary of the treeline shifted upwards by 2 m, while the upper boundary shifted upwards by 123 m. These shifts corresponded to rates of 0.03 and 1.54 m yr−1, respectively. Similar rates were also found in the transects established to test how the model simulates the heterogeneity of treeline migration (Fig. S1.2 and Table S1.1 in the Supplement), where the average migration rate was 0.87 (0.54-1.25) m yr−1. During the 1913-2000 period, annual average air temperature at the simulated treeline warmed from −2.0 to −0.8 °C. Warming occurred throughout the year but was strongest in winter and spring, when temperatures increased by 3.0 and 1.4 °C, respectively. In contrast, both summer and autumn temperatures warmed by only 0.6 °C. The resulting winter, spring, summer and autumn air temperatures at the treeline in 1990-2000 were −8.7, 3.3, 8.8 and −0.1 °C, respectively. The warming was also reflected in an annual average soil temperature increase of similar magnitude, by 2.1 °C from −0.8 to 1.3 °C. Winter soil temperature increased by 3.7 °C, from −5.6 °C in 1913 to −1.9 °C in 2000. The warmer soil temperatures resulted in a 4.8 % simulated increase in the annual net nitrogen mineralisation rate in the treeline soils over the same period. In absolute numbers, nitrogen mineralisation increased from 1.29 to 1.36 gN m−2. Combined with an increased nitrogen deposition load, from 0.06 gN m−2 in 1913 to 0.20 gN m−2 in 2000, and an increased nitrogen fixation, from 0.13 to 0.18 gN m−2, plant-available nitrogen was simulated to increase by 15.9 %. Simulated permafrost with an active layer thickness of <1.5 m was present at elevations down to 560 m a.s.l. in a few grid cells but was always well above the treeline.
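The seasonal aggregation used for the temperatures reported above (3-month DJF, MAM, JJA and SON means) can be sketched as follows; this is a minimal within-year grouping that, for simplicity, ignores the fact that DJF spans the year boundary.

```python
# Map calendar months to the seasons used in the text.
SEASONS = {
    "winter": (12, 1, 2),   # DJF
    "spring": (3, 4, 5),    # MAM
    "summer": (6, 7, 8),    # JJA
    "autumn": (9, 10, 11),  # SON
}

def seasonal_means(monthly_temps):
    """monthly_temps: dict mapping month (1-12) -> mean temperature.
    Returns the four seasonal averages as a dict keyed by season."""
    return {
        season: sum(monthly_temps[m] for m in months) / 3.0
        for season, months in SEASONS.items()
    }

# toy usage: monthly "temperature" equal to the month number
means = seasonal_means({m: float(m) for m in range(1, 13)})
```

A production version would average December of the preceding year into each winter, which matters for trend analyses but not for this illustration.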
Shallower permafrost (active layer thickness <1 m) was only present in grid cells at elevations of 940 m a.s.l. and above.

Model experiments

A slight treeline advance of approximately 11 elevational metres was seen at the end of the projection period (2090-2100) in the control simulation. As all drivers were held constant or trend-free in this simulation, this reveals a lag from the historical period, likely resulting from trees that established during the historic period and matured during the projection period.

Climate change

Treeline advance occurred in all climate change scenarios, although the rate was not uniform throughout the projection period (Fig. 5). When driven by climate change alone, migration rates were faster compared to simulations where nitrogen deposition and [CO2] were also changed (Sect. 3.2). Treeline advance in climate-change-only scenarios ranged between 60 elevational metres (HadGEM2-AO-RCP2.6) and 245 elevational metres (MIROC-ESM-CHEM-RCP8.5) over the 100-year projection period. Tree productivity was strongly enhanced by the air temperature increase over the whole study domain (Fig. 6a). Weaker correlations between productivity and other climate factors such as precipitation and net short-wave radiation were also seen (Figs. S1.5 and S1.6 in the Supplement). Annual precipitation increased in all climate change scenarios (Table 2). In the lower parts of the valley, the increased precipitation did not result in increased soil moisture during summer, as losses through temperature-driven evapotranspiration exceeded the additional input. Spring and autumn soil moisture increased in the forest, mainly because of earlier snowmelt and thawing ground in spring and relatively weaker evapotranspiration in autumn.
Above the treeline, soil moisture increased, as the lower temperatures and LAI did not drive evapotranspiration as strongly as in the lower parts of the valley, and the increased moisture input thus outweighed the slightly increased evapotranspiration. Increased tree productivity in the forest resulted in an increased LAI of 0.3-1.5 m2 m−2 (18 %-90 %). BNE appeared in the forest and dominated in a few grid cells. In most places BNE constituted approximately 5 % of the total LAI. Tall shrub (HSE and HSS) productivity and LAI increased in the forest. This increase was negatively correlated with temperature; i.e. the increase was highest in the coolest climate change scenarios. Above the treeline, tall shrubs showed the opposite pattern, increasing by 8 %-50 % to finally constitute 10 %-36 % of the total LAI. Higher soil moisture content in spring and autumn favoured trees in the whole ecotone, while the forest understorey suffered from the earlier onset of the growing season, with subsequent leaf flushing and shading from taller competitors. Although soil moisture in summer decreased in the forest, the LAI and biomass carbon of summergreen shrubs were positively correlated with soil moisture. Higher soil moisture during summers in the wetter GCM scenarios promoted summergreen shrubs over evergreen shrubs in the whole ecotone. As an example, vegetation composition on the tundra above the treeline differed between GFDL-ESM2M and MIROC-ESM-CHEM under RCP8.5, where the warmer GCM showed a 52 % biomass carbon increase in the tall evergreen shrub, HSE. The intermediate warming scenario (GFDL-ESM2M-RCP8.5) showed a more mixed increase in biomass carbon in HSE (20 %) and HSS (24 %). While annual temperature differed by 3.9 °C between the two scenarios, average annual precipitation only differed by 6.2 mm, yielding much (26 %) lower JJA soil moisture in the warmest scenario (MIROC-ESM-CHEM-RCP8.5) compared to the coldest (GFDL-ESM2M-RCP8.5).
Relatively higher soil moisture and subsequently lower water stress allowed taller plants to establish. Radiation correlated positively with the growth of tree PFTs, with spring and autumn radiation found to be especially important for height and biomass increase (Fig. S1.7 in the Supplement). Increased radiation provided a competitive advantage for taller trees and shrubs to shade out lower shrubs and grasses in the forest. Shrubs above the treeline were also favoured by increased light. Net nitrogen mineralisation at the treeline showed great variation between different climate change scenarios, ranging from a 4 % decrease in GFDL-ESM2M-RCP8.5 to a 79 % increase in the strongest warming scenario (MIROC-ESM-CHEM-RCP8.5). In absolute terms, the latter increase corresponds to an increase from 1.35 gN m−2 yr−1 at the end of the historic period (1990-2000) to 2.43 gN m−2 yr−1 at the end of the century (2090-2100). This is comparable to the nitrogen load in the 7.5× increased nitrogen deposition scenario. Interestingly, despite very different plant-available nitrogen and warming, the two scenarios displayed a similar resulting (2090-2100) treeline elevation (Fig. 5a). Permafrost with an active layer thickness of <1.5 m disappeared completely from our study domain in all scenarios except the coldest (GFDL-ESM2M-RCP2.6), where it occurred in a few grid cells at elevations of approximately 600 m a.s.l. However, the shallow permafrost (<1 m) had also disappeared in this scenario.

CO2

[CO2] increase enhanced productivity in most PFTs (Fig. 6b). The total GPP averaged over the forest increased by 2 %-10 % depending on the [CO2] scenario, with the largest increase in RCP8.5 and the smallest in RCP2.6. The CO2 fertilisation effect was not uniform within the landscape but stronger towards the forest edge, with increases from 2 % to 18 % from the weakest to the strongest [CO2] scenario.
NPP for IBS increased uniformly over the forest by 2.5 %-8.4 % but decreased above the treeline. Thus, the productivity of the two dominant PFTs (IBS in the forest and LSE above the treeline) was reinforced in their respective domains. The increased productivity translated into a 1 %-5 % increase in the tree LAI in the forest, while the low shrub LAI increased by 24 %-77 %. Likewise, the increase in the leaf area of low shrubs was largest on the tundra under elevated [CO2], with a 15 %-40 % LAI increase in the low and high [CO2] scenarios, respectively. Above the treeline, the productivity of grasses and low shrubs responded strongly to the CO2 fertilisation, with a 350 % increase in GPP for grasses and a 150 % increase for low shrubs. The additional litter fall produced by the increased leaf mass did not lead to an increase in N mineralisation. However, immobilisation of nitrogen through increased uptake by microbes increased by 2 %-6 % between the lowest and highest [CO2] scenarios, yielding a net reduction in plant-available nitrogen. Despite the productivity increases, the treeline remained stationary in all [CO2] scenarios (Fig. 5b).

Nitrogen deposition

Productivity of woody PFTs was in general positively correlated with nitrogen in the different nitrogen deposition scenarios. In contrast, productivity of grasses was negatively correlated (Fig. 6c), as they suffered in competition for light with the trees. Annual GPP of trees (especially IBS) was positively correlated throughout the whole ecotone, but the increase in GPP was larger towards the forest boundaries than in the lower parts of the forest when nitrogen was added. Nitrogen-stressed plants in the model allocate more carbon to their roots at the expense of foliar cover when they suffer a productivity reduction. In the two scenarios with decreasing nitrogen deposition (RCP2.6, RCP8.5) there was an overall reduction in the LAI in both the tundra and the forest of 6 %-10 %.
The largest reduction was seen in tree PFTs, which have the largest biomass and consequently the highest nitrogen demand, followed by tall shrubs. Low shrubs and grasses did, however, increase their LAI in the forest when nitrogen input decreased, as a result of less light competition from trees. Above the treeline, the LAI of low shrubs and grass PFTs also decreased with less nitrogen input. In all scenarios with increasing nitrogen deposition there was an advance of the treeline on the order of 10-85 elevational metres, with the smallest input (2× nitrogen deposition) giving the smallest change in treeline elevation and the largest input (10× nitrogen deposition) the largest (Fig. 5c). In the scenarios where nitrogen input was constant or decreasing, the treeline remained stationary.

Discussion

In our simulations, rates of treeline advance were faster under climate-change-only scenarios than when all drivers were changing. This revealed nitrogen as a modulating environmental variable, as nitrogen deposition was prescribed to decrease in both the RCP2.6 and the RCP8.5 scenarios. During our historic simulations, the treeline correlated well with a soil temperature isotherm close to the globally observed 6-7 °C isotherm. However, in our projection period the correlation between the treeline position and the isotherm weakened, revealing a fading, or potential lag, of the treeline-climate equilibrium that became stronger with increased warming. Future rates of treeline advance were thus constrained by factors other than temperature in our simulations. In contrast to previous modelling studies of treeline advance (e.g. Paulsen and Körner, 2014), we include not only the temperature dependence of vegetation change but also the full nitrogen cycle and CO2 fertilisation effects. Scenarios with increased nitrogen deposition induced treeline advance, further illustrating the modulating role played by nitrogen dynamics in our results.
Rising [CO2] induced higher productivity in our simulations, but these productivity enhancements alone did not lead to significant treeline advance. Furthermore, although NPP for IBS was lower at the treeline than in the forest, it was never close to zero. Such a pattern, which was seen above the treeline, indicates stagnant growth in which the carbon costs of maintaining a larger biomass cancel out any productivity increase. However, enhancement of productivity in combination with an allocation shift from roots to shoots, enabled by a greater nitrogen uptake, favoured taller plants over their shorter neighbours in the competition for light within the model. For treeline advance to occur, trees need to invade the space already occupied by other vegetation. As the model assumes asymmetric competition for nutrients, newly established seedlings have a disadvantage compared to incumbent vegetation, further slowing down the modelled rate of treeline advance. Field experiments with nitrogen fertilisation have shown that mountain birches at the treeline display enhanced growth after nitrogen addition (Sveinbjörnsson et al., 1992). Furthermore, fertilisation with nitrogen improved birch seedling survival above the treeline (Grau et al., 2012) and is thus likely important for the establishment and growth of new individuals to form a new treeline. Historically, treeline positions show a strong correlation with the 6-7 °C isotherm (Körner and Paulsen, 2004). These records are, however, a snapshot in time and are not necessarily a strong predictor of the future treeline, with other factors (as with nitrogen in our results) potentially breaking the link to temperature. As pointed out by others (Hofgaard et al., 2019; Van Bogaert et al., 2011), considering climate change or temperature alone in projections of treeline advance could potentially result in overestimation of vegetation change.
Our results clearly point to nitrogen cycling as a modulating factor when predicting future Arctic vegetation shifts. In our simulations, the treeline advanced at similar rates to those experienced during the historic period, resulting in a displacement of 45-195 elevational metres over the 100-year projection period. Some estimates based on lake sediments in the Torneträsk region from the Holocene thermal maximum, when summer temperatures may have been about 2.5 °C warmer than present (Kullman and Kjällgren, 2006), indicate potential treeline elevations approximately 500 m above the present level in the warmer climate (Kullman, 2010). Macrofossil records from lakes in the area indicate that birch was present 300-400 m above the current treeline (Barnekow, 1999). Furthermore, pine might have occurred approximately 100-150 m above its present distribution (Berglund et al., 1996). IBS emerged as the dominant forest and treeline PFT in both our historic and our projection simulations, but with larger fractions of evergreen trees (BNE and BINE) at the end of the century (2090-2100). Mountain birch, represented by IBS in our model, has historically dominated treelines in the study area, even during warmer periods of the Holocene (Berglund et al., 1996), but with larger populations of pine (BINE) and spruce (BNE) than seen at present. Both pine and spruce have been found in high-elevation lake pollen sediments and can thus be assumed to have grown in higher parts of the ecotone during warmer periods (Kullman, 2010). Treeline advance for the historic period in our simulations is broadly consistent with observational studies from the Abisko region (Van Bogaert et al., 2011). Temperature was a strong driver of tree productivity and growth in the whole ecotone in our simulations. For the historic period, higher rates of treeline advance followed periods of stronger warming.
However, other factors such as precipitation indirectly influenced treeline advance through changes in vegetation composition and nitrogen mineralisation. This is illustrated by the comparison of GFDL-ESM2M and MIROC-ESM-CHEM under RCP8.5, where the intermediate warming but wetter scenario had a very similar resulting treeline elevation to that of the warmer scenario. While the simulated treeline position was too low compared to the treeline elevation reported by Callaghan et al. (2013), the correlation with the globally observed 6-7 °C ground temperature isotherm (Körner and Paulsen, 2004) throughout the historic period gives confidence in the model results. IBS at the treeline had a positive carbon balance (NPP) and was thus not directly limited by its productivity in our simulations. This is consistent with observations of ample carbon storage in treeline trees globally (Hoch and Körner, 2012). The modelled treeline is thus not set by productivity directly but rather by competition, as non-tree PFTs become more productive above the treeline. Whether the treeline is set by productivity constraints or by cold temperature limits on wood formation and meristematic activity has been a subject of debate in the literature (Körner, 2003, 2015; Körner et al., 2016; Fatichi et al., 2019; Pugh et al., 2016). Dynamic vegetation models (DVMs) assume NPP to be constraining for growth. On the other hand, trees close to the treeline have been shown to have ample stored carbon (Hoch and Körner, 2012). Furthermore, enhancement of photosynthesis through added CO2 does not always result in increased tree growth close to the treeline (Dawes et al., 2013), and wood formation is slow below around 5 °C, leading to a hypothesis of reversed control of plant productivity and treeline position (Körner, 2015).
As has also been highlighted in this study, ecological interactions as a component in the control of treeline position have been the subject of attention in some recent modelling studies (see for example Scherrer et al., 2020). Such studies add an extra dimension to the discussion, as they not only consider plant physiology and hard limits to species distributions but also broadly accept ecological concepts such as realised versus fundamental niches. The model overestimated biomass carbon in the forest but captured historic rates of biomass increase. The overestimation was more severe closer to the forest boundaries, as the model showed a weaker negative correlation between biomass carbon and elevation than observed by Hedenås et al. (2011). The mean annual biomass increase in the same dataset is, although highly variable, on average 2.5 gC m−2 yr−1 between 1997 and 2010. As the simulated GPP and LAI were within the range of observations in the area (Rundqvist et al., 2011; Ovhed and Holmgren, 1996; Olsson et al., 2017), this indicates a coupling between photosynthesis and growth in the model that is stronger than that observed. Terrestrial biosphere models often overestimate biomass in high latitudes (Pugh et al., 2016; Leuzinger et al., 2013) and potentially lack processes that likely limit growth close to low temperature boundaries. Examples of such processes are the carbon costs of nitrogen acquisition (Shi et al., 2016), including costs for mycorrhizal interactions, and temperature limits on wood formation (Friend et al., 2019). However, data on carbon allocation and its temperature dependence are scarce (Fatichi et al., 2019). Additionally, the overestimation in our study can be partly attributed to a lack of herbivory in the model.
Outbreaks of the moth Epirrita autumnata are known to limit productivity and reduce biomass of mountain birch in the area in certain years; however, this would not fully explain the overestimation of biomass at the treeline in our simulations. Since growth and biomass increments in the model do not include a direct temperature dependence or any decoupling of growth and productivity, we do not regard these mechanisms as necessary to accurately predict treeline dynamics. However, they might be important to accurately predict forest biomass at the treeline. To examine variability in the simulated treeline dynamics across the study area, we established a number of transects close to observation points in the landscape. Average treeline advance in the transects showed a somewhat faster and more homogeneous migration than reported (Van Bogaert et al., 2011). The model does not include historic anthropogenic disturbances, topographic barriers or insect herbivory, all of which have been invoked to explain the heterogeneity of treeline advance rates and placement in the landscape (Van Bogaert et al., 2011; Emanuelsson, 1987). Furthermore, our model does not include any wind-related processes such as wind-mediated snow transport or compaction. Thus, our simulations result in a homogeneous snowpack during the winter months, with no differentiation in sheltering or frost damage that may result from different snow and ice properties. Sheltered locations in the landscape are known to promote the survival of tree saplings (Sundqvist et al., 2008). For nitrogen cycling this may also mean that suggested snow-shrub feedbacks (Sturm et al., 2001; Sturm, 2005) are not possible to capture with the current version of our model. While overall rates of treeline migration were captured, local variations arising from physical barriers such as steep slopes, stony patches or anthropogenic disturbances were not possible to capture, as these processes are not implemented in the model.
High-resolution, local observations of vertically resolved soil texture and soil organic matter content (see, e.g., Hengl et al., 2017, for an example compiled using machine learning) have the potential to improve the spatial variability in modelled soil temperatures and nutrient cycling in our study domain. A longer growing season favoured tree PFTs in the whole ecotone, which escaped early-season desiccation due to milder winters and earlier spring thaw. Permafrost was only present at the highest elevations during the historic simulation but had disappeared from the landscape by 2100 in all except the coolest scenario (GFDL-ESM2M-RCP2.6). The simulated permafrost was, however, always well above the treeline and did not have a significant impact on treeline advance. While some aspects of ground freezing are accounted for in the model, vertical and horizontal soil movement caused by frost, as well as the amelioration of such effects in a warmer future climate, is not. Such processes could affect survival and competition among the plant functional types, especially in the seedling stage, when plants are most vulnerable to mechanical disturbance (Holtmeier and Broll, 2007). These effects could be relevant to treeline dynamics at the high grid resolution of our study but are not included in our model. Higher summer soil moisture in the wetter climate scenarios shifted the ratio of summergreen to evergreen shrubs in favour of the summergreen shrubs, in line with observations (Elmendorf et al., 2012). Conversely, drier scenarios yielded an increased abundance of evergreen shrubs, similar to what has been observed in drier parts of the tundra heath in the Abisko region (Scharn et al., 2021). Within RCP8.5, the warmest (MIROC-ESM-CHEM-RCP8.5) and coldest (GFDL-ESM2M-RCP8.5) scenarios gave rise to very similar treeline positions at the end of the projection period (2090-2100).
The cooler scenario led to both higher soil moisture and a greater abundance of summergreen shrubs. Higher soil moisture promoted carbon allocation to the canopy and thus favoured the taller IBS tree PFT over tall shrubs (HSS). Increased shrub abundance and nutrient cycling have been shown to have potentially non-linear effects on shrub growth and ecosystem carbon cycling (Buckeridge et al., 2009; Hicks et al., 2019), and some observations indicate that changes in the ratio of summergreen to evergreen shrubs or an increased abundance of trees might impact soil carbon loss (Parker et al., 2018; Clemmensen et al., 2021). Thus, our results indicate that any future change in soil moisture conditions could play an important role in the competitive balance between shrubs and trees and for carbon balance. LPJ-GUESS assumes the presence of seeds in all grid cells, and PFTs may establish when the 20-year (running) average climate is within PFT-specific bioclimatic limits for establishment. This assumption may overlook potential constraints on plant migration rates such as seed dispersal and reproduction. On larger spatial scales, it is likely that lags in range shifts would arise from these additional constraints (Rees et al., 2020; Brown et al., 2018). Models that account for dispersal limitations generally predict slower latitudinal tree migration than models driven solely by climate (Epstein et al., 2007). However, on smaller spatial scales, the same models predict competitive interactions to be more dominant in determining species migration rates (Scherrer et al., 2020), and this is included in our model. In a seed transplant study from the Swiss Alps, seed viability could not be shown to decline towards the range limits of eight European broadleaved tree species (Kollas et al., 2012; Körner et al., 2016). Similarly, gene flow above the treeline could not be shown to be limited to near-treeline trees in the Abisko region (Truong et al., 2007).
Furthermore, tree saplings have been reported to be common up to 100 m above the present treeline (Sundqvist et al., 2008; Hofgaard et al., 2009). As environmental conditions improve, these individuals may form the new treeline. Above the treeline, low evergreen shrubs (LSE) dominated the vegetation in both our historic and our projection simulations. The productivity of shrubs and grasses was greatly enhanced by CO2 fertilisation in our [CO2] model experiment, and a large proportion of the tundra productivity increases in our projection simulations could be attributed to rising [CO2]. Physiological effects of elevated CO2 on arctic and alpine tundra productivity and growth are understudied. Free-air CO2 enrichment (FACE) experiments are generally considered the best method for quantifying long-term ecosystem effects of elevated CO2 but are extremely costly, and very few have been deployed in near-treeline locations. A majority of FACE experiments have been implemented in temperate forests and grasslands, yielding limited evidence of relevance to boreal and tundra ecosystems (Hickler et al., 2008). One FACE experiment situated in a forest-tundra ecotone in the Swiss Alps showed differing responses to elevated CO2 among shrub species: Vaccinium myrtillus showed 11 % increased shoot growth, while Empetrum nigrum was unresponsive, and the response of V. gaultherioides depended on the forest type in which it was growing (Dawes et al., 2013). Our model results indicated that shrubs are carbon limited, and shrub productivity and growth are consequently responsive to CO2 fertilisation.

Conclusions

In this study we examined treeline dynamics in the sub-arctic north of Sweden using an individual-based dynamic vegetation model at a high spatial resolution.
The model identified nitrogen cycling and availability as important modulating factors for treeline advance in a warming future climate. Internal cycling of nitrogen in soils provides the main source of this usually limiting nutrient for Arctic plants (Chapin, 1983). The model performed well regarding rates of shrub increase and treeline advance but overestimated biomass carbon in the treeline forest. Treeline migration rates were realistically simulated even though the model did not represent temperature limitations on tree growth. While a decoupling between productivity and growth in the model could potentially have improved estimates of biomass carbon, it was not needed to correctly predict treeline elevation. Instead, our results point to the importance of indirect effects of rising temperatures on tree range shifts, especially with regard to nutrient cycling and competition between trees and shrubs. Furthermore, soil moisture strongly influenced vegetation composition within the model, with implications for treeline advance. Improving how models represent nutrient uptake and cycling and incorporating empirical understanding of the processes that determine tree and shrub growth will be key to making better predictions of Arctic vegetation change and carbon and nitrogen cycling. Models are a valuable aid in judging the relevance of these processes for sub-arctic treeline ecosystems.

Data availability. The climate dataset by Yang et al. (2011) was generously shared with the authors but is not publicly accessible. The data can be accessed upon inquiry to the authors. CMIP5 climate data were downloaded from the ESGF data repository (https://esg-dn1.nsc.liu.se/projects/esgf-liu/, ESGF, 2021) and can be accessed through the ESGF service. The historic climate data for the Abisko Scientific Research Station were generously shared with the authors. Access to these data is provided by the Abisko Scientific Research Station.
The soil data used in our study can be accessed through the ISRIC Data Hub (http://data.isric.org/geonetwork/srv/eng/catalog.search#/metadata/dc7b283a-8f19-45e1-aaed-e9bd515119bc, Batjes et al., 2005). The dataset with historic and projected nitrogen deposition is not publicly available but was generously shared with the authors.

Author contributions. AG designed the experiments with contributions from PAM and SO. AG also performed necessary model code developments and carried out model simulations and data analysis. RGB and BS contributed scientific advice and input throughout the study and contributed to the writing. AG prepared the manuscript with contributions from all co-authors.

Competing interests. The contact author has declared that neither they nor their co-authors have any competing interests.

Disclaimer. Publisher's note: Copernicus Publications remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Review statement. This paper was edited by Ben Bond-Lamberty and reviewed by Jed Kaplan and Christian Körner.
Statistical Mechanics of Non-stretching Elastica in Three Dimensional Space

Recently I proposed a new calculation scheme for the partition function of an immersed object using the path integral method and the theory of solitons (to appear in J. Phys. A). I applied the scheme to the problem of the elastica in two-dimensional space and the Willmore surface in three-dimensional space. In this article, I will apply the scheme to the elastica in three-dimensional space as a more physical model in polymer science. The orbit space of the nonlinear Schrödinger and complex modified Korteweg-de Vries equations can then be regarded as the functional space of the partition function. By investigating the partition function, I give a conjecture on the relation between these soliton equations.

§1. Introduction

The elastica problem in two-dimensional space R^2 has a long history [1,2]. It is known that, by observing the shape of a thin elastic beam, James Bernoulli named the shape elastica. This might be regarded as the birth of the elastica problem and a germination of mathematical physics, including elliptic function theory, mode analysis, nonlinear science, elliptic differential equation theory, algebraic analysis and so on. The elastica in R^2 [1,2] is defined as a curve with the Bernoulli-Euler functional, E = ∫ ds k^2, (1-1) where k is its curvature. Recently I presented a new calculation scheme for the partition function of non-stretching elasticas in R^2 under the condition preserving local length [3]. The partition function is formally defined as Z = ∫ DX exp(−βE), (1-2) where DX is the Feynman measure for the affine vector X of a point of the elastica and β is the inverse temperature. Goldstein and Petrich discovered that the virtual motion of a non-stretching curve obeys the modified Korteweg-de Vries (MKdV) equation, ∂_t k = ∂_s^3 k + (3/2) k^2 ∂_s k, (1-3) and its hierarchy [5,6]. Using the Goldstein-Petrich scheme, I found that the functional space of the partition function (1-2) is completely represented by the MKdV equation (1-3).
In other words, the MKdV flows conserve the energy functional (1-1). The functional space (1-2) is classified by the solutions of the MKdV equation (1-3). After that, I applied this method to the Willmore surface in three-dimensional space R^3 [4]. Instead of the MKdV equation, there appears the modified Novikov-Veselov equation, which classifies the functional space of the partition function. In this article, I will investigate the partition function of an elastica in R^3 with the energy functional E = ∫ ds |κ|^2, (1-4) where κ is the complex curvature of the elastica in R^3. I will also require that the elastica does not stretch. Then the partition function of an elastica in R^3 with the energy (1-4) can also be evaluated. Due to the non-stretching condition, instead of the Goldstein-Petrich scheme of the MKdV hierarchy [3,5,6], the Langer-Perline scheme of the nonlinear Schrödinger (NLS) hierarchy and the complex MKdV (CMKdV) hierarchy appears in the calculation of the partition function [7,8]. Whereas the NLS equation is well known as an integrable equation and has been investigated thoroughly, the properties of the CMKdV equation are not sufficiently studied. According to the result of Mohammad and Can [10], a different version of the CMKdV equation does not pass the Painlevé test [10]. In this article, I will also discuss the properties of the CMKdV equation and the relation between the CMKdV and NLS equations. On the other hand, the study of elastic chain models of large polymers is currently active [11]. According to a recent review of large polymers [11], the statistical mechanics of a polymer model is closely connected with mathematical science. Due to the complexity, investigation of its properties is not simple in general. However, it sometimes can be performed exactly owing to deep symmetry [11]. In fact, an exact partition function of an elastic chain with the energy functional (1-4) was obtained by Saitô et al. using the path integral [12].
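As a concrete numerical illustration of the claim that the MKdV flow conserves the Bernoulli-Euler energy ∫ ds k², here is a minimal pseudo-spectral sketch (not from the paper; the grid size, time step, and initial curvature profile are arbitrary choices made for the demonstration):

```python
import numpy as np

# Pseudo-spectral integration of the MKdV equation (1-3) on a periodic domain:
#   dk/dt = d^3 k/ds^3 + (3/2) k^2 dk/ds
N = 64
s = np.linspace(0.0, 2 * np.pi, N, endpoint=False)
wavenum = np.fft.fftfreq(N, d=2 * np.pi / N) * 2 * np.pi  # integer wavenumbers

def ds_deriv(f, order=1):
    """Spectral derivative of a real periodic function."""
    return np.real(np.fft.ifft((1j * wavenum) ** order * np.fft.fft(f)))

def mkdv_rhs(k):
    return ds_deriv(k, 3) + 1.5 * k**2 * ds_deriv(k, 1)

def rk4_step(k, dt):
    k1 = mkdv_rhs(k)
    k2 = mkdv_rhs(k + 0.5 * dt * k1)
    k3 = mkdv_rhs(k + 0.5 * dt * k2)
    k4 = mkdv_rhs(k + dt * k3)
    return k + dt / 6.0 * (k1 + 2 * k2 + 2 * k3 + k4)

k = np.sin(s)                         # smooth initial curvature profile
E0 = np.sum(k**2) * (2 * np.pi / N)   # discrete Bernoulli-Euler energy (1-1)
dt = 5e-5
for _ in range(200):
    k = rk4_step(k, dt)
E1 = np.sum(k**2) * (2 * np.pi / N)
print(abs(E1 - E0) / E0)  # relative energy drift stays tiny
```

The drift that remains is pure time-discretization error; the conservation itself follows from the fact that both terms of the MKdV right-hand side are total derivatives after multiplication by k.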
However, they paid no attention to the isometry condition in the thermal fluctuation of the path integration, even though they imposed the isometry condition after all computations; they summed over the whole configuration space without the isometry condition rather than over the restricted functional space. It should be noted that the constraint does not commute with such an evaluation of the partition function in general. Thus, as another limit, it is of interest to investigate the partition function with the energy (1-4) under the isometry condition. One of the purposes of this article is to investigate the partition function of a non-stretching space curve with the energy functional (1-4) as a polymer model. Furthermore, a space curve in R^3 also interests us from the viewpoint of string theory [15]. Grinevich and Schmidt investigated the closedness condition of a space curve obeying the NLS equation, because a kind of its complexification becomes a surface with a Kähler metric [14]. Thus the problem is associated with string theory [15]. (However, as I mentioned in ref. [3], it should be noted that the elastica absolutely differs from a string in string theory, even though it influences the theory [15].) Thus, although it is not the main purpose, another hidden purpose of this article is to investigate the moduli of non-stretching curves in R^3 by taking such a relation into consideration as a generalization of the surface problem [4,14]. The organization of this article is as follows. In §2, I will evaluate the partition function of a non-stretching elastica in R^3. Section 3 gives a discussion of the results.

§2. Partition Function of Non-stretching Elastica in R^3

I will denote by C the shape of an elastica (a real one-dimensional curve) immersed in three-dimensional space R^3 and by X(s) = (X^1, X^2, X^3) its affine vector, where L is the length of the elastica, s is a parameter of the curve, and N is a natural number. I consider a closed polymer in R^3; its center axis is the space curve C.
Here I will fix the metric of the curve C induced from the natural metric of R^3: ds = √(dX·dX). As I stated in ref. [3], the reader should not confuse an elastica with a "string" in string theory; they are absolutely different. There is an orthonormal system along C, (n_0, n_1, n_2), with n_0 fixed as the unit tangent vector; n_0 = ∂_s X, where ∂_s := ∂/∂s. We first make them satisfy the Frenet-Serret relation [16], ∂_s n_0 = k n_1, ∂_s n_1 = −k n_0 + τ n_2, ∂_s n_2 = −τ n_1. Here k is the curvature, τ is the Frenet-Serret torsion, and both are functions of s only. We rotate the orthonormal frame by SO(2), fixing a_0 := n_0, so that we obtain (a_0, a_1, a_2) [17-19]. For convenience, we introduce the complex curvature κ := κ_1 + iκ_2, where κ_a are the curvatures with respect to the rotated frame (a_1, a_2). In this article, I will deal with a non-stretching elastica in R^3 with the energy functional E = ∫ ds |κ|^2, (2-7) which I will also call the Bernoulli-Euler functional [3]. It is worth noting that, in general, other potential terms appear in the energy functional of a general elastic rod. For example, there might appear an elastic torsion term, a stretching term and so on. An elastica is usually defined as a curve realized as a stationary point of an energy functional related to an elastic rod, at least in the sense of classical mechanics. Hence the word "elastica" sometimes has ambiguity. Depending upon the potential terms, its shape might belong to an individual class. Thus the reader should not confuse the word "elastica" here with the same word in another context. In this article, the word "elastica" means a curve with the Bernoulli-Euler functional (2-7). The elastica I deal with here is a model of a polymer which can freely rotate around its center axis but does not stretch and is governed by the potential (2-7). In other words, I assume that the force from the elastic torsion is negligible, whereas stretching is not allowed. Furthermore, I will neglect the kinetic term of the elastica.
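The Frenet-Serret quantities above can be illustrated numerically. The sketch below (a test case of my own, not from the paper) recovers the constant curvature k = a/c² and torsion τ = b/c² of an arclength-parameterized helix by finite differences:

```python
import numpy as np

# Helix of radius a and pitch parameter b, parameterized by arclength:
#   X(s) = (a cos(s/c), a sin(s/c), b s / c),  c = sqrt(a^2 + b^2)
# Frenet-Serret theory gives constant k = a/c^2 and tau = b/c^2.
a, b = 2.0, 1.0
c = np.hypot(a, b)
s = np.linspace(0.0, 4 * np.pi, 4001)
X = np.stack([a * np.cos(s / c), a * np.sin(s / c), b * s / c], axis=1)

# Central finite differences for the first three derivatives w.r.t. s.
h = s[1] - s[0]
d1 = np.gradient(X, h, axis=0)
d2 = np.gradient(d1, h, axis=0)
d3 = np.gradient(d2, h, axis=0)

# Evaluate at an interior point, away from the one-sided boundary stencils.
i = len(s) // 2
cross = np.cross(d1[i], d2[i])
curvature = np.linalg.norm(cross) / np.linalg.norm(d1[i]) ** 3
torsion = np.dot(cross, d3[i]) / np.dot(cross, cross)

print(curvature, a / c**2)  # both ~ a / c^2
print(torsion, b / c**2)    # both ~ b / c^2
```

The general formulas k = |X′×X″| / |X′|³ and τ = (X′×X″)·X‴ / |X′×X″|² used here reduce to k = |∂_s²X| when s is arclength, consistent with the frame relations in the text.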
Physically speaking, I consider polymers in a liquid whose temperature is fixed and whose viscosity is very large. I also suppose that each polymer behaves independently and interactions among them are neglected. Let the elastica be closed and preserve its local infinitesimal length even under thermal fluctuation; it does not stretch. Under these conditions, I will consider the partition function of the elastica given as [3] Z = ∫ DX exp(−βE[X]). (2-8) Following the calculation scheme which I proposed in refs. [3,4], I will evaluate the partition function (2-8) under the non-stretching condition. However, there is a trivial affine symmetry of the centroid and direction of the elastica, and the partition function naturally diverges [3]. For an affine transformation (translation and rotation g ∈ SO(3)), X(s) → X_0 + gX(s) (X_0 and g are constant in s), the curvature κ and the Bernoulli-Euler functional (2-7) do not change; this is a gauge freedom, and the energy functional (2-7) has infinitely degenerate states. In the path integral method, I must sum over all possible states; Z then includes the integration over R^3 and naturally diverges. As in the arguments of refs. [3,4], I will regularize it as Z_elastica := Z / Vol(Aff), (2-9) where Vol(Aff) is the volume of the space related to the affine transformations. By this regularization, I can concentrate on the classification of shapes of the elastica. Next I will investigate the condition preserving the local length even under thermal fluctuation. I will expand the affine vector around a point which is an extremum of the Bernoulli-Euler functional (2-7). I will call this point the quasi-classical point, following the semi-classical method in the path integral [3]. In the path integral, I must pay attention to higher perturbations in ε in order to obtain an exact result. Hence I will assume that X is parameterized by a parameter t.
I will express the perturbed affine vector X around an extremum point X_qcl in the partition function (2-9) as in (2-10) [3,4,7,8], with the relation (2-11), where the u's are real functions of s and t. I will regard (2-11) as the virtual dynamics of the curve describing the thermal fluctuation [3]. As in refs. [3,7,8], due to the isometry condition, I require [∂_t, ∂_s] = 0 for X. Since ds_qcl := √(∂_s X_qcl · ∂_s X_qcl) ds, the isometry condition exactly preserves ds ≡ ds_qcl. Here I note that the deformation (2-10) generally contains non-trivial ones through u_a(s) and the "equation of motion" (2-11). Let us compute the non-stretching condition [∂_t, ∂_s]X_qcl = 0. I will introduce "velocities" φ_a as the components of ∂_t X_qcl along the frame (a_0, a_1, a_2). From the condition, I obtain the relation between ∂_t φ_c (φ_c := φ_1 + iφ_2) and a complex "velocity". Here I use the notation κ_qcl := κ_1 + iκ_2 and introduce the pseudo-differential operator ∂_s^{-1}, the formal inverse of ∂_s. In order to find the connection between φ_c and κ, I will also investigate the fluctuation of a_a (a = 1, 2). By the virtual dynamics of a_0, the differentiation of a_a (a = 1, 2) by t must take a form in which v means the rotation in the plane spanned by a_a (a = 1, 2). By the requirement of isometry, the virtual dynamics of a_a is constrained as [∂_t, ∂_s]a_a = 0 (a = 1, 2). (2-17) Hence I obtain the relation [7,8]. Accordingly, I obtain the relation between ∂_t κ and the complex velocity u_c as the "equation of motion" of the deformation satisfying the isometry condition [7,8]. (2-19) I remark that Q^2 is known as the recursion operator of the NLS and CMKdV equations. For this non-stretching deformation, the Bernoulli-Euler functional (2-7) changes as in (2-20). Since I wish to expand the complex curvature κ around the extremum point in the functional space, I will require the extremum condition (2-21) [3]. In this method, I will sum the weight function over all extremum points.
Since they are extremum points rather than stationary points, they need not be realized at zero temperature. Noting the relation ∂_s u_0 = (κ̄_qcl u_c + κ_qcl ū_c)/2 and the above notices, and supposing that κ̄_qcl Q^2(u_c) + κ_qcl Q^2(ū_c) could be regarded as another function κ̄_qcl u′_c + κ_qcl ū′_c of the variation in the normal direction in (2-15), I might find the relation (2-22). I supposed that the deformation is described by one parameter t. However, there is no requirement that I go along with only one parameter t to characterize this system. In the calculation of the partition function, one must sum the weight function over all events whose occurrence is possible. I will search for all possible extremum points. Furthermore, in a microcanonical system at energy E_0, the entropy S of the system is defined as S := log Z|_{E=E_0} and can be regarded as the logarithm of the volume of the functional space. From a primitive consideration, the dimension of the functional space in statistical physics is related to the degrees of freedom corresponding to E_0; the degrees of freedom of the elastica are not finite, and its dimension need not be one. Along the line of the arguments of ref. [3], I will give up expressing the thermal fluctuation using only one parameter t and introduce a sequence of mathematical times t := (t_1, t_3, t_5, ..., t_{2n+1}, ...) in this system so that (2-22) is satisfied. I will redefine the fluctuation (2-10) and introduce an infinite-parameter family, which can sometimes become a finite set as I will show later, where ε is replaced with (1/√β)δt_{2n+1} and ∂_{t_{2n+1}} X_qcl is expressed analogously. The virtual equations of motion for the deformation are expressed as (2-25). Thus (2-25) represents the thermal fluctuation which conserves the local length.
However, it should be noted that there are two manifest symmetries in this system; one exhibits the symmetry of the choice of the origin of s, and the other the symmetry of the U(1) phase of κ; the latter is the same as the choice of s_0 in the integration (2-5). Under the transformation κ(s) → e^{it} κ(s − t̄), the partition function is invariant. I require that the virtual motions include these manifest symmetries: ∂_{t̄_1} κ_qcl = ∂_s κ_qcl and ∂_{t_1} κ_qcl = iκ_qcl. (2-27) As in refs. [3,4], instead of the single deformation parameter, I will assign the infinite-dimensional parameters to those which fulfill this requirement: t := (t, t̄) = (t_1, t_3, ..., t̄_1, t̄_3, ...). In terms of these, I will investigate the moduli space of the partition function (2-9). In other words, I will give a minimal set of virtual equations of motion satisfying the physical requirement that the deformation contains the manifest symmetries, (2-28). They are the CMKdV and NLS hierarchies, respectively. As stated in the introduction, the properties of the CMKdV equation are not well known, as far as I know. It has not yet been concluded that it is a soliton equation. However, even though it might not be integrable, the properties of the CMKdV hierarchy and the CMKdV equation are very regular, as I show below. Here I will comment upon the result of Mohammad and Can [10]. They investigated a "complex MKdV" equation and concluded that it is not a soliton equation. However, their "complex MKdV equation" is a kind of "complexification" of the MKdV equation but differs from (2-32). Thus their result does not directly affect studies on the integrability of our CMKdV equation (2-32). Since the CMKdV and NLS problems are initial value problems, for any regular shape of the elastica satisfying the boundary conditions, the "time" t and t̄ developments of the curvature are uniquely determined.
Furthermore, note that if one starts with a real value of κ, κ remains real under the "time" t̄ development of the CMKdV equation (2-32), whereas for the NLS equation (2-33) the "time" t development becomes complex, owing to the pure imaginary factor in the first term of (2-33). Thus the "times" dt and dt̄ are expected to be orthogonal in the moduli of the CMKdV and NLS equations. The "time" developments of the two equations differ from each other. In other words, for a given regular curve, there exist individual families of solutions of the CMKdV (2-32) and NLS (2-33) equations which contain the given curve as an initial condition. Due to relation (2-34) and its analogue, during the motion in t and t̄, the Bernoulli-Euler functional (2-7) does not change its value. Hence the deformation parameters t and t̄ draw trajectories in the functional space which have the same value of the Bernoulli-Euler functional (2-7). In the case of an elastica immersed in R^2, the thermal fluctuation obeys the MKdV equation and there appears only one hierarchy, the MKdV hierarchy [3]. In this article, the codimension of the immersion of the elastica in R^3 is two, while in the former problem it is one [3]. Accordingly, it is natural that there appear twice the degrees of freedom of the elastica in R^2, namely t and t̄, for the elastica in R^3. I will denote the corresponding measure by dμ^{NLS(g)}_E. For the case of a solution represented by a hyperelliptic function of genus g satisfying (2-37), dμ^{NLS(g)}_E is expressed as dt_3 ∧ dt_5 ∧ ··· ∧ dt_{2g−1}. At each point, there are CMKdV flows. Even though it has not been confirmed that the trajectories of the CMKdV equation are linear and can be regarded as a vector space, it is clear that their cotangent space is flat and can locally be regarded as a vector space. Thus I can locally express the measure dμ^{(g)}_E as dμ^{(g)}_E = dt_3 ∧ dt_5 ∧ ··· ∧ dt_{2g−1} ∧ dt̄_3 ∧ dt̄_5 ∧ ··· ∧ dt̄_{2g−1}. (2-41) Here I remove dt_1 ∧ dt̄_1 from the measure because it exhibits trivial symmetries [3].
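The claim that real curvature data stay real under the CMKdV flow but not under the NLS flow can be checked directly from the right-hand sides. In the sketch below I assume the standard textbook forms of the two equations, since the explicit expressions (2-32) and (2-33) are not reproduced in this excerpt; the exact coefficients are therefore assumptions:

```python
import numpy as np

# One-step comparison of the assumed standard forms:
#   NLS:    dk/dt  = i (k_ss + (1/2) |k|^2 k)
#   CMKdV:  dk/dtb = k_sss + (3/2) |k|^2 k_s
N = 128
s = np.linspace(0.0, 2 * np.pi, N, endpoint=False)
w = 1j * np.fft.fftfreq(N, d=2 * np.pi / N) * 2 * np.pi

def deriv(f, n):
    return np.fft.ifft(w**n * np.fft.fft(f))

kappa = np.sin(s).astype(complex)  # a purely real curvature profile

nls_rhs = 1j * (deriv(kappa, 2) + 0.5 * np.abs(kappa) ** 2 * kappa)
cmkdv_rhs = deriv(kappa, 3) + 1.5 * np.abs(kappa) ** 2 * deriv(kappa, 1)

# The CMKdV velocity is real for real data, so real data stay real under
# its flow; the NLS velocity is purely imaginary for real data.
print(np.max(np.abs(cmkdv_rhs.imag)))  # round-off only
print(np.max(np.abs(nls_rhs.real)))    # round-off only
print(np.max(np.abs(nls_rhs.imag)))    # order one
```

This is exactly the orthogonality of the dt and dt̄ directions invoked in the text: the t̄ flow moves along the real slice while the t flow immediately leaves it.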
(2-41) is a subset of the infinite-dimensional deformation parameters t. Hence the partition function becomes (2-42). By exchanging the coordinates dt_i and dt̄_j of the multi-time t, the volume of Ξ^{(g)}_E is estimated in units of the elastica length L. Since the dimension of the Bernoulli-Euler functional E is inverse length and β/[length] is of order unity, a multiple of the length can be interpreted as a multiple of the inverse temperature β^{−1}. Hence the sum of terms with volumes of different dimensions which appear in (2-42) can be regarded as an expansion in powers of β.

§3. Discussion

In this article, I gave a calculation scheme for the partition function of elasticas in R^3 in terms of solutions of the CMKdV and NLS equations. Even though I could not give a concrete form of the partition function (2-9), I showed that its formal expansion is given by (2-42). Although I regarded this scheme as based upon soliton theory in refs. [3,4], I cannot deny that it might go beyond the integrable system. In fact, the CMKdV equation might be connected with the deformation of the Jacobi variety induced by the NLS equation. Hence I believe that this formulation might shed new light upon theories of immersed objects and their quantization (or evaluation of the partition function). Here I will mention knot configurations. Since the NLS and CMKdV equations are initial value problems, the solution space includes any configuration of a space curve in R^3. In other words, it also includes any knot configuration, and so I need not pay attention to the ambient isotopy [21]. In fact, the trajectories of the NLS equation classify space curves immersed in R^3 rather than ones embedded in R^3; crossings are allowed, and this topology prevents us from distinguishing knot invariants or ambient isotopy.
Since knot configurations are physically discriminated by means of long-range forces such as the electromagnetic force, and the theory in this article does not include such forces, this notion can be physically interpreted. If one wishes to consider knot configurations in this system, it might be related to the gauged NLS equation [22]. Next I will give two comments on the CMKdV equation. First, one might ask why I need the CMKdV equation when the solution space of the NLS equation already includes any configuration of a space curve in R^3. I have been dealing with the measure of the functional space. An uncountable set of R becomes R^2 if the elements are measurable and one can define the R^2 topology on the set. In a similar sense, I need the CMKdV equation in order to introduce a natural measure on the functional space. The solutions of the NLS equation are described in terms of hyperelliptic functions [13,20]. A hyperelliptic curve is embedded in a Jacobi variety. The trajectories of the NLS equation, (t_1, t_3, ..., t_{2g−1}), form the vector structure of the Jacobi variety. The NLS flow, which obeys the NLS equation (2-33), covers a subset of the Jacobi variety. In each Jacobi variety, there exist compact subsets as orbits of the NLS flows. The individual Jacobi varieties are distinguished by points in the Siegel upper half space [20]. Since the CMKdV flows are perpendicular to the NLS flows, the CMKdV flows might connect the different Jacobi varieties of solutions of the NLS equation. Thus I conjecture, as the second comment upon the CMKdV equation, that the moduli of the CMKdV equation might be realized in the Siegel upper half space. This reminds me of the fact that the theta function of an elliptic curve obeys the heat equation over the Siegel upper half plane, which is not integrable in the sense of kinematic theory such as soliton theory.
The integrability of soliton theory is associated with time-inversion symmetry and time-translation symmetry, and the solutions are acted upon by a (continuous) group. On the other hand, the solution space of the heat equation is acted upon only by a semi-group, and thus it is not "integrable" in general. By complexification of the heat equation, the imaginary-time heat equation, i.e., the Schrödinger equation, is a kinematic equation and integrable in the sense of kinematic theory. Thus, even though the CMKdV equation includes integrable solutions in a kinematic region [23], I have a question regarding the role of the CMKdV equation in Jacobi varieties of hyperelliptic curves: is it within the framework of the integrable system? However, at this stage, I cannot explicitly express the role of the CMKdV equation because there are few studies on it. I state that the properties of the CMKdV equation should be investigated. Finally, I will comment upon the higher-dimensional elastica problem, e.g., an elastica in n-dimensional space, C ⊂ R^n. The codimension of the elastica becomes n − 1, and thus instead of t = (t, t̄), there appear (n − 1) sets of infinite-dimensional parameters t = (t^(1), t^(2), ..., t^(n−1)). As a U(1)-bundle appeared in this article, they represent the (n−2)-dimensional inner sphere of the sphere bundle over the elastica C and the normal radial direction of C. Thus there is naturally a principal bundle over C. In other words, one can add a group structure over the equations. Thus the generalized MKdV equation naturally appears [8,24], and it is expected that my computation scheme for the partition function can be extended.
2014-10-01T00:00:00.000Z
1998-01-04T00:00:00.000
{ "year": 1998, "sha1": "956ccdfd9ab1394a03d4a19117a5567f34c03bbf", "oa_license": null, "oa_url": "http://arxiv.org/pdf/solv-int/9801005", "oa_status": "GREEN", "pdf_src": "Arxiv", "pdf_hash": "a98387adc3a43350ef7d0d42518c41681cb3e6e5", "s2fieldsofstudy": [ "Physics" ], "extfieldsofstudy": [ "Mathematics", "Physics" ] }
248987957
pes2o/s2orc
v3-fos-license
When the “heroes” “don’t feel cared for”: The migration and resignation of Philippine nurses amidst the COVID-19 pandemic

www.jogh.org • doi: 10.7189/jogh.12.03011 • 2022 • Vol. 12 • 03011

As socio-economic activities re-open and societies re-emerge from the COVID-19 pandemic, collective and effective COVID-19 responses among nations must be sustained. For this to occur, the needs and challenges of health care workers as essential parts of health systems and as central actors of the collective COVID-19 response must be addressed. While previous discussions and reports have focused on health issues such as burnout and depression [1], it is also essential to look into their rights, freedoms, and living conditions, as these may not only affect their health and well-being, but also their decision to partake in societal COVID-19 responses. In this regard, this paper centres on the importance of upholding the rights, freedoms, and just living conditions of health care workers as exemplified by the situation of Filipino nurses in the Philippine health care system amidst the COVID-19 pandemic. The world faced the pandemic with a global shortage of nurses of about 5.9 million [2]. Asia is among the regions with the lowest density of nurses in the world, despite having countries that largely supply nurses in other regions [2].
The Philippines alone supplied about 240 000 nurses to Organisation for Economic Co-operation and Development (OECD) countries, with an outflow of 15 000 to 20 000 nurses per year. This made the Philippines the largest supplier of nurses to OECD countries [2]. The high nurse-to-patient ratio and low wages were among the common reasons for Filipino nurses to work in other countries [2]. While it gave rise to a global diaspora of Filipino nurses, it also resulted in a low number and unequal distribution of nurses in the Philippines [2]. This migration and resignation of Filipino nurses from the Philippine health care system may have accelerated during the pandemic.

FILIPINO NURSES AMIDST THE COVID-19 PANDEMIC

One year into the pandemic, recent news reports in the Philippines highlighted that Filipino nurses are resigning to work abroad. In the first two to three weeks of October 2021 alone, it was noted that about 5% to 10% of nurses working in private hospitals had resigned [3]. In another 2021 news report, a hospital director in a city mentioned that their nursing staff had decreased from 200 to 63 over the past year [4].

Department of Sociology and Behavioral Sciences, De La Salle University, Manila City, Philippines

Overall, about 40% of nurses in private hospitals have resigned since the pandemic began [3]. Thus, hospitals in the Philippines may be understaffed due to the dwindling number of nurses during the pandemic. Low wages remained among the most commonly cited reasons for resignation.
An entry-level nurse working in a public hospital starts with a monthly salary of about PHP33 575 (about US$670), while those working in private hospitals may start with as little as PHP8000 (about US$160) [4]. These wages may not be enough to cover the cost of living in the Philippines. For example, the estimated cost of living in Metro Manila, the largest Philippine metropolitan area, is PHP50 798 (about US$1080) [5]. Some of the nurses even go to work without benefits and hazard pay, despite the heightened health risks and threats during the pandemic [4].

GOVERNMENT RESPONSES TO THE PLIGHT OF NURSES

Despite the need for livable wages and just benefits for Filipino nurses, the Philippine government responded by banning and limiting them from living and working abroad, so that they could serve as a "reserve force" as the country navigates through the pandemic. This deployment ban was widely questioned due to its possible unconstitutionality, its violation of the rights to travel and to earn a living wage, and its negative effect on the Philippine economy [6]. Nonetheless, some improvements have been made, such as the additional PHP500 (about US$10) daily allowance for health care workers who care for patients with COVID-19. However, its implementation has been met with confusion, dismay, and disappointment [7,8]. For instance, a 2020 news report showed that the daughter of a nurse who died from COVID-19 was appalled and dismayed when she claimed her mother's hazard pay amounting to PHP7000 (about US$140), since she had expected to receive PHP30 000 (about US$600) [7]. This was because the previously announced government daily allowance had been reduced to PHP64 (about US$1.5) after it was adjusted for their city's health budget and mandated deductions [7]. Amidst these news reports and the resignation of nurses, several health care worker groups have also highlighted that they were being forced to work long hours and had an inconsistent supply of personal protective equipment (PPE) [7].
A year after, the situation had seemingly remained the same, as disclosed by health care groups, with nurses forgoing their meals and bathroom breaks to save on PPEs. Moreover, it was reported that the promised additional compensation for health care workers had not been paid out. To them, "their working conditions are no longer humane" [8]. Thus, Filipino nurses seemed to be domestic captives in their own country. The barriers to escape are generally invisible and take the form of economic, social, and legal subordination.

RESIGNATION AND MIGRATION: SENTIMENTS AND RESPONSES OF FILIPINO NURSES

Given the chronic understaffing, low wages, unsafe working conditions, and deployment bans, Filipino nurses have expressed their exhaustion and dismay with statements such as "We don't feel cared for" and "We feel exhausted...but we always keep in mind that we have to help our people because...no one else will" [3,4]. Eventually, some of them may leave the profession or try to go abroad since "it's really not worth being a nurse at home" [4]. This seemed to be the sentiment of nurses and other health care worker groups who announced their mass resignation from the Philippine health care system amidst the COVID-19 pandemic [8]. While some were able to migrate, remaining nurses in the Philippines, as seen in private hospitals [4], may leave their profession to escape their seeming domestic captivity and socio-economic hardships amidst the COVID-19 pandemic. Thus, Filipino nurses may be free only when they no longer work as "nurses".

THE EFFECT OF THE RESIGNATION AND MIGRATION OF FILIPINO NURSES ON THE LOCAL COVID-19 RESPONSE

This flight of health care workers from health care institutions in the Philippines has severely affected the local COVID-19 response [3,4].

Photo: Filipino nurses' daily routine during the COVID-19 pandemic (from Rowalt Alibudbud's personal collection, used with permission).

In 2021, hospitals in the country had already started to downsize their operations,
not because of the lack of facilities or health equipment, but because of the lack of health care workers. Thus, despite the decreasing trends of COVID-19 in the country, hospitals remained fully occupied [3,4]. If allowed to worsen, the health care system may be overwhelmed by a new COVID-19 wave.

HONOUR AND VALUE AS "HEROES"

Generally, while health care workers have been hailed as "heroes" in the recent pandemic [9], honour without just wages, adequate staffing, and livable conditions will not sustain the responses to COVID-19. Given this, governments, policymakers, and health care institutions must be ever cognizant of the rights and needs of nurses and other health care workers. If these are not addressed, health care workers, as exemplified by the resignation of Filipino nurses, may leave their profession and institutions to seek opportunities where their work is valued and their rights are upheld. As a result, health care systems may collapse in the face of a tremendously challenging situation such as the COVID-19 pandemic. There is, therefore, a need to reflect the health care heroes' honour and value in the specific programs and policies implemented amidst the pandemic. Overall, the resignation and migration of Filipino nurses amidst the COVID-19 pandemic may not only be an issue of health and well-being but also one of rights and justice, and it must be addressed.
Dysbiosis of oral microbiota and its association with salivary immunological biomarkers in autoimmune liver disease

The gut microbiota has recently been recognized to play a role in the pathogenesis of autoimmune liver disease (AILD), mainly primary biliary cholangitis (PBC) and autoimmune hepatitis (AIH). This study aimed to analyze and compare the composition of the oral microbiota of 56 patients with AILD and 15 healthy controls (HCs) and to evaluate its association with salivary immunological biomarkers and gut microbiota. The subjects included 39 patients with PBC and 17 patients with AIH diagnosed at our hospital. The control population comprised 15 matched HCs. Salivary and fecal samples were collected for analysis of the microbiome by terminal restriction fragment length polymorphism of 16S rDNA. Correlations between immunological biomarkers measured by Bio-Plex assay (Bio-Rad) and the oral microbiomes of patients with PBC and AIH were assessed. Patients with AIH showed a significant increase in Veillonella with a concurrent decrease in Streptococcus in the oral microbiota compared with the HCs. Patients with PBC showed significant increases in Eubacterium and Veillonella and a significant decrease in Fusobacterium in the oral microbiota compared with the HCs. Immunological biomarker analysis showed elevated levels of inflammatory cytokines (IL-1β, IFN-γ, TNF-α, IL-8) and immunoglobulin A in the saliva of patients with AILD. The relative abundance of Veillonella was positively correlated with the levels of IL-1β, IL-8 and immunoglobulin A in saliva and the relative abundance of Lactobacillales in feces. Dysbiosis of the oral microbiota is associated with inflammatory responses and reflects changes in the gut microbiota of patients with AILD. Dysbiosis may play an important role in the pathogenesis of AILD.

Introduction

Primary biliary cholangitis (PBC) and autoimmune hepatitis (AIH) are classically viewed as distinct autoimmune liver diseases (AILDs).
PBC is a progressive AILD characterized by portal inflammation, immune-mediated destruction of the intrahepatic bile ducts, and the presence of highly specific anti-mitochondrial antibodies in serum [1,2]. AIH manifests as chronic liver inflammation of an unknown cause. It generally affects young to middle-aged females and is associated with the presence of autoantibodies and hypergammaglobulinemia [3]. AILD is thought to be triggered by environmental factors in genetically susceptible individuals. Genome-wide association and murine model studies have expanded our knowledge of AILD; however, the pathogenesis of the disease remains obscure. The oral cavity is a large reservoir of bacteria of more than 700 species or phylotypes and is profoundly relevant to host health and disease [4][5][6]. The role of oral and gut microbiota in the pathogenesis of immune-related diseases has been highlighted in autoimmune diseases, such as autoimmune encephalomyelitis, rheumatoid arthritis, and inflammatory bowel disease [7][8][9][10][11][12][13]. A previous report revealed that there was evidence of pervasive immune-microbiota interface changes in the saliva of patients with cirrhosis similar to that found in stool [14]. Recently, culture-independent techniques have revolutionized the knowledge of the gut and oral microbiota. These techniques are based on sequence divergences of the small subunit ribosomal ribonucleic acid (16S rRNA) and can demonstrate the microbial diversity of the gut and oral microbiota, providing qualitative as well as quantitative information on bacterial species and changes in the gut and oral microbiota in health and disease. It is increasingly recognized that the composition of the gut microbiota plays a critical role in influencing the predisposition to PBC and AIH [15][16][17][18][19][20]. However, direct evaluation of the oral microbiome has not been performed in AILD.
This study aimed to analyze and compare the composition of the salivary microbiota between patients with AILD and healthy controls (HCs) and to evaluate its association with oral immunological biomarkers.

Study population

This study included 39 patients with PBC and 17 with AIH who received a diagnosis at Fukushima Medical University Hospital and Hanawa Kosei Hospital between 1996 and 2016, as well as 15 HCs. As HCs, normal serum was collected from staff members and their families in our department. The diagnosis of AIH was based on the revised and simplified International Autoimmune Hepatitis Group (IAIHG) scoring systems [21][22][23]. Patients with other causes of chronic liver disease, particularly alcohol abuse, chronic hepatitis B, or hepatitis C, were excluded from the AILD patient group. Patients were diagnosed with PBC if they met at least two of the following three criteria: 1) chronic elevation of the cholestatic liver enzymes alkaline phosphatase (ALP) and gamma-glutamyl transpeptidase (γGTP) for at least six months; 2) presence of serum anti-mitochondrial antibody (AMA) detected by either indirect immunofluorescence or ELISA using commercially available kits; and 3) typical histological findings from biopsied liver specimens [24]. Twenty-nine patients with PBC had liver biopsies. The data used for analysis included patient background parameters (age, sex, observation period, body mass index (BMI)), clinical parameters at sample collection (aspartate aminotransferase (AST), alanine transaminase (ALT), ALP, γGTP, total bilirubin (TB), IgG, IgM, anti-nuclear antibodies (ANA), AMA, fibrosis (FIB)-4 index), histological parameters at presentation (Scheuer stage for PBC, fibrosis stage for AIH) and therapeutic methods. The histological findings of PBC were graded according to the Scheuer staging system [25].
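Among the clinical parameters listed above, the FIB-4 index is a derived score rather than a direct measurement. A minimal sketch of the standard formula follows; the function name and the illustrative input values are ours, not taken from this study's patient data:

```python
import math

def fib4_index(age_years, ast_u_l, alt_u_l, platelets_10e9_l):
    """FIB-4 = (age [years] * AST [U/L]) / (platelets [10^9/L] * sqrt(ALT [U/L]))."""
    return (age_years * ast_u_l) / (platelets_10e9_l * math.sqrt(alt_u_l))

# Illustrative values only: a 63-year-old with AST 30 U/L, ALT 27 U/L,
# and a platelet count of 200 x 10^9/L.
score = fib4_index(63, 30, 27, 200)
```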
The fibrosis stage of AIH was evaluated according to the METAVIR scoring system [26] and graded as follows: F0, no fibrosis; F1, stellate enlargement of portal tracts without septum formation; F2, enlargement of portal tracts with rare septum formation; F3, numerous septa without cirrhosis; and F4, cirrhosis. Nine patients with AIH (53%) and 11 patients with PBC (28%) were concomitantly using proton pump inhibitors (PPIs). Sjögren's syndrome was associated with 2 cases of AIH and 1 case of PBC. The patients with AIH were classified into a normal liver function group (AST and ALT ≤33 U/L) and an abnormal liver function group (AST or ALT >33 U/L). The patients with PBC were classified into a normal liver function group (ALP ≤359 U/L and γGTP ≤50 U/L) and an abnormal liver function group (ALP >359 U/L or γGTP >50 U/L). Exclusion criteria were as follows: (i) antibiotic use within the past 3 months; (ii) otolaryngology consultation due to sinusitis, tonsillitis or tonsilloliths within the past 3 months; (iii) use of gargling solution on the day of screening; and (iv) periodontitis.

Sample collection and DNA extraction

All subjects underwent stool and saliva collection on the same day. Unstimulated saliva samples collected from subjects were immediately stored at -20˚C until use. The saliva samples were homogenized with zirconia beads in a 2.0-mL screw cap tube with a FastPrep 24 Instrument (MP Biomedicals, Santa Ana, CA) at 5 m/s for 90 sec. DNA was extracted from 100 μL of the saliva and purified with the MORA-EXTRACT DNA extraction kit (Kyokuto Pharmaceuticals, Tokyo, Japan) in accordance with the manufacturer's instructions. The DNA was eluted with 100 μL of TE (10 mM Tris-HCl, 1 mM EDTA, pH 8.0). Fecal samples were immediately suspended in a solution containing 100 mM Tris-HCl (pH 9.0), 40 mM Tris-EDTA (pH 8.0), 4 M guanidine thiocyanate, and 0.001% bromothymol blue.
An aliquot of 1.2 mL of the suspension was homogenized with zirconia beads in a 2.0-mL screw cap tube with a FastPrep 24 Instrument (MP Biomedicals) at 5 m/s for 2 min and placed on ice for 1 min. After centrifugation at 5000 × g for 1 min, DNA was extracted from 200 μL of the suspension using an automatic nucleic acid extractor (Precision System Science, Chiba, Japan). MagDEA DNA 200 (GC) (Precision System Science) was used as the reagent for automatic nucleic acid extraction [27,28].

Terminal restriction fragment length polymorphism (T-RFLP)

T-RFLP analyses were performed by TechnoSuruga Laboratory (Shizuoka, Japan). T-RFLP analyses for salivary samples were performed as previously described [29]. The primers used for the PCR amplification of 16S rRNA gene sequences were 27F (5'-AGAGTTTGATCCTGGCTCAG-3') and 1492R (5'-GGTTACCTTGTTACGACTT-3'). Primer 27F was labeled at the 5' end with 6-carboxyfluorescein (6-FAM), which was synthesized by Thermo Fisher Scientific. The 16S rDNA was amplified from human saliva-extracted DNA with HotStarTaq DNA Polymerase (QIAGEN, Hilden, Germany) on a Thermal Cycler Dice (Takara, Shiga, Japan). The amplification program was as follows: preheating at 94˚C for 15 min; 30 cycles of denaturation at 94˚C for 30 s, annealing at 50˚C for 30 s, and extension at 72˚C for 2 min; and finally, a terminal extension at 72˚C for 10 min. Amplified DNA was verified by the electrophoresis of PCR mixture aliquots (2 μL) in 1.0% agarose in TAE buffer. The amplified DNA was purified with a MultiScreen PCR 96 Filter Plate (Millipore, Billerica, MA). The purified PCR product (3 μL) was digested with 10 U of Fast Digest MspI (Thermo Fisher Scientific) in a total volume of 15 μL at 37˚C for 10 min. The restriction digestion products (0.5 μL) were mixed with 10 μL of deionized formamide and 0.5 μL of DNA fragment length standard. The standard size marker was MapMarker X-Rhodamine Labeled 50-1000 bp (BioVentures, Murfreesboro, TN).
The samples were denatured at 95˚C for 2 min and then placed immediately on ice. The length of each T-RF was determined on an ABI PRISM 3130xl Genetic Analyzer (Thermo Fisher Scientific), and the length and peak area were determined using the genotyping software GeneMapper (Thermo Fisher Scientific). Fragment sizes were estimated using the Local Southern method in GeneMapper software (Thermo Fisher Scientific). T-RFs with a peak height of less than 50 fluorescence units were excluded from the analysis. Fragments were resolved to one base pair by manual alignment of the size standard peaks from different electropherograms. Predicted T-RFLP patterns of the 16S rDNAs of known bacterial species were obtained using the sequences [29]. T-RFLP analyses for fecal samples were performed as previously described [30,31]. The 16S rRNA sequences were amplified from human fecal DNA by using a fluorescently labeled 516F primer (5'-TGCCAGCAGCCGCGGTA-3'; E. coli positions 516-532) and 1510R primer (5'-GGTTACCTTGTTACGACTT-3'; E. coli positions 1510-1492). The 5'-ends of the forward primers were labeled with 6-carboxyfluorescein (6-FAM), which was synthesized by Thermo Fisher Scientific. The PCR amplifications of DNA samples (10 ng of each DNA) were performed according to a protocol described by Nagashima et al. The purified PCR products (2 μL) were digested with 10 U of Fast Digest BslI (Thermo Fisher Scientific) at 37˚C for 10 min. The length of each T-RF was determined with an ABI PRISM 3130xl Genetic Analyzer (Thermo Fisher Scientific). The standard size marker was MapMarker X-Rhodamine Labeled 50-1000 bp (BioVentures). The T-RFs were divided into 29 operational taxonomic units (OTUs). The OTUs were quantified as the percentage of the individual OTU area relative to the total OTU area, expressed as the percentage of the area under the curve (%AUC).
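The %AUC quantification described above is a simple normalization of peak areas. A minimal sketch, assuming a mapping from OTU labels to raw peak areas; the numeric values below are invented for illustration (only the OTU labels come from the text):

```python
def otu_percent_auc(peak_areas):
    """Express each OTU's peak area as a percentage of the total area (%AUC)."""
    total = sum(peak_areas.values())
    return {otu: 100.0 * area / total for otu, area in peak_areas.items()}

# Toy peak areas for three of the OTUs named later in the text (values invented):
profile = otu_percent_auc({"OTU301": 3000, "OTU166": 2000, "OTU283": 5000})
```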
The bacteria were predicted for each classification unit, and the corresponding OTU was identified according to the reference Human Fecal Microbiota T-RFLP profiling (https://www.tecsrg.co.jp/t-rflp/t_rflp_hito_OTU.html).

Statistical analysis

The results are expressed as the mean ± SD. The Mann-Whitney U-test was used to compare the bacterial abundance or cytokine levels between the HC and AILD groups. Correlations between the bacterial abundance and immunological markers in saliva or the bacterial abundance in feces were assessed using Spearman's rank correlation coefficient. The difference in the ratio of bacterial groups was examined by the χ² test. Shannon diversity indices were used to compare the diversity of the T-RFLP profiles between the HC and AILD groups. The T-RFLP profiles were clustered by hierarchical cluster analysis and analyzed by principal component analysis (PCA). Univariate and multivariate logistic regression analyses were used to assess microbiomes associated with AILD patients. All statistical analyses were performed using Prism 6.0 software (GraphPad Software, Inc.) and JMP Pro 13.1 (SAS Institute Inc., Cary, NC, USA). P<0.05 was considered significant.

Ethics statement

The study was approved by the ethics committee of Fukushima Medical University School of Medicine. Written informed consent was obtained from all subjects.

Table 1 shows the characteristics of the matched HCs and the patients with PBC or AIH. Patients with PBC (mean age, 63 years; male:female ratio, 5:34) had an ALT level of 27 ± 16 U/L, an ALP level of 321 ± 111 U/L, and a γGTP level of 59 ± 44 U/L. In all, 26 patients were treated with ursodeoxycholic acid (UDCA), and 11 were treated with UDCA and bezafibrate. Patients with AIH (mean age, 60 years; male:female ratio, 2:15) had an ALT level of 19 ± 10 U/L and an IgG level of 1473 ± 821 mg/dL; 11 patients were treated with prednisolone, and 4 were treated with prednisolone and azathioprine.
No significant differences were found between the PBC and AIH groups with respect to age, sex or BMI. Fig 1A shows the relative abundance of the bacterial composition at the phylum level in each sample from subjects in the AIH, PBC and HC groups. The most dominant phylum was Firmicutes in the AIH, PBC and HC groups. Indeed, the average relative abundance of the phylum Firmicutes in the AIH, PBC and HC groups was 25.1%, 29.8% and 27.8%, respectively. No significant differences at the phylum level in Firmicutes, Bacteroidetes and Proteobacteria were observed among the groups. Analysis at the phylum level showed that the relative abundance of Fusobacteria was significantly lower in both the AIH and PBC groups than in the HC group (P<0.05).

Analysis of the salivary microbiota of the PBC, AIH and HC groups based on the T-RFLP profiles

T-RFLP analysis of the salivary microbiota in all 71 subjects revealed 78 peaks upon digestion with MspI. The relative amounts of several T-RFs in the AIH and PBC groups were significantly different from those in the HC group. When T-RFs were digested by MspI, there was a significantly higher frequency of genus Veillonella (OTU301) and genus Eubacterium (OTU166) and a lower frequency of genus Fusobacterium (OTU283) in the PBC group than in the HC group (Fig 1B). Moreover, there was a significantly higher frequency of genus Veillonella and a lower frequency of genus Streptococcus in the AIH group than in the HC group.

Cytokine levels in the saliva of HCs and patients with PBC or AIH

Given these changes in the salivary microbiota, we subsequently enrolled patients with AIH or PBC and age-matched HCs to study the inflammatory milieu in the saliva (Fig 2). None of the HCs were on PPIs or had diabetes or other chronic diseases. We found a significantly higher inflammatory response in AILD patients than in HCs, as shown by significantly higher IL-1β, IL-8, TNF-α, IFN-γ, MIP-1β and secretory IgA levels. No differences were observed in oral inflammatory markers between patients with/without PPI use (S1 Fig).
In most samples, IL-2, IL-5, IL-10, IL-13, and GM-CSF were undetectable. There were no differences in the levels of IL-4, IL-6, IL-7, IL-12p70, IL-17, G-CSF, MCP-1, or lysozyme between AILD patients and HCs (S1 Table).

Correlation between the relative abundance of predominant genera and the level of immunological biomarkers in the saliva of AILD patients

We searched for correlations between the relative abundance of dominant bacterial genera and the measured biomarkers in the saliva of 56 patients with AILD. The results are shown in Table 2. The changes in the gut microbiota composition in AILD were characterized by an increase in the order Lactobacillales and by a decrease in the genus Clostridium subcluster XIVa. We next examined correlations between the relative abundance of the bacterial composition in salivary samples and that in fecal samples from patients with AILD. The results are shown in Table 3.

Correlation between the oral and gut microbiota in AILD patients

The relative abundance of Lactobacillales in feces positively correlated with the relative abundance of Veillonella in saliva from patients with AIH, whereas the relative abundance of Bifidobacterium in feces negatively correlated with the relative abundance of Veillonella in saliva from patients with PBC. Moreover, the relative abundance of Clostridium subcluster XIVa in feces positively correlated with the relative abundance of Neisseria and negatively correlated with the relative abundance of Eubacterium in saliva from patients with AIH. By contrast, the abundance of Streptococcus in saliva positively correlated with the abundance of Clostridium cluster XVIII and negatively correlated with the relative abundance of Bifidobacterium in feces from patients with AIH.

Associations between clinical variables and the oral microbiota

We investigated the effects of subphenotypes on the oral microbiota in AILD patients (S2 Fig).
We examined whether sex bias in AILD patients was associated with the oral microbiome. The relative abundance of the bacterial composition at the genus level in salivary samples was not significantly related to sex in this study (S2A and S2B Fig). The patients were divided into advanced and non-advanced stages based on the Scheuer stage and the fibrosis stage. There was no significant difference between the two stages in the relative abundance of PBC- and AIH-associated taxa (S2C and S2D Fig). There was a significantly higher frequency of genus Neisseria (OTU496) in salivary samples obtained from AIH patients with abnormal liver function than in those obtained from AIH patients with normal liver function, whereas there was a significantly lower frequency of genus Neisseria in PSL-using AIH patients than in non-PSL-using AIH patients. Moreover, there was a significantly higher frequency of genus Streptococcus (OTU556, 563) in UDCA 600-900 (mg/day) users than in UDCA 0-300 (mg/day) users among AIH patients. There was no significant difference between PBC patients who were treated with or without medications such as UDCA and bezafibrate (S2E-S2J Fig). Moreover, we next investigated the effects of subphenotypes on the gut microbiota in AILD patients (S3 Fig). There was a significantly lower frequency of the genus Clostridium cluster IX in fecal samples obtained from female patients than in fecal samples obtained from male patients. There were no significant differences in the relative abundance of the bacterial composition with respect to sex, disease stage, or UDCA and PSL use among AIH patients.
Principal component analysis (PCA)

We created a distribution map based on the PCA to visualize the difference in T-RFLP profiles of the oral microbiota between the AILD and HC groups and found that the first and second principal components explained 43.9% of the variance (Fig 4A and 4B). Subjects of Cluster I were localized in the left part of this map, and subjects of Cluster II were localized in the right part (Fig 4A). The PCA showed a relatively weak clustering of the oral microbiota between the AILD and HC groups (Fig 4B). The AILD-related genera included Veillonella, while Fusobacterium was more related to the HC samples. The Streptococcus, Eubacterium and Neisseria genera were approximately between the AILD and HC groups. Fig 4C presents the PCA of the gut microbiota in the AILD and HC groups. The two components explained 43.5% of the variance. The gut microbiota showed more clustering in the AILD group than in the HC group. At the genus level, we found no significant difference between the groups in regard to the Shannon diversity index of the oral microbiota (Fig 4D, P = 0.594) or the gut microbiota (Fig 4E, P = 0.1325).

Univariate and multivariate analyses of microbiota associated with AILD patients

We investigated the association between microbial flora and AILD using univariate and multivariate analyses (Table 4). Univariate analysis showed significant associations between AILD and an increased relative abundance of Veillonella in the oral microbiota, as well as an increased relative abundance of Lactobacillales and a decreased relative abundance of Clostridium subcluster XIVa in the gut microbiota. The subsequent multivariate analysis showed that the genus Veillonella in the oral microbiota (odds ratio [OR]: 1.49, 95% confidence interval [CI]: 1.14-1.94, P = 0.003) was independently associated with AILD.
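The Shannon diversity index compared between groups above is computed from the relative abundances as H' = −Σ p_i ln p_i. A minimal sketch, assuming a list of (relative) OTU abundances as input; the example abundance vector is illustrative, not from the study's data:

```python
import math

def shannon_index(abundances):
    """Shannon diversity H' = -sum(p_i * ln p_i), skipping zero-abundance OTUs."""
    total = sum(abundances)
    ps = [a / total for a in abundances if a > 0]
    return -sum(p * math.log(p) for p in ps)

# Four equally abundant OTUs give the maximum diversity H' = ln(4) ~ 1.386
h = shannon_index([25.0, 25.0, 25.0, 25.0])
```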
Discussion

Until recently, there have been almost no studies exhaustively examining the oral microbiota at the genus level in subjects with AILD. In this study, we used T-RFLP analysis and found that the oral microbiota T-RFLP profile of subjects with AILD was significantly different from that of HCs. Our data indicated a significant increase in the genus Veillonella in the salivary microbiota of AILD patients; its relative abundance was almost equivalent to the reduced abundance of Streptococcus, which is the most abundant genus in the healthy salivary microbiota. The genus Veillonella is an anaerobic gram-negative coccus that is part of the normal flora of the human mouth and gastrointestinal tract [32]. The main habitats of Veillonella are the tongue, buccal mucosa, and saliva [33]. The Veillonella genus has recently been associated with primary sclerosing cholangitis and PBC [19,34]. Veillonella is associated with poor oral health and causes many human oral infectious diseases, such as periodontitis [35]. Veillonella produces a large amount of lipopolysaccharide, which induces cytokine secretion [36]. In this study, our data indicated that the abundance of Veillonella positively correlated with the levels of pro-inflammatory cytokines, such as IL-1β, IL-6, IL-8, and IL-12p70, in the saliva of patients with AIH. These data suggest that the increase in Veillonella is closely related to abnormal physiology in AILD patients. The subjects could be divided into two groups based on cluster classification using the T-RFLP profiles of their saliva. Approximately 61% of subjects with AILD were categorized into the Cluster II microbiota, while approximately 67% of the HCs were categorized into the Cluster I microbiota. The characteristics of Cluster II, comprising most subjects with AILD, included a lower frequency of the genera Streptococcus and Fusobacterium and a higher frequency of the genus Veillonella.
A previous study showed that the genus Veillonella was significantly higher in the salivary microbiota of inflammatory bowel disease (IBD) patients than in that of HCs, while the genus Streptococcus was significantly lower in the salivary microbiota of IBD patients than in that of HCs [12]. Moreover, the relative abundance of Streptococcus negatively correlated with the levels of IL-1β and IL-8, while that of Veillonella tended to positively correlate with the levels of cytokines and secretory IgA in the saliva of IBD patients. In this study, the relative abundance of Streptococcus negatively correlated with the levels of cytokines, such as IL-1β and IL-8, while the relative abundance of Veillonella positively correlated with the salivary IgA level of patients with PBC. Multivariate analysis showed that the increased relative abundance of Veillonella in the oral microbiota was independently associated with AILD. Recent studies have reported that PPIs affect both the gut and oral microbiota [37,38]. After administration of PPIs for 4 weeks, alterations of the oral carriage microbiome, with bacterial overgrowth (Streptococcus) and decreases in distinct bacterial species (Neisseria, Veillonella), were observed in healthy volunteers. In this study, our estimates revealed that the oral microbiota of PPI users was similar to that of non-PPI users among patients with AILD. Reduced salivation is a major clinical feature of most cases of Sjögren's syndrome. Reduced saliva may lead to changes in the salivary microbiota. A recent report indicated that the genera Streptococcus and Veillonella were significantly higher in patients with Sjögren's syndrome than in controls [39]. In this study, Sjögren's syndrome was associated with 1 case of PBC (PBC36) and 2 cases of AIH (AIH4, AIH10).
Indeed, the relative abundance of Veillonella was high in AILD patients with Sjögren's syndrome, but even after excluding those patients, the relative abundance of Veillonella was significantly higher in AILD patients than in HCs (PBC, 8.4% vs 4.6%, p<0.0005; AIH, 9.8% vs 4.6%, p<0.001). Saliva contains a variety of components, such as cytokines, immunoglobulins, and antimicrobial proteins, involved in host defense mechanisms for maintaining oral and systemic health [40]. Alterations in the salivary microbiota of cirrhosis patients with hepatic encephalopathy suggest an inflammatory immune response in the oral cavity of these patients, just as intestinal inflammation is associated with the gut microbiota in cirrhosis [14]. In this study, the levels of many pro-inflammatory markers, such as IL-1β, IFN-γ, and secretory IgA, were significantly higher in both AIH and PBC patients than in HCs. A previous study reported that IL-6 and IFN-γ levels were significantly increased in the saliva of PBC patients. Moreover, the IL-6 and IFN-γ levels in the saliva of PBC patients are positively associated with those in the sera of those patients [41]. Similarly, elevated levels of salivary IL-1β, IL-6, and secretory IgA in cirrhosis patients have also been reported [14]. However, it is unknown whether the inflammatory state in the oral cavity of AILD patients is the cause or a consequence of imbalances in the salivary microbiota, and whether the oral cavity or the gut immune response is more responsible for the observed dysbiosis of the oral microbiota. In this study, the changes in gut microbiota composition in AILD were characterized by an increase in the order Lactobacillales and by a decrease in the genus Clostridium subcluster XIVa. Previous reports have revealed that Lactobacillus species were more prevalent and that Clostridia were less frequent in the gut microbiota of patients with Behcet's disease than in HCs [42].
Lactobacillus species had relatively large effect sizes in the Behcet's disease microbiota, which is concordant with the inductive effect of Lactobacillus on systemic inflammation. Animal studies using germ-free mice reported that some bacterial species separately promoted arthritis by activating Th17 cells [43,44]. Indeed, oral intake of Lactobacillus rapidly induced arthritis in genetically modified germ-free mice [43]. Clostridium species have been suggested to activate regulatory T cells (Tregs) and thereby modulate the mucosal immune system through the production of short-chain fatty acids [45]. Lactobacillus species are major lactate-producing and pH-regulating bacteria that consume hexose sugars [46]. In contrast to these lactate producers, several genera of the order Clostridiales can utilize lactate and produce butyrate or propionate [47,48]. Interestingly, our study suggested that while the relative abundance of Lactobacillales in feces positively correlated with the relative abundance of Veillonella in saliva from patients with AIH, the relative abundance of Bifidobacterium in feces negatively correlated with the relative abundance of Veillonella in saliva from patients with PBC. Moreover, the relative abundance of Clostridium subcluster XIVa in feces positively correlated with the relative abundance of Neisseria and negatively correlated with the relative abundance of Eubacterium in saliva from patients with AIH. Dysbiosis of the oral microbiota reflects changes in the gut microbiota in patients with AILD. Recent studies have shown that Clostridia clusters XIVa and IV derived from human feces have the potential to induce Foxp3+ Tregs and are able to suppress inflammatory conditions such as colitis, experimental autoimmune encephalomyelitis, and multiple sclerosis [49][50][51]. AIH is predominantly associated with Th1 responses and a decreased function and number of Tregs [52,53].
Dysbiosis of the oral microbiota is directly and/or indirectly related to the gut microbiota and may be correlated with disease onset. We examined the effects of subphenotypes on the oral microbiota in AILD patients. There were no significant differences in the relative abundance of the oral microbiota with respect to sex and disease stage among AILD patients. A previous study revealed that microbial dysbiosis in PBC was partially relieved after UDCA treatment [19]. In this study, there was no significant difference between PBC patients treated with and those treated without medications such as UDCA and bezafibrate; most PBC patients were treated with UDCA at sample collection. There was a significantly higher frequency of the genus Neisseria in salivary samples obtained from AIH patients with abnormal liver function than in those obtained from AIH patients with normal liver function, but there was a significantly lower frequency of the genus Neisseria in PSL users than in non-PSL users among AIH patients. Thus, Neisseria may be involved in the exacerbation of AIH. Our study has some limitations. First, the sample population was relatively small. Second, we did not evaluate changes in the salivary and fecal microbiota that might have occurred due to treatment in AILD patients.

Conclusions

This may be the first report demonstrating dysbiosis of the oral microbiota in patients with AIH or PBC. These findings suggest that the oral microbiota may play different roles in the pathophysiology of AIH and PBC. Further studies of the establishment and modification of the oral microbiota structure may contribute to the development of a therapeutic strategy for patients with AILD.
Spitzer/MIPS 24um Observations of Galaxy Clusters: An Increasing Fraction of Obscured Star-forming Members from z=0.02 to z=0.83

We study the mid-infrared properties of 1315 spectroscopically confirmed members in eight massive (M>5x10^14 Msun) galaxy clusters covering the redshift range from 0.02 to 0.83. The selected clusters all have deep Spitzer MIPS 24um observations, Hubble and ground-based photometry, and extensive redshift catalogs. We observe for the first time an increase in the fraction of cluster galaxies with mid-infrared star formation rates higher than 4 solar masses per year from 3% at z=0.02 to 13% at z=0.83. This increase is reproduced even when considering only the most massive members (Mstars>4x10^10 Msun). The 24 micron observations reveal stronger evolution in the fraction of blue/star-forming cluster galaxies than color-selected samples: the number of red but strongly star-forming cluster galaxies increases with redshift, and combining these with the optically-defined Butcher-Oemler members increases the total fraction of blue/star-forming cluster galaxies to ~30% at z=0.83. These results, the first of our Spitzer/MIPS Infra-Red Cluster Survey (SMIRCS), support earlier studies indicating the increase in star-forming members is driven by cluster assembly and galaxy infall, as is expected in the framework of hierarchical formation.

INTRODUCTION

Butcher & Oemler (1978, 1984) observed that galaxy clusters at intermediate redshift have a higher fraction of members with blue optical colors than clusters in the local universe, thus providing a key piece of evidence supporting galaxy evolution. This increase in blue members with redshift, named the Butcher-Oemler (BO) effect, was intensely debated for two decades (e.g. Mathieu & Spinrad 1981; Dressler & Gunn 1982). However, multiple optical studies based on spectroscopic observations have since confirmed the increase in blue, star-forming galaxies in higher redshift clusters (e.g.
Couch & Sharples 1987; Caldwell & Rose 1997; Fisher et al. 1998; Ellingson et al. 2001), and found that BO galaxies reveal signs of recent and ongoing star formation. The paramount question now is: have we seen only the tip of the iceberg? Most studies of star-forming galaxies in clusters rely on rest-frame ultraviolet or optical tracers (e.g. Balogh et al. 1998; Poggianti et al. 2006), but UV/optical tracers can suffer severely from dust obscuration, especially when star formation is concentrated in the nuclear regions (Kennicutt 1998). For example, ultraluminous infrared galaxies have SF rates of up to ∼ 1000 M⊙ yr−1, yet many ULIRGs fail to even be detected at UV and optical wavelengths (e.g. Houck et al. 2005). Although corrections for dust attenuation are possible, reliable estimates of SF rates cannot be achieved solely using rest-frame UV/optical observations (Bell 2002; Cardiel et al. 2003). A substantially more robust method of determining total SF rates is with mid-infrared (MIR) imaging. The first MIR imaging of galaxy clusters at intermediate redshifts was taken with ISO's ISOCAM camera, and Duc et al. (2002) found that at least 90% of the star formation was hidden at optical wavelengths. The first handful of galaxy clusters observed with the MIPS camera on the Spitzer Space Telescope (SST) have also revealed strong dust-obscured star formation (Geach et al. 2006; Marcillac et al. 2007; Bai et al. 2007). It remains unclear what causes the increase in star-forming galaxy cluster members. Detailed morphological studies of blue galaxies [defined as having ∆(B − V ) < −0.2] with the Hubble Space Telescope (HST) find that most are disk systems similar to those in local clusters (e.g. Dressler et al. 1994; Couch et al. 1994); past studies also find that many show signs of interactions or mergers (Lavery & Henry 1988; Lavery et al. 1992; Couch et al. 1994; Oemler et al. 1997).
More recently, studies indicate that galaxy infall is a viable explanation for the significant numbers of blue galaxies and their disturbed morphologies in intermediate redshift clusters (e.g. van Dokkum et al. 1998b; Ellingson et al. 2001; Tran et al. 2005), a scenario supported by hierarchical clustering models (Kauffmann 1995). In this case, galaxy clusters that are accreting a significant number of new members should have a higher fraction of star-forming galaxies, especially at higher redshifts when the amount of activity was enhanced in the field as well. Here we present the first comprehensive study of SST/MIPS 24µm imaging of galaxies that are spectroscopically confirmed members of eight massive (Mvir ≳ 5 × 10^14 M⊙) X-ray luminous clusters spanning a wide redshift range (0.02 < z < 0.83). After presenting the data in §2, we focus our analysis in §3 and §4 on the evolution of star-forming members with redshift. A cosmology with (H0, ΩM, ΩΛ) = (70 km s−1 Mpc−1, 0.3, 0.7) is assumed throughout the paper; at z = 0.83, the look-back time is ∼ 7 Gyr.

DATA

We have assembled a data set of eight galaxy clusters at 0.02 ≤ z ≤ 0.83 that have a total of 1315 spectroscopically confirmed members. The core of our sample is composed of five clusters spanning the entire redshift range with large spectroscopic membership, uniform multifilter optical photometry, and deep SST/MIPS imaging. For the part of the analysis that does not depend on rest-frame (B − V) color, we fold into the sample three additional clusters: Abell 1689, for which MIR data from ISOCAM are available (Duc et al. 2002), and CL0024 and MS0451, both of which have extensive redshift catalogs (Moran et al. 2005) and MIPS observations. Observational details for all clusters are in Table 1.

Optical Photometry and Spectroscopy

The optical photometry for the five main clusters is from Holden et al.
(2007, hereafter H07), where magnitudes and colors were derived from Sersic models fitted to HST/WFPC2 images (MS1358, MS2053, and RXJ0152), HST/ACS images (MS1054), and SDSS mosaics for Coma. The conversion to rest-frame values is done by interpolating between the passbands (Blakeslee et al. 2006) and has errors of ∼ 0.02 mag. The mass-to-light ratios (M/L_B) and stellar masses were calculated using the relation between rest-frame (B − V) color and M/L_B; see H07 for details and a discussion of the associated errors.

MIPS 24µm Imaging

All MIPS data sets were retrieved from the Spitzer public archive. Individual frames were corrected with scan mirror position-dependent flats and then mosaicked with the MOPEX software (Makovoz & Khan 2005) to a pixel size of 1.2″. Integration times (t_int) and background levels (F_bg) in these mosaics are given in Table 1. Photometry was performed with APEX (Makovoz & Marleau 2005) using a 3″-diameter aperture, and an aperture correction of 9.49 as given in the MIPS data handbook. A small aperture is necessary to avoid contamination in the deep and crowded cluster fields. The fluxes are consistent with results from PSF-fitting photometry, with scatter from a 1:1 relation in the range of 15-25 µJy. To estimate the completeness of each MIPS catalog, we added artificial sources modeled on the PSF to the mosaics. To avoid overcrowding, we simulated 30 sources at once, and repeated the process 30 times for each cluster (the 50% completeness limits, F_50%, are presented in Table 1). Finally, the MIPS sources were matched with the optical catalogs using a 2″ search radius (Bai et al. 2007). From randomization of the MIPS coordinates, we estimate the rate of false identification to be 7 ± 4%, and little dependency of this error rate on redshift or color is observed.

Star formation rates

Star formation rates are based on the 24µm fluxes.
First, the total infrared luminosity (F_8−1000µm) of each galaxy was determined using a family of infrared spectral energy distributions (SEDs) from Dale & Helou (2002). We choose the range of SEDs that are representative of the galaxies in the Spitzer Infrared Nearby Galaxies Survey (Dale et al. 2007), and at each redshift adopt the median conversion factor from F_24µm to F_8−1000µm given by these models. At 0.4 ≲ z ≲ 0.6, the error due to the adopted conversion factor is ∼ 20%, but the error increases to a factor of 1.5-2.0 at lower and higher redshifts. For the parts of our analysis that are sensitive to the SF rates, we take these errors into account. As a check, we note that our total infrared luminosities in MS1054 agree well with the values in Bai et al. (2007). The conversion from total infrared luminosities to star formation rates is done following Kennicutt (1998). We assume that the emission at 24µm is due to star formation, but it could also be due to dust-enshrouded active galactic nuclei (AGN). However, in comparing the X-ray and 24µm detections, only one cluster galaxy (in RXJ0152) is detected in both, and it is rejected. While the AGN fraction in clusters seems to increase with redshift (Eastman et al. 2007), the estimated AGN fraction is only 2% at z ∼ 0.6. Johnson et al. (2003) also find evidence that at z ∼ 0.8, any excess X-ray AGN are located at R > 1 Mpc, whereas we focus on the central Mpc of each cluster. Although we cannot completely rule out possible contamination by weak obscured AGN, we have excluded X-ray AGN and thus assume that the galaxies detected by MIPS are powered by dusty star formation; see Marcillac et al. (2007) for a more detailed argument on why this is a reasonable assumption.

Figure 1 presents the color-magnitude diagrams of the five main clusters with photometry from H07. Because the MIPS sensitivity varies from cluster to cluster, we apply a SF rate limit of 5 M⊙ yr−1.
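The chain just described (observed 24µm flux → monochromatic luminosity → total infrared luminosity via an SED-based conversion → SF rate following Kennicutt 1998) can be outlined in a minimal sketch under the paper's adopted (H0, ΩM, ΩΛ) = (70 km s−1 Mpc−1, 0.3, 0.7) cosmology. The `bol_corr` factor below is a hypothetical stand-in for the redshift-dependent Dale & Helou (2002) template conversion, not a value taken from the paper:

```python
import numpy as np
from scipy.integrate import quad

H0, OM, OL = 70.0, 0.3, 0.7     # cosmology adopted in the paper
C_KMS = 299792.458              # speed of light [km/s]
MPC_CM = 3.0857e24              # 1 Mpc in cm

def efunc(z):
    # dimensionless Hubble parameter E(z) for a flat LambdaCDM cosmology
    return np.sqrt(OM * (1.0 + z) ** 3 + OL)

def lookback_gyr(z):
    # look-back time in Gyr (977.8 / H0 is the Hubble time in Gyr)
    t, _ = quad(lambda zp: 1.0 / ((1.0 + zp) * efunc(zp)), 0.0, z)
    return (977.8 / H0) * t

def lum_dist_cm(z):
    # luminosity distance in cm for a spatially flat universe
    dc, _ = quad(lambda zp: 1.0 / efunc(zp), 0.0, z)
    return (1.0 + z) * (C_KMS / H0) * dc * MPC_CM

def sfr_from_f24(f24_ujy, z, bol_corr=10.0):
    # Observed 24um flux density [uJy] -> nu*L_nu -> F(8-1000um) -> SF rate.
    # bol_corr is a HYPOTHETICAL placeholder for the SED-template conversion
    # factor (which in the paper also absorbs the K-correction ignored here).
    f_nu = f24_ujy * 1e-29                    # uJy -> erg s^-1 cm^-2 Hz^-1
    nu_obs = 2.998e14 / 24.0                  # observed 24um frequency [Hz]
    nu_l_nu = 4.0 * np.pi * lum_dist_cm(z) ** 2 * nu_obs * f_nu
    l_ir = bol_corr * nu_l_nu                 # total infrared luminosity [erg/s]
    return 4.5e-44 * l_ir                     # Kennicutt (1998) SFR [Msun/yr]
```

As a consistency check, `lookback_gyr(0.83)` returns ≈ 7.0 Gyr, matching the look-back time quoted in the text for the adopted cosmology.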
The first immediate observation is that the number of strongly star-forming galaxies increases significantly with redshift. Using a field galaxy sample drawn from the same photometric and spectroscopic catalogs, we estimate a possible field contamination at z = 0.83 to be ∼ 8% (i.e. no more than one galaxy per cluster). In Figure 1, the dotted lines represent the original color criterion for BO galaxies. The ratio of the number of cluster galaxies with MIR SF rate ≥ 5 M⊙ yr−1 above this color cut to the number of blue galaxies (∆(B − V ) < −0.2) increases with redshift.

The Mid-Infrared Butcher-Oemler effect

For each cluster, we compute and plot in Figure 2 the fraction of confirmed star-forming cluster members after selecting by rest-frame B-band magnitude (MB ≤ −19.5), cluster-centric distance, and MIR star formation rate (≥ 5 M⊙ yr−1). The errors on f_SF,MIPS represent the range that can be produced by taking the minimum and maximum conversion factors from F_24µm to F_8−1000µm instead of a single average value for each cluster, and by varying the different selection thresholds by amounts comparable to the errors on each of these parameters. Figure 2 shows that the fraction of galaxies in clusters with MIR SF rates ≥ 5 M⊙ yr−1 steadily climbs from ∼ 3% locally to ∼ 13% at z = 0.83.

[Table 1 notes: () is the number of galaxies within Ns with SF rates ≥ 5 M⊙ yr−1. (f) We are using ISOCAM mid-IR data from Duc et al. (2002) for A1689. (g) Over the central 5′×5′ of the MIPS image.]

Fig. 1. — Rest-frame color-magnitude diagrams for spectroscopically confirmed members in the main cluster sample. Filled red circles are MIPS detections with SF rates ≥ 5 M⊙ yr−1 (where all clusters are better than 50% complete). The larger symbols represent galaxies with log10(M∗) ≥ 10.6. The rest-frame B-band magnitude has been corrected for passive luminosity evolution, as determined from the fundamental plane (van Dokkum et al. 1998a). The vertical dashed line is the rest-frame B-band magnitude selection limit of −19.5. The solid diagonal line is the best fit to the red sequence galaxies, adopting the slope of van Dokkum et al. (1998b), and the dotted line denotes ∆(B − V ) = −0.2 mag; only galaxies below the dotted line would be part of a standard BO sample.

Fig. 2. — Fraction of confirmed cluster galaxies that are star-forming as revealed by the MIPS 24µm observations. Only members with MIR SF rates ≥ 5 M⊙ yr−1 that are brighter than MB = −19.5 and located within 1 Mpc (filled circles) or 500 kpc (open squares) of the cluster centers are considered. The points for the two z ∼ 0.83 clusters are offset slightly in z for clarity.

Because H07 showed that a cluster's morphological composition can vary depending on whether members are selected by mass or by luminosity, we apply an additional stellar mass cut of log10(M∗) ≥ 10.6 (Fig. 3). The mass cut is only applied to the five main clusters for which uniform photometry and thus stellar masses are available; the remaining three clusters are shown only as upper limits. While the mass cut attenuates the increase in the fraction of star-forming members, it does not completely suppress the trend. Thus the MIR BO effect is not due to an increase in the fraction of faint, low-mass members temporarily brightened by strong star formation.

DISCUSSION

Having established an increase in the fraction of MIR-detected galaxies from z ∼ 0 to z ∼ 0.8, we stress that optical studies are likely underestimating the increase in star-forming cluster galaxies with redshift. As seen in Fig. 1, an increasing number of strong dust-obscured star-forming members appear on or near the red sequence at higher redshifts; these are not included in traditional color-selected BO studies. The late-type morphologies of these members support our interpretation of dusty star formation and red colors due to extinction (see also A901/902; Wolf et al. 2005).
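The construction of the error bars on f_SF,MIPS, re-counting which members pass the SF rate cut after pushing the F_24µm → F_8−1000µm conversion to its minimum and maximum, can be sketched as follows. The `conv_lo`/`conv_hi` rescaling factors here are hypothetical illustrations; the paper derives the actual range from the Dale & Helou SED family:

```python
import numpy as np

def sf_fraction(sfr, thresh=5.0, conv_lo=0.67, conv_hi=1.5):
    # Fraction of members with SF rates >= thresh [Msun/yr], plus the
    # systematic range obtained by rescaling the flux-to-luminosity
    # conversion by HYPOTHETICAL factors conv_lo/conv_hi and re-counting,
    # and a simple binomial (statistical) error on the fraction.
    sfr = np.asarray(sfr, dtype=float)
    n = sfr.size
    f = (sfr >= thresh).sum() / n
    f_lo = (sfr * conv_lo >= thresh).sum() / n    # conversion at its minimum
    f_hi = (sfr * conv_hi >= thresh).sum() / n    # conversion at its maximum
    stat = np.sqrt(f * (1.0 - f) / n)             # binomial error on f
    return f, min(f_lo, f_hi), max(f_lo, f_hi), stat
```

For a made-up set of ten SF rates, members sitting just below the threshold move above it when the conversion is at its maximum, which is exactly why the systematic range widens for clusters with many galaxies near the cut.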
Using the standard BO definition of ∆(B − V ) < −0.2, the fraction of blue galaxies with MB ≤ −19.5 and RP < 1 Mpc at z ∼ 0.8 is ∼ 11%; however, including the red, massive, star-forming members raises the total fraction of blue/star-forming members to ∼ 23%. We note that for the five main clusters, the increase in the blue/star-forming fraction due to these red, star-forming members is a factor of {1.1, 1.2, 1.3, 1.7} at z = {0.02, 0.33, 0.59, 0.83}, i.e. the relative importance of including red, dusty star-forming members increases with redshift. Is this increase linked to galaxy infall? In Figure 2, both CL0024 (z ∼ 0.4) and MS2053 (z ∼ 0.6) are above the general trend established by the other six clusters. Both CL0024 and MS2053 have enhanced star formation compared to other clusters at similar redshift, and both have bimodal redshift distributions. CL0024 is made of two colliding subclusters (Czoske et al. 2002) and has an unusually large number of luminous infrared galaxies (Coia et al. 2005). Similarly, Tran et al. (2005) conclude that MS2053 has a significant number (> 25%) of infalling galaxies; these members tend to be blue and star-forming. Both CL0024 and MS2053 are accreting a large number of new members and have high fractions of dusty star-forming galaxies. We speculate that the increase in star-forming members reflects the recent accretion of new members, i.e. galaxy infall, and that such events are more frequent at higher redshift due to the process of cluster assembly (Ellingson et al. 2001; Tran et al. 2005; Loh et al. 2008). As further evidence of this, 80% of the MIPS-detected galaxies in the z ∼ 0.8 clusters are more than 700 kpc from the cluster cores in projected distance, and thus the MIR Butcher-Oemler effect is significantly altered by considering only the inner 500 kpc of the clusters (open symbols in Fig. 2).
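The projected cluster-centric distances used throughout (RP < 1 Mpc selections, the 700 kpc infall argument) follow from sky separations and the angular diameter distance. A minimal flat-sky sketch under the paper's adopted cosmology, with made-up coordinates in the usage note:

```python
import numpy as np
from scipy.integrate import quad

H0, OM, OL = 70.0, 0.3, 0.7     # cosmology adopted in the paper
C_KMS = 299792.458              # speed of light [km/s]

def ang_diam_dist_mpc(z):
    # angular diameter distance [Mpc] for a flat LambdaCDM cosmology
    dc, _ = quad(lambda zp: 1.0 / np.sqrt(OM * (1.0 + zp) ** 3 + OL), 0.0, z)
    return (C_KMS / H0) * dc / (1.0 + z)

def projected_dist_mpc(ra, dec, ra0, dec0, z):
    # projected cluster-centric distance [Mpc] from coordinates in degrees,
    # using the small-angle (flat-sky) approximation about the cluster center
    dra = np.radians(ra - ra0) * np.cos(np.radians(dec0))
    ddec = np.radians(dec - dec0)
    return float(np.hypot(dra, ddec)) * ang_diam_dist_mpc(z)
```

For example, `projected_dist_mpc(150.0, 2.0366, 150.0, 2.0, 0.83)` (hypothetical coordinates) gives ≈ 1 Mpc: at z = 0.83 in this cosmology, 1 Mpc subtends roughly 2.2 arcminutes.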
SUMMARY

We present the first comprehensive study of SST/MIPS 24µm observations for seven massive, X-ray luminous galaxy clusters spanning a wide redshift range (0.02 < z < 0.83). Uniform photometry, high resolution HST imaging, and extensive redshift catalogs enable us to measure the fraction of members with strong, dust-obscured star formation. The fraction of cluster galaxies with MIR star formation rates ≥ 5 M⊙ yr−1 increases from 3% in Coma to ∼ 13% in clusters at z = 0.83, and this trend is evident in both luminosity-selected (MB ≤ −19.5) and mass-selected samples (M∗ ≥ 4 × 10^10 M⊙). Optically-based studies increasingly underestimate the total amount of star formation in cluster galaxies with redshift because many of these dusty, red star-forming members are missed in color-selected samples. These tend to be late-type galaxies that are red because of dust extinction, which disguises their high levels of obscured star formation (> 5 M⊙ yr−1). Defining the SF fraction to include both optically blue and red, but MIPS-detected, members doubles the fraction at z = 0.83 from ∼ 11% to ∼ 23% (RP < 1 Mpc). Lastly, our study indicates that the BO effect and the increase in obscured star-forming members are linked to galaxy infall: 80% of the MIR-detected members at z ∼ 0.8 are outside the cluster cores (RP > 0.7 Mpc), and the two clusters at z < 0.8 that are accreting a substantial number of new members also have an enhanced fraction of galaxies with MIR SF rates ≥ 5 M⊙ yr−1.