THE IMPACT OF COVID-19 PANDEMIC ON PERSONAL AND PROFESSIONAL LIFE OF KARACHI UNIVERSITY TEACHERS

The COVID-19 pandemic has posed substantial challenges to the global higher education sector, particularly in Pakistan. The global academic community responded by shifting to distance learning, with the vast majority of courses delivered online throughout the outbreak, and many researchers have since shared their work on online teaching and learning. As a result of the pandemic, many schools, colleges, and universities cancelled in-person education programmes. In this research report, we found that the COVID-19 pandemic, together with social and emotional pressures, was the primary cause of the negative effects on Karachi University faculty. These consequences appear in the participants' personal as well as professional lives. In terms of their professional role, participants reported that considerable additional time and attention went into learning new technologies, implementing new teaching practices, receiving support from co-faculty members, and striving to meet their…

Introduction

The global COVID-19 pandemic has affected the entire world. Although restrictions have been lifted as the spread of the disease and viral infection declined, the after-effects of COVID-19 remain visible and continue to reshape how the world operates (Dawood, 2020). Society has felt these after-effects well beyond the medical and economic spheres, and the social and emotional well-being of people has become increasingly interconnected. Researchers around the world have been examining the implications of COVID-19 across many areas of life, and among these, mental well-being cannot be underestimated (Zhang, 2020). Mental well-being is composed of several factors; to identify the impact of COVID-19 on these factors, this study focuses on social and emotional well-being. The researcher's motivation is to evaluate how the pandemic has affected the professional and personal lives of professors. Teachers have continued working to provide better education to their students, but working from home during the pandemic has affected more than teaching methods alone: the impact on teachers themselves is also important to consider, given the adoption of online teaching practices. COVID-19 disrupted daily activities, and education shifted from physical classrooms to online platforms; this shift was not easy for students or for teachers (Barchas-Lichtenstien et al., 2020). The impact on students is not the primary concern of this study; the impact on professors and teachers is its main focus. Problems may also differ across levels of the teaching profession: university teachers may face different difficulties than school teachers. To keep the research and its results precise and specific, the researcher has therefore focused on one segment of the teaching profession.
The focus is on university teachers, specifically a group of faculty at Karachi University.

Aims and Objectives

The main aim of the research is to analyse how COVID-19 has affected the personal and professional lives of professors at Karachi University during the provision of online education.

Objectives
• To evaluate the impact on the emotional and social well-being of the professors.
• To understand the impact of online teaching on the personal and professional life of professors.
• To identify how the provision of online teaching has affected the mental well-being of professors.

Significance of the study

Ciotti et al. (2019) stressed that past crises have brought extensive problems for people, with individuals from all domains heavily affected. In 2019 the COVID-19 pandemic emerged, disrupting operations across the entire world and contributing to immense social and economic problems. The outbreak affected many people, including teaching professionals, and left all social groups vulnerable. The pandemic has also played a major role in increasing poverty, becoming a great obstacle for all social groups and pushing the lower middle class below the poverty line. Teaching professionals have been greatly affected: the pandemic disrupted teaching methods and created problems for students. Extensive research is under way to tackle the ongoing crisis (Waris et al., 2019), strategies have been developed to minimise its effects, and online teaching has been introduced as an alternative to fill the gap left by physical education. This study is significant because it highlights how COVID-19 has caused problems for individuals, especially professionals in the teaching sector, and what measures can be adopted so that its effects are minimised as early as possible. It is also significant in addressing social problems such as poverty, inequality and exclusion, and the strategies that can be executed to overcome them. In addition, it sheds light on measures that professionals, particularly in the teaching sector, can take in a timely manner so that future inconvenience is avoided as far as possible.

Research problem

Past literature has stressed the impact of COVID-19 on individuals' lives and noted that the teaching sector faces immense problems as the educational system has shifted from the physical to the online environment (Chakraborty and Maity, 2020). The higher education system faces different problems from the primary education system, and these problems may grow over time if adequate measures are not taken. The past literature has highlighted these problems, but no research has specifically focused on the teaching sector of Pakistan in detail, together with the measures that could be taken to combat the COVID-19 pandemic and revive the educational sector.
This study therefore concentrates on a single target institution, Karachi University, which helps broaden the research area and identify practices that could improve the quality of education for future generations.

Literature Review

To build a perspective on the professors at Karachi University, data collection and analysis were carried out; to develop a more general perspective on the impact on professors, the literature was also reviewed. The literature contains a range of observations on personal and professional life during and after the recent pandemic.

Impact of Covid19 on Personal Life of Karachi University Teachers

Aubry et al. (2021) explored the impact of COVID-19 on teachers by surveying professors in ecology and evolutionary biology departments in the US. They found that female professors and assistant professors caring for a teenager or child faced particular difficulties, that their normal routines were disrupted during the pandemic, and that they expected the negative impacts to last longer (Aubry et al., 2021). Other research has observed that professors faced the considerable dilemma of learning new approaches to deliver quality education, which disturbed their personal lives through reduced motivation for work and financial problems (Santos et al., 2021). Teachers have also faced stress and overload in the context of COVID-19: delivering quality education online raised stress levels considerably, and personal life was badly disturbed as a result (Daruka and Hoxha, 2020). Stress and anxiety were among the most common problems during COVID-19; faculty members faced many problems in the first year of the pandemic, and the conditions strongly favoured the development of stress and anxiety. Racial tensions and anxiety also had a marked impact on researchers. Regarding the private lives of university professors, research found that 65% of professors complained that their private life had been badly disturbed by online education and the pandemic (Vital-Lopez et al., 2022). The psychological dimension of mental health is also important (Santos et al., 2020): anxiety and depression linked to online education have increased among many professors.
Education is the fundamental right of every individual, and the practical demands of securing it during the pandemic weighed heavily on professors (Daruka and Hoxha, 2020). On online platforms, several things have to be managed, such as time management, work-life balance, and adaptability (Vital-Lopez et al., 2022). Teachers have to manage their work while also attending to their personal lives.

Influence of Covid19 on Professional Life

In the professional context, social factors are important. Research indicates that COVID-19 affected the social lives of professors through the repeated adoption of new techniques and new working arrangements (Santos et al., 2021). The use of electronic devices is another important factor: 38% of professors claimed that the use of electronic devices affected aspects of their emotional well-being (Vital-Lopez et al., 2022). The impact also varies among university professors, with different professors facing different social and emotional effects (Kotini-Shah et al., 2021). The psychological condition of professors was severely affected by the changed ways of working; Italian professors, for example, reported that their social lives were disturbed in emotional and psychological terms (Casacchia et al., 2021). The pandemic likewise increased stress on professors in Japan, who reported a significant impact on their working practices. Several professors further complained that online education disrupted their social lives and reduced the benefits of social life for physical health and activity.

Impact on the emotional and social well-being of the University teachers

The impact of the pandemic has also been felt at a professional level. Due to the lockdown, many professors had to cancel classes and move to online teaching, which can be challenging and time-consuming (Braun et al., 2020). In addition, universities closed many of their facilities, including libraries and laboratories, which made research difficult. The outbreak of COVID-19 therefore had not only a physical impact on the health of Karachi University professors but also an emotional and social one. A study by Ronnie et al. (2022) found that faculty members experienced increased levels of anxiety, with nearly one-third experiencing moderate to severe levels of depression. The same study found that the pandemic had a negative impact on social life because of a decrease in social interactions, most likely because many faculty members were working from home and therefore unable to interact with colleagues on a daily basis.
This lack of social interaction can lead to feelings of isolation and loneliness, which further affect mental health. Indeed, a study by Vital-Lopez et al. (2022) found that faculty members' emotional well-being suffered from the increased use of electronic devices required by online teaching systems.

Impact of Covid-19 on the mental well-being of University Teachers

Although the pandemic has had a negative impact on the mental health of Karachi University professors, they are not alone in this regard. The coronavirus outbreak has taken a toll on everyone's mental health, but it has been especially hard on professors (Çifçi and Demir, 2020). The pandemic forced professors to change the way they live and work suddenly. Many had to move their classes online, which can be a challenge for those unfamiliar with the technology. The pandemic also caused considerable anxiety and stress for professors as they worried about their own health and the health of their students. The sudden change in lifestyle and the added stress of the pandemic resulted in a decrease in professors' mental well-being. The stress of the situation can be overwhelming, and many professors struggle to cope (Alves et al., 2021). Some are dealing with anxiety and depression, while others struggle with insomnia and other sleep problems. Many also face the added stress of caring for their families while teaching online, and the isolation of working from home can be difficult to adjust to. The mental well-being of professors is therefore an important issue that needs to be addressed. The main reasons for mental stress are the sudden change in lifestyle and work routine and the fear of contracting the virus (Çifçi and Demir, 2020); the stress of dealing with online classes while maintaining a work-life balance has also contributed to the mental health problems of professors.

Conceptual framework

The dependent variable in this study is the mental well-being of professors (Williams, 2020). The independent variables are the sudden change in lifestyle, the change in work routine, and the fear of contracting the virus. The mental well-being of professors is affected by these independent variables: the sudden change in lifestyle and work routine can lead to stress and anxiety, and the fear of contracting the virus can also lead to mental health problems (Aubry et al., 2021). The lifestyle of professors changed considerably during the COVID-19 pandemic; their work routine, social activities, and daily routine were all affected, and the change was sudden and drastic, which can lead to mental health problems. The work routine of professors also changed: many had to move to online teaching, which can be stressful (Williams, 2020), and they may also worry about the quality of their teaching and whether their students are learning. The fear of contracting the virus is a further major concern, as professors may worry about their own health and that of their families, which can lead to stress and anxiety.

Literature Gap

The literature on the impact of Covid-19 on the mental well-being of professors is limited. More research is needed on this topic to understand the extent of the problem and to find ways to help professors cope with the stress of the pandemic.
Moreover, the literature on the impact of Covid-19 on the mental well-being of other groups, such as students and healthcare workers, is also limited, and more research on the mental health effects of the pandemic is needed to identify ways to address the problem. In addition, the literature on the pandemic's impact on the mental well-being of professors in different parts of the world is limited, and more research is needed to understand its global impact.

Methodology

The methodology is an important section of the research, as it explains how the researcher identified the approaches used to answer the question of how COVID-19 affected the personal and professional lives of professors at Karachi University.

Research Philosophy

The first element is the research philosophy. Two main philosophies describe the epistemological stance of research: interpretivism and positivism. Interpretivism relies on observations and reviews to build a generalised view, while positivism takes a more specific and definite approach through the collection of factual data (Alharashah and Pius, 2020). The researcher adopted positivism in order to develop more specific results from statistical data.

Research Approach

A deductive research approach was selected, as it helps in testing a theory and building a more statistical basis for the results (Woiceshyn et al., 2018).

Research Design

Of the two main research designs, quantitative and qualitative, the quantitative design was selected, as it matches the aim of reality-based data collection (Rutberg et al., 2018).

Data Collection and Analysis

Data collection was organised around a focus group of professors, who completed the questionnaire. The analysis is statistical: frequency analysis covers both the open-ended and closed-ended questions of the questionnaire, and the demographics and the frequency of each response are evaluated and discussed. The underlying motivation is to evaluate the impact on the professional and personal lives of the professors. Throughout the research, the researcher observed research ethics, in which the consent of participants is essential.

Research Ethics

Ethical considerations were applied throughout, particularly while engaging with participants: consent was obtained, and personal data were not used.

Results and Analysis

To analyse how COVID-19 affected the professors of Karachi University, Pakistan, a questionnaire was completed by the professors. It collected numerical values along with statements, in the form of observations, for the questions asked. The values were entered into SPSS and analysed using frequency analysis, as sketched below.
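The original analysis was carried out in SPSS; purely as an illustration of what such a frequency analysis involves, the short pandas sketch below tabulates counts and percentages for a few survey columns. The file name and column names are assumptions for illustration, not part of the study.

```python
# Illustrative sketch only: the study used SPSS, and the file/column names here
# ("survey_responses.csv", "gender", "age_group", "covid_impact") are hypothetical.
import pandas as pd

responses = pd.read_csv("survey_responses.csv")

for column in ["gender", "age_group", "covid_impact"]:
    counts = responses[column].value_counts()                       # absolute frequencies
    percent = responses[column].value_counts(normalize=True) * 100  # relative frequencies
    table = pd.DataFrame({"n": counts, "percent": percent.round(1)})
    print(f"\nFrequency table for {column}:")
    print(table)
```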
The analysis has three main parts: demographics, closed-ended questions, and open-ended questions.

Demographics

Four demographic aspects were considered: gender, age group, faculty, and designation. The gender table shows the share of each gender taking part in the data collection: in total, 100 professors participated, of whom 64 were male and 36 female, meaning the answers and results are dominated by the male professors at Karachi University. Age matters because it gives an estimate of how professors of different age groups were affected by the move to online teaching and by COVID-19. Most professors were under 30 years of age (53 of 100), 26% were between 30 and 40 years, 14% were between 40 and 50 years, and only 7% were above 50 years, so on average the responses are oriented towards the perspective of people under 30. Regarding faculty, most respondents came from the department of Arts: of the 100 professors, 40 were from Arts and 28 from Science, while 12 were from the faculty of Islamic Studies, 15 from Engineering, 2 from Pharmacy, and 3 from Management Sciences. Alongside age, the role of the professors was also analysed so that the results could be related to the composition of the focus group: 53% were assistant professors and 34% lecturers, so the results are roughly half dominated by the workload typical of assistant professors; 6% of participants were associate professors, 4 were full professors, and 3 held other positions.

Quantitative Findings (Closed-Ended Questions)

Impact on Personal Life

Among the closed-ended questions, professors were asked whether they felt social and emotional consequences of COVID-19: 48% responded No and the remaining 52% responded Yes or Maybe. When asked about the direction of these impacts, 48% of participants said the impacts of COVID-19 were negative for professors, for a variety of reasons: 28% attributed the negative impact to time management, 41% to the transfer of the education system online, 13% to online teaching overall, and 12% to the use of technology.
As nearly 48% of professors said COVID-19 had a negative impact on their personal life, online teaching itself was also examined. From the table it can be seen that only 9% of professors said they delivered online education, 46% said they did not, and 24% gave other answers; overall, the majority of participants did not deliver online classes. The question of online education also raised the issue of the means available for delivering it and for university-related work: 48% of professors did not have a laptop or personal computer, only 28% had a personal laptop, and 24% reported having one in broken or unusable condition. Given the negative impact of COVID-19 on professors' personal lives, it is worth asking whether online education was worthwhile: 48% of professors said No, online education was not helpful during COVID-19, and only 28% claimed it was beneficial. Professors were also asked whether they were facilitated by the faculty at the university; almost half (48%) said they had not been provided any help, only 28% reported positive support, and 24% did not receive adequate support. In other words, 28 per cent of participants agreed with the given statement, 48 per cent were against it, and 24 per cent were not sure. Related to online education, other personal matters were also covered in the questionnaire; in that phase, the impact on professors' research projects was identified, and 48% of professors reported disruption to their projects. Regarding the delivery of online education, professors were asked about technical training from their departments: 48% said that neither training nor any skill-development session was provided, while 28% said they had attended training and also received technical support. Thus personal impact and time management can be considered factors behind the deviation, and only 28 per cent of participants had the chance to attend an online skill-development class. Since online classes took place during the lockdown and fewer than 30% of the professors delivered them, participants were also asked whether they were aware of online software such as Zoom: 48% said they were unaware of its use and only 28% were aware of it. Those who were unaware were asked how they learned it: 38% learned from colleagues, 31% from online videos, 24% from family members, and 7% from other sources.
The overall teaching experience was not rated well: only 1% of professors said their experience was excellent, 39% said it was very poor, and only 23% said it was neutral, meaning that most professors rated their teaching experience as bad. The analysis also covers the impact on social life, since social life suffered during the pandemic and the professors were asked about it.

Professional Life

Some professors live far from their family homes, and for them the main challenge was keeping in touch through phone, Zoom calls and other media. In the current analysis, only 1% of professors rated this connectivity as excellent, 32% rated it poor and 36% below average, while 24% were neutral. Because professors were delivering online lectures, their screen time outside teaching was also analysed as an indicator of connectivity on social platforms; it has largely decreased, with more than 41% of professors reporting less than 40 minutes a day and only 24% more than 3 hours. COVID-19 can therefore be said to have impacted social life negatively as well. With less time to relax and socialise, professors were asked about their stress levels and family support: 30% reported more than average stress, 35% described their stress management as poor, and only 22% described it as normal. Family support during this period of stress and workload was also covered in the questionnaire: 42% of professors had less than average support from family, 25% called it neutral, and 24% said the support was very poor. Support specifically for online classes was also assessed: 37% received less than average support, 27% poor support, 24% neutral support, and the remaining 12% had a better experience of family support. Friends also play an important supporting role: 39% of professors said they received support from friends during the pandemic, and 37% said they did not. The impact of COVID-19 on family life is also relevant to social life: 46% said their family life had been affected by COVID-19, while 31% said it had not. Social life is also affected by deaths: 48% of respondents had experienced deaths, while 52% said they had not. The questionnaire also asked about means of connectivity: most professors (37%) used audio mobile calls, 33% used video calls, and a further 30% likewise reported video calls.

Descriptive Open-Ended Questions Analysis

The questionnaire asked whether professors owned a laptop or PC, and those answering no were asked how they engaged students during COVID-19. One respondent said: "Most of the time, I connect through email and other social media sections on mobile."
The use of mobile phones increased markedly during COVID-19, as teachers and students used them for connectivity, video calling and even online study (Subedi et al., 2020); mobile phones have thus been very useful for online education from the perspective of both students and teachers. Another question concerned help from the faculty: a few professors responded that the faculty had supported them in conducting online classes, and when asked how, one respondent said: "It was just like another group assignment; my colleagues and peers helped me where I lacked through text or video call." This shows that those who received help relied on technological means to build their understanding. Regarding impact on research, 28% of professors reported that their research projects were affected; when asked how, one of them said: "I was unable to perform the experimentation in lab; moreover the burden of online classes and time management became the major problem." Social distancing was the main disruption to research projects, and when asked how they managed their deadlines, most respondents said: "Request was implicated to the supervisor to increase the deadline understanding the current situation, and it was approved in most cases." Most research work was therefore postponed because of social distancing, and research projects were delayed during the pandemic. Regarding socialising, professors were asked how they spent their family time; from the responses, it can be seen that professors treated this time as precious and enjoyed it with their families. On the related question of the impact of COVID-19 on family life, 46% said yes, and when asked how, one professor said: "I personally get more time to share with family, but the burden of online education causes the problems of time management." Time management can therefore be considered the main way in which online teaching affected professors' personal lives during COVID-19. When asked about emotional support from friends, one interesting answer was: "Yeah, we used to have video calls, group video calls and it was fun." A further question concerned the death of a family member and its impact on mental health, to which most respondents said: "It is very disturbing to hear about the death of someone dearest to you and you are not even able to see him physically last time or to embrace him. It is one of the most desperate feelings." COVID-19 has therefore affected not only the personal lives of professors at Karachi University but also their mental well-being.

Evaluate the impact on emotional and social well-being of the professors

The emotional and social well-being of the professors was badly affected: the analysis shows that most professors struggled with time management, and events such as the deaths of friends and family members weighed heavily on their mental health. Social well-being was also reduced, with most professors reporting very low screen time apart from virtual teaching, and their stress levels were disturbed.
Hamilton and Gross (2021) claim that during COVID-19, teachers in public education centres observed impacts on the social and emotional well-being of their students. It can therefore be said that time management, the disease itself, and the burden of online education affected the social and emotional well-being of professors at Karachi University.

Understand the impact of online teaching on the personal and professional life of Professors

COVID-19 started the trend of online teaching, and in striving to provide quality education through it, the professional and personal lives of the professors were badly affected. The analysis shows that 48% of professors said COVID-19 had a negative impact on their personal life, that they had to learn the necessary skills by themselves, and that the majority received no faculty or technical support. A great deal of time was consumed in this way, and family time also suffered under the extra burden. Van Leeuwan et al. (2021) likewise found that faculty experiences of delivering better teaching were badly affected. It can therefore be said that, in managing quality education and learning new skills, the personal and professional lives of professors were substantially disrupted during COVID-19.

The provision of online teaching has an impact on the mental well-being of University Teachers

Online teaching, as discussed above, required professors to learn new skills, for which support from family, friends and faculty is very important, and professors providing online education during COVID-19 needed training and support. In the analysis, however, 30% of the professors reported poor stress management, so their overall mental health was disturbed; professors clearly had a hard time maintaining their mental well-being while providing online education.

Conclusion

Summarised Findings

From the analysis and discussion, it can be concluded that COVID-19 affected the overall mental and emotional well-being of the professors at Karachi University. The frequency analysis provides the rate of impact on different aspects; as the focus group includes 64% male professors, these impacts mainly depict the problems faced by male professors. The pandemic affected social and personal life: where professors might have enjoyed quality time with their families, the demands of online teaching cut into that time. Most professors struggled to deliver online education because they were unfamiliar with the technology needed to deliver lectures, and they received no help from their departments or faculties to learn these skills. Having to learn on their own while maintaining quality education affected their time management, causing social and personal life to be neglected. The professors coped poorly with stress, and the deaths of family and friends further undermined their mental health.
Recommendations

There are a few recommendations for future research:
• Research should include an equal number of male and female participants so that the problems of both genders are represented.
• Greater specificity is important: personal life, professional life, online teaching, and COVID-19 itself could each be examined separately to obtain a wider range of knowledge and observations.

Limitations

The research sample consists mainly of male professors, which is a limitation: the answers include fewer contributions from female professors and therefore give less weight to the problems and impacts they faced. Moreover, the survey treated online teaching and COVID-19 together in relation to personal and professional life, so findings specific to each are missing. In future, it will be important to examine online education and COVID-19 separately.
Lentiviral silencing of GSK-3β in adult dentate gyrus impairs contextual fear memory and synaptic plasticity

Attempts have been made to use glycogen synthase kinase-3 beta (GSK3β) inhibitors for prophylactic treatment of neurocognitive conditions. However, the use of lithium, a non-specific inhibitor of GSK3β, results in mild cognitive impairment in humans. The effects of global GSK3β inhibition or knockout on learning and memory in healthy adult mice are also inconclusive. Our study aims to better understand the role of GSK3β in learning and memory through a more regionally targeted approach, specifically performing lentiviral-mediated knockdown of GSK3β within the dentate gyrus (DG). DG-GSK3β-silenced mice showed impaired contextual fear memory retrieval. However, cue fear memory, spatial memory, locomotor activity and anxiety levels were similar to control. These GSK3β-silenced mice also showed increased induction and maintenance of DG long-term potentiation (DG-LTP) compared to control animals. Thus, this region-specific, targeted knockdown of GSK3β in the DG provides a better understanding of the role of GSK3β in learning and memory.

Introduction

Glycogen synthase kinase 3-beta (GSK3β) is a constitutively active serine protein kinase highly expressed in the brain (Woodgett, 1990). Known targets of GSK3β number over 100, many of which are involved in pathways related to cell growth, apoptosis, metabolism, and learning and memory, among others (Kaidanovich-Beilin and Woodgett, 2011). GSK3β's kinase activity is positively regulated by phosphorylation on its tyrosine 216 residue, or inhibited by phosphorylation on its serine 9 residue (Wang et al., 1994). Lithium, a known GSK3 inhibitor, was demonstrated to inhibit GSK3β by increasing phosphorylation on the key serine residue (Ser9) (Jope, 2003). It was shown that administration of lithium, or of another GSK3 inhibitor, AR-A014418 (ARA), was able to improve cognitive performance in mouse models of neurocognitive conditions such as Down's syndrome and Fragile X syndrome, or Alzheimer's disease (AD), respectively (Yuskaitis et al., 2010; Contestabile et al., 2013; Ly et al., 2013). GSK3 inhibition has thus been suggested as a means of prophylactic treatment for human patients with these neurological disorders (Licastro et al., 1983; Terao et al., 2006; Nunes et al., 2007; Liu and Smith, 2014). However, there are also reports showing that lithium treatment increases the risk of dementia (Dunn et al., 2005) and results in impaired cognitive performance in healthy individuals (Weingartner et al., 1985; Stip et al., 2000; Wingo et al., 2009). These inconsistent outcomes from pharmacological inhibition of GSK3 activity suggest that a more specific approach, through gene targeting of selected brain regions, may provide a better understanding of the function of GSK3β in cognition. Overexpression or knock-in studies in healthy mice in which GSK3β is constitutively active showed spatial learning deficits (Hernández et al., 2002), impairments in novel object and inhibitory avoidance tests (Dewachter et al., 2009) and an increase in fear memory retention (Polter et al., 2010). However, results from inhibitor and transgenic knockout studies in healthy rodents are controversial, and many GSK3 inhibitors are also known to affect signaling cascades independent of GSK3 (Meijer et al., 2004). Studies administering lithium, a non-specific GSK3β inhibitor, mostly showed no effects in healthy mice.
Mice administered with the GSK3β inhibitors valproic acid (Sintoni et al., 2013) and SB 216763 (Hu et al., 2009; Sintoni et al., 2013) showed impairments in the Morris water maze task and in contextual fear memory consolidation, but not reconsolidation, in healthy mice. Heterozygous GSK3β knockout mice and mice treated with ARA showed impaired long-term spatial memory and contextual fear memory reconsolidation, but these mice exhibited normal contextual consolidation (Kimura et al., 2008). These discrepancies may be due to confounding factors, which include toxicity from long-term inhibitor administration, inhibitor non-specificity, non-site-specific inhibitor effects and, in the case of transgenic animals, aberrations caused by the reduction of GSK3β during brain development which are carried into adulthood. Our study aims to further the understanding of the role of GSK3β in learning and memory and synaptic plasticity by genetically manipulating GSK3β levels specifically in the dentate gyrus (DG) of young adult mice. The DG was chosen as the target region for investigation as it is important for various forms of learning and memory, such as object discrimination and contextual fear memory (both forms of context memory) as well as the Morris water maze (spatial memory; Rubin et al., 1999; Jeltsch et al., 2001; Lee and Kesner, 2004; Nakashiba et al., 2012).

Animals

All animal procedures and applicable regulations of animal welfare were performed in accordance with the GSK Policy on the Care, Welfare and Treatment of Laboratory Animals and also in accordance with Institutional Animal Care and Use Committee (IACUC) guidelines, and were approved by the SingHealth IACUC, Singapore. Six- to eight-week-old male C57BL/6 mice (24-28 g) were purchased through the SingHealth Experimental Medicine Centre (SEMC), Singapore, and housed in the Specific Pathogen Free (SPF) animal facility at Duke-NUS Graduate Medical School, Singapore. Food and water were provided to the animals ad libitum.

Creation of Viral Vector

The FUGW plasmid (Addgene) used for the creation of lentivirus was modified to express shRNA sequences under the control of the U6 promoter and GFP under the control of the human ubiquitin C promoter. Short-hairpin RNA (shRNA) against mouse and rat GSK3β, CATGAAAGTTAGCAGAGAT (shGSK3) (Kim et al., 2006), or a scrambled control sequence, TTATCAGATAGACGATTGT (shCon), was cloned into the FUGW plasmid. The FUGW plasmid was co-transfected with the HIV-1 packaging vector Delta8.9 and the VSVG envelope glycoprotein into human embryonic kidney (HEK) 293 gp cells to produce viral particles. Growth medium was collected and subjected to ultracentrifugation at 28,000 g for 90 min to pellet viral particles. The viral pellet was re-suspended in sterile PBS and the viral titre determined by infecting HEK 293 cells with serial dilutions of the virus. A viral titre of 1 × 10^10-11 TU/ml was used for stereotaxic injections and downstream experiments.

Western Blots

Hippocampal neurons were isolated from E18 rats as described previously (Shivaraj et al., 2012; Su et al., 2015). Dissociated neurons were cultured on poly-L-lysine coated plates and infected with lentivirus delivering shGSK3 or shCon the following day. After 5 days, cell lysate was prepared in radioimmunoprecipitation assay (RIPA) buffer (1% Triton X-100, 50 mM HEPES, pH 7.0, 150 mM NaCl, 2 mM EGTA, 0.25% sodium deoxycholate, 0.2 mM phenylmethylsulfonyl fluoride with phosphatase and protease inhibitors).
For in vivo quantification, both hippocampi were isolated 4 weeks post stereotaxic injection and homogenized in RIPA buffer. Protein concentration was measured, and 20 µg of protein was separated on an 8% SDS-PAGE gel and transferred to a polyvinylidene fluoride (PVDF) membrane. Primary antibodies used were rabbit anti-GSK3β antibody (Cell Signaling Technology, 1:5000) and mouse anti-alpha-tubulin antibody (Sigma, 1:10000). Enhanced chemiluminescence (ECL) horseradish peroxidase-linked anti-rabbit or anti-mouse antibodies (GE Healthcare) were used as secondary antibodies. Restore Plus Western Blot Stripping Buffer (Thermo Scientific) was used for stripping purposes. SuperSignal West Pico Chemiluminescent Substrate (Thermo Scientific) was used to develop the blots. Histograms of all western blots were checked during the capture process on the GE LAS4000 imaging system and again in ImageJ, to ensure that no blots used for quantification were overexposed. Quantification of band intensities was done using ImageJ.

Stereotaxic Injections

Stereotaxic surgical procedures were performed under deep anesthesia (Ketamine 100 mg/ml, Xylazine 20 mg/ml) at a dose of 85 mg of Ketamine and 10 mg of Xylazine per kg of animal body mass. Animals were mounted on a stereotaxic frame (Kopf Instruments, Tujunga, CA, USA). An incision was made along the midline of the scalp and the skull exposed. Small burr holes were drilled into the skull at the following coordinates, as previously described (Ge et al., 2006; Zhao et al., 2015): (1) 2 mm posterior to bregma, ±1.6 mm lateral to midline, 2.5 mm ventral from skull; (2) 3 mm posterior to bregma, ±2.6 mm lateral to midline, 3.2 mm ventral from skull. Lentivirus was injected using a 1 µl Hamilton syringe at a volume of 0.5 µl per site (flow rate of 0.05 µl/15 s). 0.5% Bupivacaine was administered after the surgery to provide acute pain relief. 1-5 mg/kg of Butorphanol was administered subcutaneously for 2 days after surgery to relieve pain from the surgical procedure and to ensure that animals experienced little or no discomfort after the surgery. Animals showing signs of pain and/or obvious discomfort outside this time period were removed from the study and euthanized.

Electrophysiology

Hippocampal slices from 12 wild-type mice of 10-12 weeks of age, injected at 6-8 weeks old with shCon (six mice) or shGSK3 (six mice), were used 4 weeks after injection for electrophysiological recordings as previously described (Sajikumar et al., 2005). Briefly, after anesthetization using CO2, mice were decapitated and the brains were quickly removed and cooled in 4 °C artificial cerebrospinal fluid (ACSF). Transverse hippocampal slices (400 µm) were prepared from the right hippocampus using a manual tissue chopper, and the slices were incubated at 32 °C in an interface chamber. The ACSF contained the following (in mM): 124 NaCl, 4.9 KCl, 1.2 KH2PO4, 2.0 MgSO4, 2.0 CaCl2, 24.6 NaHCO3, 10 D-glucose, equilibrated with 95% O2-5% CO2 (32 L/h). Slices were preincubated for 2.5 h. Recordings in the DG were performed similarly to the method described in Walther et al. (1998) and Balschun et al. (1999). After the preincubation period, a monopolar lacquer-coated, stainless-steel electrode (5 MΩ; AM Systems, United States of America) was placed in the stratum moleculare of the DG to stimulate the medial perforant path input. About 200 µm away, the recording electrode was lowered to the same level to record field excitatory postsynaptic potentials (fEPSPs).
The stimulation strength was adjusted to elicit an fEPSP slope of 40% of the maximum of the corresponding I/O curve. Long-term potentiation (LTP) was induced by a repeated, 3-fold tetanization paradigm consisting of 15 bursts of eight pulses at 200 Hz, with an interburst interval of 200 ms; the tetani were applied at an interval of 10 min. The slopes of the fEPSPs were monitored, and the baseline was recorded for 30 min before LTP induction. Four 0.2-Hz biphasic constant-current pulses (0.1 ms/polarity) were used for baseline recording.

Behavioral Tests

All behavioral tests were performed in an individual, dedicated experimental room. On the testing day, animals were placed in the test room at least 20 min before testing in order to acclimatize. The experimenter was always blind to the treatment type when performing the tests.

Open Field Activity

Mice were placed in a 40 × 40 × 40 cm transparent Plexiglas arena and allowed to explore freely for 10 min. Exploratory activity was recorded with a TopScan behavior monitoring and analysis system (CleverSys). Exploratory behavior in the middle of the arena was determined by splitting the arena into 4 × 4 squares within the software and considering the middle 2 × 2 squares as the middle of the arena.

Object Location Test

The experiment was performed and the data analyzed as previously described (Barker and Warburton, 2011). For the first 2 days, mice were allowed to explore and habituate to a 40 × 40 × 40 cm transparent Plexiglas arena for 10 min each day. On the third day, mice were trained by placing them in the same arena with two similar objects at different corners of the arena for 10 min. After 24 h, the mice were placed in the same arena with identical objects, with one of the objects at a novel location and the other at a familiar location. Exploratory behavior was recorded with a TopScan behavior monitoring and analysis system (CleverSys) for 10 min, and the amount of time a mouse spent exploring an object was timed by hand. Object exploration was defined as a nose poke within 1 cm of the object. The object location discrimination ratio was calculated by subtracting the familiar-object exploration time from the novel-object exploration time and dividing the difference by the total time spent exploring both objects.

Morris Water Maze

The water maze consisted of a 120 cm diameter gray circular pool filled with water (23-26 °C, 40 cm deep) made opaque by adding non-toxic white paint (Crayola®). The pool was surrounded by several distant cues in the environment of the experimental room. The animals learned to find a transparent platform (10 cm in diameter) hidden 1 cm below the water surface, whose location remained constant throughout the experiment. Each animal was tested four times a day on five consecutive days, with an inter-trial interval of about 20 min. Each day, mice were systematically weighed before the first trial and then released facing the tank wall from one of four randomly selected starting points (North, South, East or West), and allowed to swim until they reached the platform. Animals that failed to find the platform within 60 s were gently directed to it and placed on it for 15 s. After the trial, mice were removed from the pool, gently dried with a towel and placed individually in a cage lined with paper towels and warmed by water bottles placed under the cages to avoid hypothermia. The criterion for learning success was reaching the platform in less than 20 s.
On the 6th day, a 60 s probe trial was conducted without the platform, and the time spent in each quadrant was analyzed. This trial was followed by a 60 s cued test to check the visual acuity of the mice. Trials were recorded with a video camera placed above the center of the pool, and the quantitative analysis was automated by means of the ANY-maze® video tracking system (Stoelting, USA).

Fear Conditioning

All fear training was performed in a set of four identical fear-conditioning chambers (Med Associates) equipped with a Med Associates Video Freeze system. Individual boxes were enclosed in sound-attenuating chambers (Med Associates). The grid floor consisted of stainless steel rods. Chambers were individually lit from above with white house lights and cleaned with 70% isopropyl alcohol in between squads. On the day of training, mice were placed into individual experimental chambers set up as conditioning context A, which consisted of a bare chamber with white walls, a steel rod floor, undiffused white house lights, and a light scent of Septanol (ICM Pharma). Following a 180 s baseline period of exposure to context A, mice were fear conditioned using a 30 s, 5000 Hz, 90 dB tone co-terminating with a 0.7 mA, 2 s foot shock. Following the shock, mice were given 120 additional seconds in the same context before being removed. Contextual fear memory retrieval was assessed 24 h later by placing mice back into the same context for a 5 min exposure session. Cue fear memory retrieval was performed 1 h after contextual fear retrieval. The environment was changed to context B, consisting of a smooth white floor, diffused white lighting, walls of a different texture and color, and a chamber scented with a different scent. Mice were exposed to the new context for 120 s before being exposed to a 30 s, 5000 Hz, 90 dB tone. Mice were further monitored for freezing activity for another 4.5 min before being returned to their home cages. For fear acquisition, average freezing after foot shock administration was scored and analyzed. For the contextual memory test, freezing across the first 3 min of exposure to context A was scored. For cue fear memory, freezing activity in context B for 5 min from the presentation of the tone was scored and analyzed.

Perfusion and Sectioning

Mice were deeply anesthetized with pentobarbital and transcardially perfused with chilled physiological saline followed by 4% paraformaldehyde in PBS. Extracted mouse brains were post-fixed in the same buffer for 8 h before being transferred to 30% sucrose solution and stored at 4 °C. Fixed mouse brains were sliced with a sliding microtome (Leica) to obtain 40 µm-thick coronal sections. Slices containing hippocampus were collected at every 6th interval in an anterior-to-posterior manner and used for downstream experiments.

5-ethynyl-2′-deoxyuridine (EdU) Cell Counting within the DG

Animals were injected with 100 µg of EdU on the day of lentiviral injection. Animals were sacrificed after 28 days. Brain sections were stained for EdU with the Click-iT EdU Alexa Fluor 647 kit (Invitrogen) and 4′,6-diamidino-2-phenylindole (DAPI; Invitrogen) according to the manufacturer's instructions. Sections were imaged with a Zeiss LSM710 laser scanning confocal microscope. The total number of EdU-positive cells within the granular layer and the area of the granular layer were quantified for each section.
Granular layer volume was calculated by multiplying the total area by 40 µm, the thickness of each section, and by 6, the sectioning interval between collected sections. The total density of EdU-positive cells in the granular layer was calculated by dividing the total EdU cell number by the volume of the granular layer. Immunohistochemistry The following primary antibodies were used to immuno-stain brain sections: goat anti-doublecortin (Santa Cruz, 1:500), mouse anti-NeuN (Millipore, clone A60, 1:500), rabbit anti-GSK3β (Cell Signaling Technology, 1:200). Brain sections were blocked in 5% donkey serum in tris-buffered saline (TBS) and 0.1% Triton-X for 1 h. Brain sections were incubated with primary antibodies at 4 °C overnight. Appropriate Alexa Fluor 555 or 647 antibodies (Invitrogen, 1:500) were incubated with the brain slices for 2 h at room temperature. Further staining with the Click-iT EdU Alexa Fluor 647 kit (Invitrogen) and DAPI (Invitrogen) was performed if needed. Brain sections were then coverslipped and kept at 4 °C in a light-proof box. Statistics Data were statistically analyzed using Student's t-test unless otherwise stated. The average values of the slope function of the field EPSP (in millivolts per millisecond) per time point were subjected to statistical analysis using the Wilcoxon signed rank test when compared within one group; p < 0.05 was considered statistically significant. Results Lentivirus Delivers shRNA to Cells in the Granular Layer and Hilus of the DG We first examined the delivery efficiency of lentivirus carrying shRNA constructs by injecting lentivirus constructs coding for shRNA against either GSK3β (shGSK3) or shRNA with a scrambled sequence (shCon) (Figure 1A) into the DG of young adult mice. Green fluorescent protein (GFP) expression was detected in cells within the granular layer and the hilus of the DG on both sides of the brain 28 days post injection (Figure 1B). Axons and dendrites from GFP-labeled cells within the granular layer are clearly visible and extend towards the CA3 region through the mossy fiber pathway and towards the molecular layer, respectively (Figure 1B). Previous literature has suggested that the inhibition of GSK3β is able to promote neurogenesis in vitro and in vivo (Morales-Garcia et al., 2012). To explore if neurogenesis was impacted in our knockdown model, EdU was injected on the same day as the lentiviral injections. A quantification of EdU-positive cells within the DG granular layer showed no statistically significant difference between shCon and shGSK3 groups (Figure 1C). DG GSK3β silencing in our mouse model thus has no significant impact on neural progenitor cell proliferation within the DG. We next determined whether our lentivirus has a preference to infect progenitor cells or granular layer neurons at immature or mature stages of development. Fluorescence imaging on brain slices from mice 10 days post injection showed that subpopulations of GFP-expressing cells for both constructs expressed the immature neuronal marker doublecortin (DCX; Figure 1D) or the mature neuronal marker NeuN (Figure 1D), and could be labeled with EdU (Figure 1E). We show here that there are many NeuN-positive cells, fewer DCX-positive cells, and very few EdU-labeled infected cells, which corresponds to the typical cell composition in the DG. Our lentivirus is thus able to infect and deliver short hairpin constructs into dividing cells and neurons at various phases of development without any significant preference for any specific cell type.
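The granular layer volume and EdU density arithmetic described at the start of this section reduces to a few lines; the sketch below is our own, and the section areas and cell count shown are placeholders rather than data from the study.

```python
SECTION_THICKNESS_UM = 40.0   # thickness of each coronal section (um)
SECTION_INTERVAL = 6          # every 6th section was collected

def edu_density_per_mm3(total_edu_cells: int, section_areas_um2: list) -> float:
    """EdU-positive cells per cubic millimetre of granular layer.

    Volume = (sum of measured granular-layer areas) x section thickness x interval."""
    volume_um3 = sum(section_areas_um2) * SECTION_THICKNESS_UM * SECTION_INTERVAL
    volume_mm3 = volume_um3 * 1e-9        # 1 mm^3 = 1e9 um^3
    return total_edu_cells / volume_mm3

# Placeholder example: 120 EdU+ cells counted over eight sections of ~0.15 mm^2 each
print(round(edu_density_per_mm3(120, [150_000.0] * 8)))   # cells per mm^3
```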
shRNA against GSK3β Reduced GSK3β Protein Levels but has no Impact on Hippocampal Neurogenesis We next sought to determine the efficiency of our shGSKβ in reducing GSK3β protein levels in vitro and in vivo. Lentiviruses carrying shCon or shGSK3 were used to infect cultured rat primary cortical neurons. shGSK3, but not shCon was able to reduce the expression of GSK3β in these cultured neurons (Figure 2A). To validate the knockdown of GSK3β in vivo, mice injected with virus were sacrificed 28 days post lentiviral injections. Whole hippocampal extracts showed significantly lower levels of GSK3β (Figures 2B,C), P = 0.0003. To further verify the efficiency of GSK3β knockdown in individual cells in the DG, brains were sectioned and immuno-stained for GSK3β. Cells infected with lentivirus expressing shGSK3 showed lower levels of GSK3β fluorescence intensity in the cytoplasm as compared to those expressing shCon (Figures 2D,E). Thus, our shGSK3 construct efficiently reduced GSK3β protein levels both in vitro and in vivo. Silencing of GSK3β in the DG has no Significant Effect on Locomotor Activity or Anxiety-Like Behavior GSK3β has been implicated in hyperactivity and anxiety (Polter et al., 2010). Differences in locomotor activity or anxiety between mice groups may confound results in behavioral tests that assess learning and memory. To determine if there are differences in locomotor or anxiety-like behavior levels between shCon and shGSK3 mice, mice were monitored in an open field 4 weeks after injection (28 DPI). No significant differences in distance traveled were found between groups ( Figure 3A). The percentage time whereby a mouse spends in the middle of the open field has been used as a measure of anxiety (Ramboz et al., 1998). Both groups showed similar times spent in the middle of the open field. (Figure 3B) Silencing of GSK3β in the DG therefore did not significantly impact locomotor activity and anxiety-like behavior. Silencing of GSK3β in the DG has no Significant Effect on Acquisition and Retrieval of Long-term Spatial Memory To assess if silencing of GSK3β in the DG will affect spatial learning and memory in healthy mice, two hippocampusdependent spatial memory tasks were used, namely the spatial object location task (experimental design summarized in Figure 3C) and the Morris Water Maze. During the test phase of the spatial object location task, both groups of mice spent more time exploring the object in the novel location as represented by the positive discrimination ratios. The discrimination ratios in our study were in line with other studies showing learning in the object location task (Ennaceur et al., 1997;Kesby et al., 2015). However there were no significant differences between the discrimination ratios of both control and GSK3β-silenced mice ( Figure 3D). To validate our object location test results, a Morris water maze-based analysis was performed. With repeated training, both groups showed a similar decrease in time and swim distance required to locate the hidden platform (Figures 4A,B). During the probe trial, both groups spent significantly more time in the quadrant where the platform was previously located as compared to the opposite quadrant ( Figure 4C). The number of entries in the area where the platform was similar in both groups as well ( Figure 4D). Together these results suggest that spatial learning and memory is not significantly affected in our DG GSK3βsilenced mice. 
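The object location discrimination ratio used in the task above is a simple arithmetic quantity; the following sketch is our own illustration of the calculation, with hypothetical exploration times rather than measured values.

```python
def discrimination_ratio(novel_s: float, familiar_s: float) -> float:
    """Object location discrimination ratio:
    (novel - familiar) exploration time divided by total exploration time."""
    total = novel_s + familiar_s
    if total == 0:
        raise ValueError("the mouse explored neither object")
    return (novel_s - familiar_s) / total

# Hypothetical exploration times (seconds) for one mouse
print(discrimination_ratio(novel_s=18.0, familiar_s=12.0))  # 0.2
```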
GSK3β Silencing Impairs Contextual Fear Memory In addition to spatial memory, the hippocampus is also involved in contextual memory consolidation. To assess if GSK3β silencing has an effect on contextual memory, we examined mice using a contextual fear-conditioning task (Figure 5A). During the habituation phase of training, both groups showed negligible freezing times. After exposure to a 30 s tone and 2 s shock, both groups showed significantly increased freezing indicating the acquisition of contextual fear memory ( Figure 5B). After 24 h, both groups were assessed for the retrieval of contextual fear memory. The GSK3β-silenced group showed significantly lower freezing times as compared to control (p = 0.0004), indicating an impairment in contextual fear memory ( Figure 5C). As the amygdala is well known to contribute towards contextual fear memory, we performed an amygdaladependent, hippocampus-independent cued fear memory test to assess amygdala function. After exposure to the cue in a novel environment, both groups of mice showed similar levels of increased freezing ( Figure 5D). The impairment in contextual fear memory formation observed in DG GSK3β silenced mice was therefore due to defective consolidation by the hippocampus. GSK3β Silencing Increases LTP To investigate if the contextual fear memory impairment observed in shGSK3 mice may be due to changes in synaptic plasticity, we performed LTP recordings on brain slices from naïve mice harvested 28 days post lentiviral injection. LTP was induced in both groups, and both groups showed maintenance of late LTP (Figures 6A,B). However, shGSK3 mice showed a much higher level of LTP upon induction as compared to control. This difference in LTP level between the groups persisted throughout the recording period of 60 min. Figure 6A showed statistically significant potentiation compared to its own base line (−15 min) after LTP induction (+15 min and +60 min, p = 0.0010 in both cases, Wilcoxon Sign Rank test). Similarly, post tetanic potentials at +15 min and +60 min in Figure 6B was also significantly higher compared to its own baseline (p = 0.0005 in both cases, Wilcoxon Sign Rank test). A comparison between the LTP from DG of control and shGSK3 mice (Figures 6C,D) also revealed statistically significant potentiation in DG-LTP in shGSK3 compared to control mice (p = 0.0005 and p = 0.0006 respectively at +15 and +60 min, Mann Whitney U Test). Discussion Previous studies examining the cognitive effects of GSK3β manipulation have typically used transgenic or pharmacological inhibitor animal models. However, functional dissection of a multi-functional protein kinase such as GSK3β with differing roles in the developing and adult brain (Salcedo-Tello et al., 2011) or in different tissues in the body (Patel et al., 2008) will require a more targeted approach. To our knowledge, our study is the first to assess the cognitive effects of long term GSK3β silencing in the DG of mice during young adulthood. Long term gene knockdown with the use of lentiviral vectors delivering shRNA has been successfully performed previously (Abbas-Terki et al., 2002). Our lentiviral-mediated delivery of shRNA against GSK3β into the mouse DG was to reduce GSK3β levels within DG granule cells. In our study, whole hippocampal GSK3β protein levels showed a modest but significant reduction of GSK3β. This modest reduction is likely due to diluting effects from the surrounding non-transduced tissue within and around the DG. 
Nonetheless, our knockdown was sufficient to elicit behavioral and electrophysiological differences between both groups of animals. Cells infected by our lentivirus included dividing cells and neurons at various stages of maturation; both of which are known to play important roles in hippocampus-dependent learning and memory (Saxe et al., 2006;Deng et al., 2009;Gu et al., 2012;Nakashiba et al., 2012;Vukovic et al., 2013). Overexpression or knock-in studies where GSK3β activity is significantly higher than physiological levels have resulted in decreased neurogenesis (Eom and Jope, 2009;Sirerol-Piquer et al., 2011). In neurocognitive disorders such as Down syndrome and Fragile X where neurogenesis is impaired, administration of GSK3β inhibitors were shown to enhance neurogenesis (Guo et al., 2012;Contestabile et al., 2013). The effects of GSK3β inhibition on neurogenesis in healthy adult mice are not as clear. Several studies utilizing various GSK3β inhibitors showed an increase in adult neurogenesis (Boku et al., 2009;Eom and Jope, 2009;Morales-Garcia et al., 2012), while a study using valproic acid (Sintoni et al., 2013) did not find any significant changes. The differences in observations may be due to differences between drugs, dosage and types of cells or animal models used. We found that shRNA-mediated silencing of GSK3β did not significantly change cell proliferation levels within the DG, suggesting that it is unlikely that the behavioral phenotypes we observed are due to the potential effect of cell-autonomous GSK3β silencing on cell proliferation. We found silencing of GSK3β in the DG particularly impacted long-term contextual fear memory retrieval but not that of longterm spatial memory. Although the DG is involved in both contextual and spatial memory, both types of memory have been shown to involve different molecular and neural pathways (Bach et al., 1995;Mizuno and Giese, 2005). Contextual fear memory has been shown to be disrupted by lesions of the entorhinal cortex while spatial memory was not affected (Burwell et al., 2004). The DG receives inputs from the entorhinal cortex via the perforant path (Lomo, 1971) and overexpression of GSK3β impairs LTP recorded from the CA3 or DG upon stimulation of the perforant path (Hooper et al., 2007;Zhu et al., 2007). Therefore, it is possible that GSK3β in the DG may have a role in the processing of inputs from the perforant pathway. The lack of spatial memory impairment observed in our study may imply that the reduction of GSK3β levels in the healthy DG does not negatively impact spatial memory or the level of viral-mediated GSK3β silencing in our study is insufficient to elicit spatial memory impairment. Previous lesion studies have shown that spatial memory remains intact even when only a small portion of the dorsal hippocampus is left unlesioned (Moser et al., 1995). Two forms of synaptic plasticity, LTP and long term depression (LTD) are generally accepted models of information storage within the hippocampus and have been considered as cellular correlates of learning and memory (Bliss and Collingridge, 1993;Bear and Abraham, 1996). LTP can be further divided into two phases; early LTP is associated with changes in trafficking and conductance of receptors at the surface of synapses while late LTP is associated with protein expression and synaptic remodeling (Abraham and Williams, 2008). 
GSK3β has previously been found to be implicated in cellular processes related to synaptic plasticity such as α-amino-3-hydroxy-5-methyl-4-isoxazolepropionic acid receptor (AMPAR) trafficking (Du et al., 2010; Wei et al., 2010; Xie et al., 2011; Nelson et al., 2013), N-methyl-D-aspartate receptor (NMDAR) trafficking (Chen et al., 2007; Zhu et al., 2007; Peineau et al., 2009), GABA-A surface receptor expression (Rui et al., 2013), pre-synaptic vesicle trafficking (Zhu et al., 2007, 2010), transcription of immediate early genes (IEGs; Graef et al., 1999), and transcription of genes required for L-LTP (Ma et al., 2011). Previous recordings from the CA1 region of the hippocampus have shown that GSK3β activity is decreased during the induction of LTP and increased during the induction of LTD (Hooper et al., 2007; Peineau et al., 2007). Subsequent pharmacological and genetic manipulations have shown that regulation of GSK3β activity is pivotal for the switch between LTP and LTD. Overactive GSK3β prevents the induction of LTP (Hooper et al., 2007; Zhu et al., 2007; Dewachter et al., 2009), while inhibition of GSK3β prevents the induction of LTD but allows LTP induction (Peineau et al., 2007, 2009; Xie et al., 2011). Induction of LTP has also been shown to inhibit GSK3β activity and prevent subsequent induction of LTD for up to an hour (Peineau et al., 2007). Our finding that long-term GSK3β silencing enables induction of persistently higher LTP supports the findings of a study which showed that pharmacological inhibition of GSK3β with 10 mM LiCl for 60 min prior to high frequency stimulation (HFS) is able to enhance induction of LTP in the CA1 region (Cai et al., 2008). However, another study involving pharmacological inhibition of GSK3β with 20 mM LiCl or 2 µM CT99021 for 30 min prior to HFS showed no increase in DG-LTP in wild type animals (Franklin et al., 2014). It is possible that the shorter incubation period prior to HFS may not be sufficient to elicit a difference in DG-LTP for wild type animals. The enhancement of LTP in the DG has been shown to play functionally differing roles from the CA1 region and can result in memory impairments (Okada et al., 2003). Our observation that LTP is enhanced in the DG of GSK3β-silenced mice may therefore potentially correlate with the contextual fear memory deficits observed. This enhanced LTP but decreased memory retrieval is consistent with studies reporting dissociation in hippocampal LTP and associative learning in mice (Sahún et al., 2007; Gruart et al., 2012). Although the hippocampus is involved in associative learning, the contribution of different synapses is still poorly understood. Recent findings that reported differences in the functional aspects of different synapses within the hippocampus suggested that evolution of the timed changes in synaptic strength during functional organization did not coincide with the sequential distribution with respect to anatomical criteria and connectivity (Gruart et al., 2014). Moreover, hippocampal intrinsic and extrinsic circuits are involved in acquisition of cue and context information during associative learning (Carretero-Guillén et al., 2015). Future studies can explore the effects of GSK3β silencing on cellular processes related to synaptic efficiency in specific cell populations such as adult DG granule cells. Administration of GSK3β inhibitors to healthy individuals has resulted in impairments in cognitive performance (Weingartner et al., 1985; Stip et al., 2000; Wingo et al., 2009).
We believe a targeted approach will decrease potential detrimental side effects when GSK3β activity is manipulated for therapy or prophylaxis. Our study is the first step in such a direction and future work will involve manipulating GSK3β activity in different brain regions at different developmental stages to better understand the role of GSK3β in learning and memory. Author Contributions BC designed and performed all in vitro and in vivo experiments, all animal behavior test, analyzed data and wrote the manuscript; JRR tested efficiency of shRNA using Western blotting analysis; TN injected animals for some in vivo experiments; MD produced viruses, injected animals and did imaging for some in vivo experiments; JZ and ZZ provided materials for the experiments and provided critical inputs to the design of some experiments; ZB performed and analyzed data for Morris water maze experiments and provided critical inputs on animal behavioral studies; AD, SHN and SS performed and analyzed data for LTP recordings; ELKG initiated and directed the entire study, designed experiments, analyzed data and wrote the manuscript. Funding and Disclosure This work was supported by Competitive Research Program (CRP) funds from National Research Foundation, Singapore, GlaxoSmithKline (GSK) Academic Center of Excellence (ACE) Award and Abbott Nutrition to ELKG.
Interaction effect between blood selenium levels and stroke history on all-cause mortality: a retrospective cohort study of NHANES Aim The study aimed to investigate the interaction effect between blood selenium levels and stroke history on all-cause mortality. Methods In this retrospective cohort study, participant data were obtained from the National Health and Nutrition Examination Survey (NHANES) 2011–2018. The covariates were screened via the backward selection method in weighted univariate and multivariate Cox regression models. Weighted univariate and multivariate Cox regression models were conducted to investigate the association of blood selenium and stroke history with all-cause mortality. The results were expressed as hazard ratios (HRs) and 95% confidence intervals (CIs). The synergy index (SI) was used to assess the assistive interaction. The association was further explored in different gender groups. Results Totally, 8,989 participants were included, of whom 861 (9.57%) died. Participants with blood selenium ≥192.96 ug/L were associated with lower odds of all-cause mortality (HR = 0.70, 95% CI: 0.58–0.84), whereas those with a stroke history were associated with a higher risk of all-cause mortality (HR = 1.57, 95% CI: 1.15–2.16). Compared to participants with blood selenium ≥192.96 ug/L and non-stroke history, participants with both blood selenium < 192.96 ug/L and stroke history had a higher all–cause mortality risk (HR = 2.31, 95% CI: 1.62–3.29; SI = 0.713, 95% CI: 0.533–0.952). All participants with blood selenium < 192.96 ug/L and stroke history were related to higher all–cause mortality risk (HR = 1.61, 95% CI: 1.21–2.13). In males, the interaction effect of blood selenium and stroke history on all–cause mortality (HR = 2.27, 95% CI: 1.50–3.46; SI = 0.651, 95% CI: 0.430–0.986) increased twice. Conclusion Blood selenium and stroke history have an interaction effect on all-cause mortality. Increasing selenium-rich food or supplement intake, especially for individuals with a stroke history, may improve poor prognosis. Introduction Stroke, a neurological emergency, is the second leading cause of death and a major contributor to disability worldwide (1).Stroke affects 13.7 million people and causes 5.5 million deaths (2).Stroke is also responsible for about 140,000 deaths in the U.S. every year, which is about one out of every 20 deaths in the country (3).Stroke history is an independent risk factor for poor prognosis in ischemic stroke patients (4).Oxidative stress and inflammation play significant roles in the pathogenesis of stroke (5)(6)(7). Selenium, an essential trace element, plays a critical role in various physiologic processes, including oxidative stress, thyroid hormone metabolism, and immune function (8,9).Lower circulating selenium levels have been linked to an elevated risk of cardiovascular disease, increased risk of ischemic stroke, and all-cause mortality (10).In patients with heart failure, blood selenium was independently associated with a 50% higher mortality rate (11).A lower concentration of selenium could increase the risk of ischemic stroke (12).And Wang et al. reported that plasma selenium was inversely associated with the risk of a first ischemic stroke (13).Zhao et al. 
(14) also found a negative relationship between blood selenium levels and stroke.In addition, the modifying effect of selenium was observed in metabolic disease, cardiovascular disease (CVD), and neurologic symptoms (15)(16)(17).We hypothesize that blood selenium level and stroke history may have an interaction with the long-term prognosis of participants. Thus, this study aimed to investigate the interaction effect of blood selenium and stroke on all-cause mortality.The findings of this study will contribute to existing knowledge on the role of blood selenium in stroke prognosis and provide guidelines for the development of targeted interventions to improve outcomes for individuals with stroke histories. Study design and population The data for this retrospective cohort study were extracted from the National Health and Nutritional Examination Survey (NHANES) between 2011 and 2018.NHANES is a comprehensive survey that provides valuable data on the health and nutritional status of individuals in the U.S.These secondary survey data are usually selected through a complex sampling design to collect information through interviews, physical examinations, and laboratory tests.The protocols of NHANES have been reviewed and approved by the National Center for Health Statistics (NCHS) Ethics Review Board.All participants signed written informed consent.Our study was exempted from screening by the Ethics Committee of Beijing Boai Hospital.Individuals aged ≥ 45 years were included in the database.Participants were excluded if they met any of the following criteria: (1) missing information about stroke, (2) missing data about blood selenium, (3) missing survival data, and (4) missing important co-variables. Blood selenium assessment Whole blood samples were collected in vacuum containers and transported to the National Center for Environmental Health under appropriate frozen conditions (−20 • C).After the dilution treatment, blood selenium levels were measured using inductively coupled plasma dynamic reaction cell mass spectrometry.Two groups were divided based on the median blood selenium level. Definition of a stroke Stroke was defined as the question "Has a doctor or other health professional ever told you that you had a stroke?"Participants with a response of "yes" were considered to have a stroke history (18). Covariate The following covariates were included: age, gender, race, poverty income ratio (PIR), smoking, alcohol consumption, physical activity, chronic kidney disease (CKD), anticoagulants, and cardiovascular agents.Information on age, gender, race, PIR, smoking, alcohol consumption, physical activity, and medication use was collected from family interviews and mobile examination centers using standardized questionnaires.PIR, calculated by the family income ratio to the federal poverty threshold, was used to assess the socioeconomic status of participants (19).Smoking was defined as the answer "yes" to the question "Smoked at least 100 cigarettes in life" (20).CKD was defined as an estimated glomerular filtration rate <60 mL/min/1.73m 2 or urine albumin-to-creatinine ratio ≥ 30 mg/g (21).Cardiovascular agents include agents of antiadrenergic, antianginal, antiarrhythmic, inotropic, miscellaneous cardiovascular, vasodilators, vasopressors, angiotensin II inhibitors, aldosterone receptor antagonists, renin inhibitors, neprilysin inhibitors, and antihypertensives. 
Outcomes The study's outcome was all-cause mortality. Mortality status, cause of death, and follow-up time were determined based on the National Death Index (NDI), which can be downloaded from the NCHS website at https://www.cdc.gov/nchs/index.htm. Included participants were followed up until 31 December 2019. The International Classification of Diseases was utilized to ascertain the cause of death. Statistical analysis Appropriate weighting (SDMVPSU, SDMVSTRA, WTMEC2YR) was carried out in the statistical analysis; the masked variance unit pseudo-stratum was SDMVSTRA, and the masked variance unit pseudo-primary sampling unit was SDMVPSU. Continuous variables were presented as mean and standard error (SE), and differences among groups were analyzed by a weighted t-test. Categorical variables were presented as numbers and percentages, and differences among groups were analyzed by the chi-square test and Fisher's exact test. The potential covariates were selected through weighted univariate and multivariate Cox regression models. Weighted univariate and multivariate Cox regression models were then conducted to investigate the interaction effect of blood selenium and stroke on all-cause mortality, with results expressed as hazard ratios (HRs) and 95% confidence intervals (CIs); the confidence interval was applied for evaluating the reliability of an estimate. Characteristics of participants Figure 1 shows the screening process for participants. A total of 13,173 participants aged ≥45 years were extracted from the database in 2011-2018. Then, participants were excluded with missing information on stroke history (n = 20), blood selenium level (n = 4,119), and survival data (n = 27). Individuals who were missing important covariates (n = 18) were also excluded. Finally, 8,989 participants were included in the final analysis. The characteristics of the included individuals are shown in Table 1. In total, 6.52% (n = 586) of the participants had a stroke history. During a median follow-up of 52 months, 861 individuals died. Statistical differences were observed in age, gender, race, educational level, PIR, heavy alcohol drinking, energy intake, heart disease, dyslipidemia, CKD, height, weight, body mass index, diastolic blood pressure, albumin, creatinine, uric acid, anticoagulants, antiplatelet agents, stroke, and all-cause mortality between the blood selenium <192.96 ug/L and the blood selenium ≥192.96 ug/L groups (all P < 0.05). Associations of blood selenium and stroke history with all-cause mortality The associations between blood selenium, stroke, and all-cause mortality are shown in Table 2. In model 2, covariates were adjusted for age, gender, race, PIR, smoking, physical activity, CKD, anticoagulants, and cardiovascular agents. Compared to those who had blood selenium <192.96 ug/L, those who had blood selenium ≥192.96 ug/L were associated with lower odds of all-cause mortality (HR = 0.70, 95% CI: 0.58-0.84). Participants with a stroke history were associated with a higher risk of all-cause mortality compared to those without a stroke history (HR = 1.57, 95% CI: 1.15-2.16).
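A minimal sketch of the survey-weighted Cox models described under Statistical analysis, using the Python lifelines package; the column names are placeholders, and this simplified approach applies the NHANES examination weights but does not reproduce the full design-based (stratum and PSU) variance estimation or the model-selection steps used in the paper.

```python
import pandas as pd
from lifelines import CoxPHFitter

# Hypothetical analytic file: one row per participant with follow-up time (months),
# death indicator, exposures, adjustment covariates, and the NHANES exam weight.
df = pd.read_csv("nhanes_analytic_file.csv")
df["low_selenium"] = (df["blood_selenium"] < 192.96).astype(int)
df["low_se_and_stroke"] = df["low_selenium"] * df["stroke_history"]  # joint-exposure term

cph = CoxPHFitter()
cph.fit(
    df[["follow_up_months", "died", "low_selenium", "stroke_history",
        "low_se_and_stroke", "age", "male", "WTMEC2YR"]],
    duration_col="follow_up_months",
    event_col="died",
    weights_col="WTMEC2YR",   # sampling weights; robust (sandwich) errors recommended
    robust=True,
)
cph.print_summary()           # hazard ratios are exp(coef)
```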
The interaction effect between blood selenium and stroke history on all-cause mortality The additive interaction effects of blood selenium and stroke were established, including blood selenium ≥192.96 ug/L and no stroke, blood selenium ≥192.96 ug/L and stroke, blood selenium <192.96 ug/L and no stroke, and blood selenium <192.96 ug/L and stroke. Table 3 and Figure 2 show more detail on the interaction effect terms. Compared to participants with blood selenium ≥192.96 ug/L and non-stroke history, participants with blood selenium <192.96 ug/L and stroke history were associated with a higher all-cause mortality risk (HR = 2.31, 95% CI: 1.62-3.29). The SI was 0.713 (95% CI: 0.533-0.952), indicating an interaction effect was observed between blood selenium and stroke history on all-cause mortality. We further investigated the association between stroke history and all-cause mortality at different blood selenium levels. Stroke history remained associated with a higher all-cause mortality risk in participants with blood selenium <192.96 ug/L (HR = 1.61, 95% CI: 1.21-2.13) (Table 4). Interaction effect of blood selenium and stroke history on all-cause mortality in participants of different gender groups As summarized in Table 5 and Figure 3, further analysis was performed to investigate the interaction effect of blood selenium and stroke history on all-cause mortality in populations of different genders. Participants with a stroke history and blood selenium <192.96 ug/L were associated with an increased risk of all-cause mortality (HR = 2.27, 95% CI: 1.50-3.46). In addition, the interaction effect of blood selenium and stroke history on all-cause mortality existed in males (SI = 0.651, 95% CI: 0.430-0.986). Discussion The study aimed to investigate the interaction effect of blood selenium and stroke history on all-cause mortality. Individuals with both low blood selenium and stroke histories have a higher all-cause mortality risk compared to those with neither condition. An interaction effect was found between blood selenium and stroke history on all-cause mortality. Furthermore, the interaction effect of blood selenium and stroke history on all-cause mortality was also observed in males. Selenium is an essential nutrient for normal human physiological processes. Xing et al. (22) reported that blood selenium was associated with a lower risk of CVD mortality in heart failure. A meta-analysis also found that high blood selenium levels in the body were associated with a decreased risk of CVD incidence and mortality (23). Similarly, Zhao et al. (24) found higher selenium concentrations were related to lower all-cause mortality. Stroke is an important cause of death in the U.S. (25). Stroke history was an independent risk factor for poor prognosis in ischemic stroke patients (4). A Japanese study found that older adults who have experienced a stroke could have a lower life expectancy (26). Our study found higher odds of all-cause mortality in individuals with low blood selenium and stroke histories, which stresses the potential synergistic influence of these two conditions on health outcomes. Our findings were in concordance with previous studies that have reported an individual association between low selenium levels and increased mortality risk, as well as an association between stroke history and a higher incidence of mortality (10, 27, 28). However, the novelty of our study lies in showing the interactive effect, emphasizing the importance of assessing these conditions concurrently.
In addition, the findings in subgroups suggested that the interaction effect of blood selenium and stroke history also existed in males. Li et al. (29) found a relationship between serum selenium and all-cause mortality in both genders, and higher mortality has been reported in women with ischemic stroke (30). The difference may be related to the limited sample size and lower blood selenium levels in males. To comprehend the underlying mechanisms contributing to the higher odds of all-cause mortality in individuals with low blood selenium and stroke histories, it is imperative to explore the biological pathways implicated. Selenium, as an essential immune nutrient, also plays a pivotal role in anti-oxidative mechanisms and thyroid hormone metabolism (31). In participants with stroke histories, the oxidative stress and inflammatory cascades that ensue may be exacerbated in individuals with suboptimal selenium levels (32). The compromised antioxidant capacity, coupled with an impaired ability to modulate inflammation, may synergistically contribute to an augmented susceptibility to adverse outcomes in the presence of a stroke history. Moreover, the observed interaction effect may also be rooted in selenium's influence on cardiovascular health. Selenium has been implicated in endothelial function, blood clotting, and vascular integrity, all of which are critical in the aftermath of a stroke event (33). In individuals with low blood selenium levels, compromised vascular resilience may synergize with the deleterious effects of stroke, leading to an enhanced risk of mortality (34). Selenium also regulates cardiomyocyte apoptosis (35). Unraveling these intricate pathways is essential for a comprehensive understanding of the observed interaction effects. Healthcare providers should monitor blood selenium levels in stroke patients, considering it a potential modifiable factor to improve long-term outcomes. Incorporating selenium-rich foods or supplements in at-risk populations, especially those with stroke, may serve as a mitigating factor against poor prognosis. Brazil nuts are one of the most abundant sources of selenium, with one nut providing nearly 100% of the recommended daily intake of selenium (36). Other nuts, such as walnuts and almonds, also contain selenium, albeit in smaller amounts (37). Seafood, particularly tuna, shrimp, and sardines, is another excellent source of selenium (38). For those who prefer plant-based options, whole grains like wheat and rice, as well as legumes such as lentils and chickpeas, can contribute to selenium intake (39). Incorporating these foods into daily meals can help individuals achieve desired blood selenium levels. It is essential to note that while selenium is crucial for health, excessive intake can lead to toxicity. Therefore, it is advisable to consult with a healthcare provider or nutritionist before significantly altering selenium intake, especially through supplements. FIGURE. The interaction effect between blood selenium and stroke history on all-cause mortality in different gender populations.
We were unable to differentiate hemorrhagic and ischemic stroke and their duration from the secondary data used, which limited our ability to investigate and compare stroke subtypes and blood selenium with all-cause mortality. Given the retrospective nature of this study, some important information that could have been beneficial to this study might not have been available. Finally, since information on stroke was self-reported by participants, this may lead to some gaps in the recall of information. Conclusion Our study suggested an interaction effect between blood selenium and stroke on all-cause mortality. The complex interplay of these factors necessitates further research to delineate the underlying mechanisms and inform targeted interventions, ultimately contributing to more effective strategies for reducing mortality risk in individuals with low blood selenium levels and stroke. Model 1 was a crude model. Model 2 was adjusted for age, gender, race, PIR, smoking, alcohol consumption, physical activity, CKD, anticoagulants, and cardiovascular agents. The synergy index (SI) was used to assess the additive interaction: SI = (HR11 − 1)/[(HR01 − 1) + (HR10 − 1)], where HR01 and HR10 indicate that only exposure a or only exposure b occurs, and HR11 indicates that the two exposures occur simultaneously; when the CI of the SI contained 1, there was no additive interaction effect. To further explore the association, subgroup analyses stratified by gender were performed. As no criteria for the division of serum selenium levels were available, we applied the median (192.96 ug/L) to divide the whole blood selenium level. Sensitivity analyses were also performed to assess the suitability of the median as a cut-off value; the results are shown in the Supplementary material. All statistical analyses were conducted using R, version 4.2.3, and SAS, version 9.4. Results were considered statistically significant with a two-sided P < 0.05.
TABLE. The interaction effect of blood selenium and stroke history on all-cause mortality.
TABLE. Associations between stroke history and all-cause mortality at different blood selenium levels.
TABLE. The interaction effect of blood selenium and stroke history on all-cause mortality in different gender groups.
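For concreteness, the synergy index defined in the note above can be computed with a small helper; the function is our own sketch, and the single-exposure hazard ratios in the example call are hypothetical placeholders rather than values from Table 3.

```python
def synergy_index(hr_both: float, hr_a_only: float, hr_b_only: float) -> float:
    """Synergy index for additive interaction on the hazard-ratio scale.

    hr_both   : HR when both exposures are present (HR11)
    hr_a_only : HR when only exposure a is present (HR10)
    hr_b_only : HR when only exposure b is present (HR01)
    SI < 1 suggests a sub-additive interaction; SI > 1 a super-additive one."""
    return (hr_both - 1.0) / ((hr_a_only - 1.0) + (hr_b_only - 1.0))

# Reported joint HR of 2.31 combined with hypothetical single-exposure HRs
print(round(synergy_index(hr_both=2.31, hr_a_only=1.9, hr_b_only=1.6), 3))
```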
Point process temporal structure characterizes electrodermal activity Significance Electrodermal activity (EDA) is a readout of the body’s sympathetic nervous system measured as sweat-induced changes in the electrical conductance properties of the skin. Interest is growing in using EDA to track physiological conditions such as stress levels, sleep quality, and emotional states. The integrate-and-fire physiology underlying EDA production suggests that its interpulse intervals should obey an inverse Gaussian probability model. In an analysis of EDA data recorded in 11 healthy volunteers during quiet wakefulness, we established that the inverse Gaussian model accurately characterized the interpulse intervals. Our findings show a physiologically based statistical model provides a parsimonious and accurate description of EDA. Electrodermal activity (EDA) is a direct readout of the body's sympathetic nervous system measured as sweat-induced changes in the skin's electrical conductance. There is growing interest in using EDA to track physiological conditions such as stress levels, sleep quality, and emotional states. Standardized EDA data analysis methods are readily available. However, none considers an established physiological feature of EDA. The sympathetically mediated pulsatile changes in skin sweat measured as EDA resemble an integrate-and-fire process. An integrate-and-fire process modeled as a Gaussian random walk with drift diffusion yields an inverse Gaussian model as the interpulse interval distribution. Therefore, we chose an inverse Gaussian model as our principal probability model to characterize EDA interpulse interval distributions. To analyze deviations from the inverse Gaussian model, we considered a broader model set: the generalized inverse Gaussian distribution, which includes the inverse Gaussian and other diffusion and nondiffusion models; the lognormal distribution which has heavier tails (lower settling rates) than the inverse Gaussian; and the gamma and exponential probability distributions which have lighter tails (higher settling rates) than the inverse Gaussian. To assess the validity of these probability models we recorded and analyzed EDA measurements in 11 healthy volunteers during 1 h of quiet wakefulness. Each of the 11 time series was accurately described by an inverse Gaussian model measured by Kolmogorov-Smirnov measures. Our broader model set offered a useful framework to enhance further statistical descriptions of EDA. Our findings establish that a physiologically based inverse Gaussian probability model provides a parsimonious and accurate description of EDA. electrodermal activity | point processes | statistics | autonomic nervous system | signal processing E lectrodermal activity (EDA), which measures the electrical properties of the skin as changes in conductance, is mediated almost exclusively by the sympathetic branch of the autonomic nervous system (1). The skin continuously receives sympathetic innervation. Consequently, EDA is continuously present due to sweat glands filling and releasing sweat onto the skin. Changes in the level of filling occur in response to internal stimuli (physiological and psychological) and external stimuli such as threats or dramatic changes in ambient temperature. EDA is a component of the primal flight-or-fight response that is routinely used as a measure of the sympathetic nervous system activity in psychological studies, polygraph tests and studies of stress (1). 
Measures of EDA are now being developed as a neuromarketing tool to evaluate consumer responses to different products or promotions (2). For this reason, there is growing interest in the development of analysis methods to characterize EDA accurately. EDA has a characteristic pattern consisting of two distinct components. There is a baseline or tonic component that drifts gradually with time. On top of the tonic component is a phasic component composed of pulse events that vary in amplitude, shape, and spacing. Sweat release is a pulsatile process because a sufficient volume of sweat has to accumulate, fill the glands, and be released onto the skin to observe an EDA change. This integrate-and-fire nature of the phasic component is believed to represent fast changes in sympathetic nervous system activity. EDA activity can show large variation in both baseline activity and pulse activity within and between individuals. Most current EDA analysis methods focus mainly on the phasic component to characterize sympathetic activity. These methods fall in two categories: rate-based methods and deconvolution methods. The rate-based methods specify a time window and estimate a moving average of the number of pulse events per time window (3)(4)(5). For the deconvolution methods, a single pulse shape is assumed, and the EDA signal is represented as the convolution of this pulse shape with neural inputs. For this technique, the neural input is deconvolved from the EDA while simultaneously fitting pulse shape parameters (6)(7)(8)(9)(10). Hence, deconvolution methods report the occurrence times and amplitudes of the pulse events. Although widely used, neither the rate-based nor the deconvolution methods are based on EDA's well-established physiology. Likewise formal statistical modeling is not used to characterize the interpulse interval dynamics (pulse rate and pulse times) or the pulse amplitudes. The important advance that we report here is the application of elementary point process models based on physiology to characterize the dynamics of EDA interpulse intervals. These parsimonious probability models provide a physiologically based framework for statistical analysis of EDA. The balance of this paper is organized as follows. In Physiology and Theory, we formulate generalized inverse Gaussian integrate-and-fire probability models derived from EDA's integrate-and-fire properties and, as alternatives, lognormal, exponential, gamma, and generalized inverse Gaussian non-integrate-and-fire probability models. In Application and Results, we use these models in the analysis of EDA recorded from 11 healthy subjects during quiet wakefulness. The Discussion describes the implications of our findings for future basic science and translational studies.
Physiology and Theory The Anatomy and Physiology of Electrodermal Activity. To develop our statistical models of EDA activity we review the anatomy and physiology of sweat production in the skin (1). Each sweat gland consists of three parts: the dermal gland, the duct that connects the gland to the skin surface, and the pore where the duct opens to the skin (Fig. 1A; 1). The dermal portion of the gland is innervated by sudomotor nerves, a part of the peripheral nervous system that is predominantly under sympathetic control (1). Sweat is produced in the gland by sympathetically induced abrupt increases in spiking activity (action potentials) in the sudomotor nerves. These electrical events are called sudomotor bursts. Sweat produced in response to these bursts accumulates in the duct. Once the duct is full, it pushes open the pore and spills onto the skin. The sweat either evaporates or is reabsorbed through the walls of the duct. With the duct now empty, the accumulation process begins again (1, 11, 12). Sweat on the skin's surface increases its electrical conductance (inverse of resistance) because its salt content facilitates the propagation of electrical currents. Electrical conductance across the skin can be measured in a standard fashion by placing two electrodes on either the palm or fingers, applying a constant voltage, and measuring the current (1). The pulsatile effects of the sudomotor bursts measured at the skin are termed galvanic skin responses (GSRs). The second-to-second changes in skin sweating measured as second-to-second changes in skin conductance are termed EDA (1). EDA measurements have two distinct components: tonic activity and phasic activity. Tonic activity represents generally ongoing EDA that reflects the background state of the EDA. The phasic activity reflects primarily the GSR or pulsatile events. Statistical Models of EDA Pulses. We investigate four statistical models to characterize the times between GSR events recorded during EDA measurements: generalized inverse Gaussian, lognormal, gamma, and exponential. The choice of the generalized inverse Gaussian model follows directly from the physiology described previously. The phasic component of the EDA measurements is comprised of pulsatile events. The times between these pulse events are governed by sympathetic stimulation of the glands, the sweat production and its accumulation in the glands, sweat release on the skin, and sweat reabsorption and evaporation. We represent this four-step sequence of sweat accumulation in the gland and its release onto the skin as an integrate-and-fire process defined by a Gaussian random walk with drift diffusion (Fig. 1B). It is well known that the times between firing events for this elementary diffusion process obey an inverse Gaussian probability model (13)(14)(15). Therefore, we chose as the first model the generalized form of the inverse Gaussian probability density defined for an interpulse interval time x > 0 as

f(x | λ, ψ, χ) = (ψ/χ)^(λ/2) [2K_λ(√(ψχ))]^(−1) x^(λ−1) exp[−(χx^(−1) + ψx)/2], [1]

where −∞ < λ < ∞, ψ ≥ 0, χ ≥ 0, and K_λ is the modified Bessel function of the third kind with index λ (16). The generalized inverse Gaussian is a flexible family of probability models that includes the inverse Gaussian model as a special case, λ = −1/2. For λ ≤ 0, the generalized inverse Gaussian probability density gives diffusion (integrate-and-fire) models other than the inverse Gaussian. For λ > 0, the generalized inverse Gaussian density gives nondiffusion (non-integrate-and-fire) models.
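To make Eq. 1 concrete, the short sketch below evaluates the generalized inverse Gaussian density in the (λ, ψ, χ) form given above and checks numerically that the λ = −1/2 case coincides with the ordinary inverse Gaussian; this is our own illustration built on SciPy's geninvgauss and invgauss, with arbitrary parameter values, and is not code from the study.

```python
import numpy as np
from scipy.special import kv
from scipy.stats import geninvgauss, invgauss

def gig_pdf(x, lam, psi, chi):
    """Generalized inverse Gaussian density in the (lambda, psi, chi) form of Eq. 1."""
    eta = np.sqrt(psi * chi)
    return ((psi / chi) ** (lam / 2.0) / (2.0 * kv(lam, eta))
            * x ** (lam - 1.0) * np.exp(-0.5 * (chi / x + psi * x)))

psi, chi = 2.0, 8.0                      # arbitrary illustrative parameters
x = np.linspace(0.1, 20.0, 200)

ours = gig_pdf(x, -0.5, psi, chi)
# SciPy's geninvgauss(p, b, scale): p = lambda, b = sqrt(psi*chi), scale = sqrt(chi/psi)
ref_gig = geninvgauss.pdf(x, -0.5, np.sqrt(psi * chi), scale=np.sqrt(chi / psi))
# lambda = -1/2 is the inverse Gaussian with mean sqrt(chi/psi) and shape chi
mean, shape = np.sqrt(chi / psi), chi
ref_ig = invgauss.pdf(x, mean / shape, scale=shape)

assert np.allclose(ours, ref_gig) and np.allclose(ours, ref_ig)
```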
As χ → 0, the generalized inverse Gaussian yields the special case of a gamma probability model with parameters λ > 0 and ψ/2 ≥ 0. When λ = 1, as χ → 0, the generalized inverse Gaussian becomes the exponential probability density (16). Hence, by fitting the generalized inverse Gaussian model we can study systematically how well EDA interpulse interval time series can be characterized by a broad class of integrate-and-fire and non-integrate-and-fire models. To broaden the class of alternatives to integrate-and-fire models, we also consider the lognormal distribution. Its probability density function is defined as

f(x | μ, σ) = (2πσ²x²)^(−1/2) exp[−(log x − μ)²/(2σ²)], [2]

where −∞ < μ < ∞ and σ > 0. The lognormal has been widely used as an empirical model to characterize interevent distributions in point process time series (16). We postulate that under stable, or approximately stable, background conditions the inverse Gaussian model will provide an accurate description of the inter-GSR (interpulse) event times as the integrate-and-fire model is a plausible description of skin sweat production. This assumes that the group activity of the sweat glands shows a coherent behavior. If there is variation in the behavior of the individual glands, even when the background state is stable, then it is possible that the elementary inverse Gaussian model may not apply. There may be a mixture of inverse Gaussian models which could produce either longer or shorter interevent intervals relative to a single inverse Gaussian model. These differences could be manifested by either heavier or lighter tails relative to an inverse Gaussian model. To model heavy tail distributions, we consider the lognormal and the gamma distributions, whereas to model the light tail distributions, we consider the gamma and the exponential distributions. The gamma is flexible in that it can have either a heavier or a lighter tail than the inverse Gaussian depending on the values of the parameters (17). Although the exponential distribution is a special case of the gamma, it represents the null model of an underlying Poisson process governing GSR event production. We quantify the tail behavior of a probability density f(x) by evaluating the settling rate, which is defined as

r = lim_(x→∞) f(x)/[1 − F(x)], [3]

where F(x) is the corresponding cumulative distribution function. The settling rate is the limit of the hazard function as x tends to infinity (17). By this definition, a distribution is commonly classified as light, medium, or heavy tailed based on whether the settling rate is infinite, positive but finite, or zero, respectively. The more slowly (rapidly) the tail of a distribution settles, the heavier (lighter) the tail. We can divide our set of probability models into four medium-tailed (exponential, gamma, inverse Gaussian, and generalized inverse Gaussian) and one heavy-tailed distribution (lognormal). The four medium-tailed distributions are distinguished relative to each other based on their parameter values, which determine the respective settling rates (17, 18). Application Experimental Data. Our experimental protocol was approved by the Massachusetts Institute of Technology Institutional Review Board, and all subjects provided written informed consent. We collected EDA data from 12 healthy volunteers (six men) between the ages of 22 and 34 while awake and at rest (19). Electrodes were connected to the second most distal phalange of the second and fourth digits of each subject's nondominant hand. Approximately 1 h of EDA data were collected at 256 Hz.
Subjects were seated upright and instructed to remain awake. They were allowed to read, meditate, or watch something on a laptop or tablet but not to use the instrumented hand. We assumed skin and ambient temperature were constant for the duration of the experiment. One subject's data were not included in the analysis because we learned, after the data collection, that this individual occasionally experienced a Raynaud's type phenomenon. This would affect the quality of the EDA data. Data from the remaining 11 subjects were analyzed using MATLAB 2017a. Data Preprocessing and EDA Pulse Selection. Preprocessing consisted of two steps: 1) detecting and removing artifacts and 2) isolating the phasic component. Artifact detection was done based on the derivative of the time series since large rapid changes are physiologically impossible for skin conductance. Artifact removal was done in two parts, first correcting for artifact-related large magnitude changes in the remainder of the signal and then interpolating the few seconds around the artifact itself. Then a low-pass FIR filter was used to estimate and remove the slow-moving tonic component of the signals, thereby, isolating the phasic component. The data preprocessing is described in further detail in Subramanian et al. (20). Since there are underlying tonic fluctuations, absolute pulse amplitude alone is insufficient to extract pulses reliably. Therefore, we computed locally adjusted amplitudes for all detected peaks using the MATLAB function findpeaks. The findpeaks algorithm computes a prominence or relative amplitude for each peak, by adjusting the amplitude of each peak as the height above the highest of neighboring valleys on either side. The valleys are chosen based on the lowest point in the signal between the peak and the next intersection with the signal of equal height on either side. With this method, a peak with small absolute amplitude can be rewarded in its prominence value if it is in a region of data with low activity. We then used a threshold on this prominence value to account for the varying baseline, instead of on the absolute amplitude. We used a prominence threshold of 0.005 to extract peaks across all subjects, unless this resulted in too few or too many pulses for each hour-long recording. This was primarily verified by visual inspection of extracted pulses, as well as rough estimates of 60 and 360 pulses as generous bounds for what should be expected of 1 h of data at rest with little external stimulation. This corresponds to one pulse every 10 to 60 s on average (4,(21)(22)(23)(24)(25)(26). In the case that too few pulses were extracted, we gradually reduced the prominence threshold by 0.001 until the number of pulses exceeded 100 and no obvious pulses were missed by visual inspection (verified by three different viewers independently). In the case too many pulses were extracted, we gradually increased the prominence threshold by 0.001 until the number of pulses was fewer than 360 and the extracted pulses were distinguishable from sensor noise by visual inspection (verified by three different viewers independently). If changing the threshold in increments of 0.001 resulted in drastic changes in the number of pulses each time, we reduced the increment to 0.0005. Although not fully automated at this stage, the design of this method to extract pulses attempted to take into account the wide variation in baseline levels of EDA activity seen across subjects. 
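The prominence-based pulse selection just described was implemented with MATLAB's findpeaks; the sketch below shows an analogous step using SciPy's find_peaks (our substitution), applied to a phasic EDA trace sampled at 256 Hz. The threshold and array names are illustrative only.

```python
import numpy as np
from scipy.signal import find_peaks

FS = 256.0                      # sampling rate (Hz)
PROMINENCE = 0.005              # starting prominence threshold used in the paper

def extract_interpulse_intervals(phasic: np.ndarray,
                                 prominence: float = PROMINENCE) -> np.ndarray:
    """Detect EDA pulses by peak prominence and return interpulse intervals (s)."""
    peak_idx, props = find_peaks(phasic, prominence=prominence)
    peak_times = peak_idx / FS
    return np.diff(peak_times)

# Illustrative use on a hypothetical 1-h phasic signal
# phasic = np.load("subject07_phasic.npy")
# ipis = extract_interpulse_intervals(phasic)
# print(len(ipis) + 1, "pulses detected")
```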
The extracted peaks included smaller peaks that other methods would generally ignore as noise. However, we chose to include them in the analysis. Statistical Model Fitting and Comparison. First, we restricted the parameter space to λ ≤ 0 for the generalized inverse Gaussian model to find the best integrate-and-fire model for each data series. To allow for statistical improvements by capturing deviations from the integrate-and-fire models, we fit four other probability models to each data series: generalized inverse Gaussian non-integrate-and-fire (λ > 0), lognormal, gamma, and exponential models. We fit all models by maximum likelihood (27, 28). We assessed goodness-of-fit by using KS plots and by computing Akaike's information criterion (AIC), defined as

AIC = −2 log f(θ̂_ML) + 2p, [4]

where f(θ̂_ML) is the likelihood evaluated at the maximum likelihood parameter estimates and p is the number of parameters. A lower AIC indicates a better fit. A KS plot compares the rescaled quantiles from the fit of the estimated probability model with the quantiles of an exponential distribution with rate 1 using the time-rescaling theorem (29). This theorem states that any point process can be rescaled to an exponential distribution with rate 1 using its conditional intensity (hazard) function. The KS distance computes the maximum distance between the quantiles of the rescaled data and the uniform distribution, which is a simple transform of an exponential distribution with rate 1. A smaller KS distance indicates that the model is more similar to the structure observed in the data. We computed 95% confidence intervals (5% significance cutoffs) for the KS plot and compared the KS distances across models (30). A KS distance that is within (outside) the 95% confidence intervals suggests that the model offers (fails to offer) a reasonably accurate description of the data. We also compared the models using a tail behavior analysis, in which the settling rates of the models were compared to determine the heaviness of the tails of the distributions. We hypothesized that each EDA interpulse interval distribution results from the activity of multiple sweat glands. This could lead to deviations from the inverse Gaussian with slightly heavier or lighter tails. These deviations can be captured statistically by right-skewed models such as the lognormal, generalized inverse Gaussian, and gamma that together allow for more flexibility in tail behavior. Results Extraction of EDA Pulses. Fig. 1C shows an example of an excerpt of extracted pulses for Subject S07. This includes pulses large enough to be used in most analyses as well as those that are much smaller and usually either smoothed out or ignored as noise. We included both types of pulses for all subjects and did not distinguish between them. The majority of subjects showed appreciable fluctuations in the tonic component across time (SI Appendix, Fig. S1). Across the 11 subjects, the total number of pulses in the 1-h time window ranged between 97 and 348, including the distantly spaced smaller pulses (SI Appendix, Fig. S2). (Table 1 note: The best performing model per subject is in bold. The final prominence threshold used and the number of pulses extracted are also indicated for each subject. GIG, generalized inverse Gaussian.) The final prominence thresholds used ranged between 0.0025 and 0.023. Findings from Statistical Model Comparison. For all 11 subjects, the optimal generalized inverse Gaussian diffusion model was an inverse Gaussian, indicated by λ = −0.5 in Table 1.
This inverse Gaussian was always within the significance cutoff according to KS distance, indicating that it provides an accurate description of the data and supporting our hypothesis that the pulsatile sweat release events of EDA can be modeled as an integrate-and-fire process. When allowing for deviations from the inverse Gaussian, for 9 of the 11 subjects, one or more of the lognormal, gamma, or generalized inverse Gaussian nondiffusion models was able to improve statistically on the fit of the inverse Gaussian (Tables 2 and 3; Fig. 2; and SI Appendix, Figs. S3-S7). The exponential was never a better fit than the inverse Gaussian, and was only within the significance cutoff for 4 of the 11 subjects. This suggests that for the majority of subjects, the exponential model did not offer an accurate description of the data. Both AIC and KS distance were in agreement for the best fit models for the data of 8 of the 11 subjects. This corresponded to 1 generalized inverse Gaussian diffusion model (S08), 1 generalized inverse Gaussian nondiffusion model (S10), 4 lognormal (S02, S03, S09, and S11), and 2 gamma (S04 and S05) models. For S01 and S06, the AICs and KS distances suggested different best fit distributions between inverse Gaussian and lognormal. For S07, they identified different best fit distributions between lognormal and gamma. It is reasonable to expect that the results from AIC and from KS distance will not match exactly since different metrics are intended to capture different aspects of model fits. However, the fact that they agree across the majority of subjects reinforces that there is specific statistical structure in the data that can be captured with a parsimonious model. The second phase of comparing the models was analyzing the tail behavior (Table 4) using the settling rates estimated for each of the five models for all subjects. Across all 11 subjects, the lognormal always had the heaviest tail since it is commonly classified as a heavy-tailed distribution. The generalized inverse Gaussian integrate-and-fire model had the next heaviest tail, indicated by the next smallest settling rate. The lognormal was the only other class of models besides the generalized inverse Gaussian diffusion models that was within the significance cutoff for all subjects. This suggests that deviations from the inverse Gaussian tended toward heavier tails that were best captured by the lognormal. Among the remaining models, the generalized inverse Gaussian non-integrate-and-fire models always had lighter tails than the integrate-and-fire models. The tails of the gamma were even lighter. Omitting the exponential, since it fit poorly in the majority of cases, the remaining models-lognormal, generalized inverse Gaussian (diffusion and nondiffusion), and gamma-together provide a systematic framework to: 1) evaluate the presence of inverse Gaussian structure in EDA using diffusion models; and 2) enhance statistical descriptions using models capable of capturing a range of tail behavior properties. Among the three subjects for whom the AIC and KS distance disagreed on the best fit model (subjects S01, S06, and S07), there was only one case (S07) in which this disagreement predicted very different tail behavior. In this case, the AIC and KS distance were divided between the lognormal and the gamma as the best model, one of which predicted a very heavy tail while the other predicted a very light tail. 
However, the tail of the interpulse interval distribution may have been relatively underrepresented for this subject, so the fit was primarily based on the shape near the mode, which both the lognormal and gamma distributions fit well. With a data collection period longer than 1 h, we would most certainly arrive at a more accurate description of the tail behavior. Discussion In this study, we used EDA data collected from 11 healthy volunteers at rest to test our hypothesis that EDA contains highly regular statistical structure that is consistent with integrate-and-fire physiology that describes sweat production and release. To do that, we fit probability models to the EDA time series and quantified the goodness of fit using AIC and KS distance. We also assessed tail behavior by computing the settling rates. Our findings show that for each of the 11 data series, the model fits and tail behavior were consistent with integrate-and-fire sweat gland physiology. There was also room for statistical improvements to capture deviations due to EDA data reflecting the simultaneous activity of many sweat glands. The physiology of sweat gland activity predicts that the interpulse intervals for sweat gland pulses should obey an inverse Gaussian distribution. This is an elementary model for processes where there is a build-up of a continuous variable to a threshold. Crossing the threshold leads to an observed event. Here, the build-up is the accumulation of sweat in the gland in response to sympathetic stimulation. The observed event is the GSR pulse measured as EDA. Our results show that the EDA data are consistent with an inverse Gaussian model for all subjects (Table 3), and the inverse Gaussian is also the optimal integrate-and-fire probability model (λ = −0.5) from the entire class of generalized inverse Gaussian integrate-and-fire models fit to these data (Table 1). We refined our hypothesis further by considering that measured EDA is the aggregation of data from hundreds of sweat glands. This predicts that the interpulse intervals could likely follow a mixture of inverse Gaussian models. This mixture could deviate from a single inverse Gaussian in tail behavior, which can be captured by other non-integrate-and-fire models. Lighter tails were modeled as gamma or generalized inverse Gaussian nonintegrate-and-fire models, whereas heavier tails were modeled as lognormal ( Table 3). More of the data are consistent with the heavier tail lognormal distribution, suggesting more frequent longer interpulse intervals across sweat glands than would be predicted by a homogeneous inverse Gaussian model. Furthermore, we did not observe any multimodal structure in the EDA suggesting a degree of coherent behavior among the sweat glands in the recording area. The non-integrate-and-fire models provide a systematic framework to make individualized improvements in the statistical fits. However, the fact remains that there is an inverse Gaussian model that is an accurate statistical description of the data for each subject. This reinforces the idea that the statistical structure in the data are fundamentally guided by the physiology of sweat gland activity which can be approximated well as an elementary integrate-and-fire process which we took to be a Gaussian random walk with drift diffusion. Our results link directly the physiology of sweat glands and the statistical structure of the EDA data collected at the skin surface. 
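The integrate-and-fire picture invoked here is straightforward to illustrate numerically. The sketch below, with illustrative parameter values rather than values estimated from any subject, simulates a Gaussian random walk with drift up to a fixed threshold and checks that the resulting first-passage times are consistent with the corresponding inverse Gaussian distribution (mean threshold/drift, shape (threshold/σ)²).

import numpy as np
from scipy import stats

def first_passage_times(n_events=500, drift=1.0, sigma=1.0,
                        threshold=10.0, dt=1e-3, seed=0):
    # Gaussian random walk with drift, integrated until it first crosses
    # the threshold; the crossing time is recorded and the walk is reset.
    rng = np.random.default_rng(seed)
    times = np.empty(n_events)
    for i in range(n_events):
        level, t = 0.0, 0.0
        while level < threshold:
            level += drift * dt + sigma * np.sqrt(dt) * rng.standard_normal()
            t += dt
        times[i] = t
    return times

times = first_passage_times()
# The matching inverse Gaussian has mean threshold/drift and shape
# (threshold/sigma)**2; scipy's invgauss(mu, scale) uses mu = mean/shape
# and scale = shape.
mean, shape = 10.0 / 1.0, (10.0 / 1.0) ** 2
print(stats.kstest(times, stats.invgauss(mean / shape, scale=shape).cdf))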
Current detailed signal processing methods for EDA analysis require significant computational complexity (6-10). However, looking to the physiology provided a principled framework by which to drastically reduce model dimensionality-all of our models had only one, two, or three parameters-and increase the accuracy of the data description. This result has implications for understanding and tracking the sympathetic activity in the autonomic nervous system in a more informed way. Several important extensions are possible in future work. We will study EDA pulse amplitudes along with interpulse intervals, by taking into account that both arise from the same integrate-and-fire process. Having established a natural point process structure in the EDA time series under approximately stable conditions, we can now study their dynamics over longer time periods, by applying history-dependent inverse Gaussian models like those developed for heart rate variability (31)(32)(33)(34)(35). These more detailed models could include other relevant covariates such as skin and environmental temperature. We will also study EDA in other contexts, such as under different emotional conditions, during sleep, in response to painful stimuli, and under general anesthesia. Our findings provide a principled, physiologically based approach for extending EDA analyses to these more complex and important applications.
Quantifying interference in multipartite quantum systems The characterization of quantum correlations is crucial to the development of new quantum technologies and to understand how dramatically quantum theory departs from classical physics. Here we systematically study single- and multiparticle interference patterns produced by general two- and three-qubit systems. From this we establish on phenomenological grounds a new type of quantum correlation for these systems, which we name quantum interference, deriving some quantifiers that are given explicitly in terms of the density matrix elements of the complete system. By using these quantifiers, we show that, contrary to our expectations, entanglement is not a required property for a composite quantum system to manifest multiparticle interference. I. INTRODUCTION The concept of wave-particle duality, commonly described as the ability of a quantum particle to produce interference, is a central ingredient of quantum theory, which is absent in our classical intuition of the physical world. In fact, as famously stated by Feynman [1], this is "the mystery" manifested by microscopic particles, "which is impossible, absolutely impossible, to explain in any classical way, and which has in it the heart of quantum mechanics". Perhaps, the best illustration of wave-particle duality, now understood as a consequence of the quantum superposition principle, is given by the double-slit experiment. This experiment was originally presented by Young in the early 1800s to ascertain the wave properties of light and has become widely used to understand many fundamental aspects of quantum mechanics since the inception of the theory, e.g. the complementarity principle [2]. In this experiment, a beam of particles impinges on a mask with two closely spaced slits through which some of them can pass to have their position detected by a sensitive screen placed on the opposite side. For the case in which there is no information about which slit each of the particles traverses, the detection screen exhibits an interference pattern, therefore, revealing the wave behavior [3]. The first experimental realization of the double-slit experiment on the molecular level was presented in the 1960s using electrons [4], later conducted with C 60 [5], as well as with larger molecules [6]. However, only a few years ago this experiment could be realized in full agreement with Feynman's idea [7]. In the late 1980s, a remarkable new class of experiments expanding the successful category of single-particle interferometers was inaugurated; the so-called multiparticle interferometers. Ref. [8] provides a comprehensive review on this topic. At that time, the new interferometers brought about an alternative way of studying quantum phenomena which are richer and even more intriguing than those resulting from quantum superposition, evidenced in singleparticle systems. The experimental realizations were first obtained using entangled photons [9][10][11], and more than a decade later were performed using the internal states of four ions in a single trap [12]. Above all, it was claimed that multiparticle interferometers can provide measurable outcomes that are associated with the very nature of quantum entanglement. In this case, the signature of entanglement was recognized to be a surprising interference effect that takes place if one monitors the arrival positions of the particles composing the system in coincidence [13,14]. 
In contrast, no single-particle interference is detectable with this configuration. This behavior, although surprising, is to some extent intuitive if we recall that quantum superposition gives rise to wave-particle duality, and that entanglement was the name given by Schrödinger to quantum superposition in a many-particle system [15]. As a matter of fact, one of the main lessons that we have learned from the longstanding studies with interferometers was that the fundamental concepts of quantum superposition and entanglement are the responsible for the emergence of single-and multiparticle interference, respectively. Here, we study the single-and multiparticle interference behavior of bipartite and tripartite systems when each of the parties is submitted to a double-slit interferometer. From the analysis of the bipartite case, we derive a formula to quantify the amount of two-particle interference, which is shown to be a quantum correlation different from Bell nonlocality [16], entanglement [17] and discord [18]. Similarly, we show how to quantify three-particle interference for the important case of general three-qubit systems. In this last case, we identify two different classes of three-particle interference, one represented by GHZ-like states and the other by W -like states. Nevertheless, contrary to what has been believed so far [19,20], we find that it is possible for a composite quantum system to produce multiparticle interference without being entangled. Our paper is organized as follows: In Sec. II we give a clear definition of two-particle interference relying on a scheme in which both particles are individually submitted to a double-slit interferometer. Based on the interference patterns manifested in this thought experiment, we derive an expression that quantifies quantum interference for general two-qubit system and provide some key examples to compare with the results of other nonclassical correlations. In Sec. III, we extend the quantum interference study to the important case of three-qubit systems. Again, the system is analyzed under the perspective that each particle experiences a double-slit apparatus. The validity of the derived three-particle interference quantifier is also illustrated with some examples of pure and mixed states. Conclusions and remarks are given in Sec. VI. II. TWO-QUBIT CASE In this section we want to quantify interference for a general two-qubit system in terms of the interference properties that the subsystems produce upon individual and joint observation. To do so, we consider a thought experiment which consists in a source which emits pairs of particles, a and b, in opposite directions so that each particle is submitted to a double-slit experiment to be further detected on a screen which marks the detection positions. The scheme, which was previously studied by one of the authors in the context of decoherence theory [21], is shown in Fig. 1. Here, we call |A 1 and |A 2 the states of particle a when it passes through the upper and lower slits on the right, respectively. Similarly, we denote by |B 1 and |B 2 the respective states of particle b. If we use the four states |A i |B j , with i, j = 1, 2, as a basis to describe this bipartite system, the general state is described by the density operator whose density matrix has 16 entries: ρ ij with i, j = 1, 2, 3 and 4. 
Now, if we are interested in obtaining information about the joint probability of detecting particle a at a point z A on the left screen and particle b at z B on the right screen in coincidence, this is given by the relation where |z A and |z B are the respective position eigenstates. Let us consider that after passing through a slit, the wavefunction associated with the emerging particle is spherical. This assumption allows us to write the wavefunctions and for the particles on the left and on the right side, respectively. Here, k represents the wavenumber, and r (A,B)l the distances from the slits to the detection points, with (l = 1, 2), as seen in Fig. 1. FIG. 1. (Color online) Schematic illustration for the two-qubit system discussed in the text. A source S generates a pair of particles a and b, which are submitted to the left and right double-slit apparatuses, respectively. After passing through the double-slit stage, the particles are detected on the screens S A and S B , which permanently mark their position as z A and z B . The observable manifestation of single-and two-particle interference in this apparatus is used to quantify interference for general two-qubit systems. In the regime in which the distance between the slits is much smaller than the distance between the double-slit apparatus and the screen, the Fraunhofer diffraction limit is valid [21,23], such that we have r A(1,2) ≈ L ∓ θz A and r B(1,2) ≈ L ∓ θz B , with L and θ defined in Fig. 2. Taking all this information into consideration, together with Eq. (1), it turns out that the joint probability density will have the following form: After some algebra, we find that the joint probability density becomes where we used the notation ρ ij = R ij + iI ij for the entries of the density matrix, with i as the imaginary unit, i.e., R ij and I ij are the real and imaginary parts of ρ ij , respectively. Observe that the first four terms in Eq. (6) (outside brackets) represent the probability density of detecting particles a and b, respectively, at z A and z B for the cases in which there is a complete information about the path they have taken, i.e., the slit they have traversed. These terms do not contribute with any type of interference effects. The first four terms inside brackets quantifies the existence of oscillation in the coincidence detection rate (CDR) of particles a and b. Finally, the last four terms inside brackets are the responsible for the single-particle interference effects. That is, the existence of these terms gives rise to individual spatial interference patterns on the detection screens. A genuine oscillation in the CDR of particles a and b at the distant screens is a phenomenon that we expect only if these particles have a nonclassical correlation. Therefore, at this point we argue that the "detectability" of this type of oscillation is directly linked to the amount of quantum interference that these two particles have. Later, we show that this correlation is not entanglement, as widely accepted in the literature of multiparticle interference. Having thus described the problem, we shall seek a proper quantifier of multiparticle interference, starting from this simplest case of two qubits. However, it is important to envisage that, even if particles a and b present only single-particle interference on their respective screens, these two independent interference patterns exert an influence on the measured oscillation in the CDR obtained when the data of the two screens are considered together. 
In this form, we must keep in mind that genuine two-particle interference only occurs if, after subtracting the single-particle oscillation contributions from the two-particle ones, the oscillation in the CDR still remains. In other words, this residual oscillatory effect is the signature of genuine two-particle interference, which implies the existence of quantum-mechanical correlations. In order to be more precise, observe, for example, from Eq. (6) that if the sum 2(R_23 + R_14) and the product 4[(R_13 + R_24)(R_12 + R_34)] are nonzero, both will contribute with an interference mode of the type cos(2kθz_A)cos(2kθz_B). However, the first term embodies the contributions to this mode of both genuine two-particle interference and the combined single-particle interference fringes, whereas the second term represents a contribution only of the combined single-particle interference fringes formed on each screen. Therefore, if these two terms are equal, it is because there is no genuine two-particle interference with this mode. On the other hand, if the first term contribution is larger than the second, the system manifests genuine two-particle interference, i.e., quantum correlations. Similarly, three other independent analyses can be made relating the contribution of single- and two-particle interference. If the terms 2(R_23 − R_14) and 4[(I_24 − I_13)(I_34 + I_12)] are nonzero, they contribute with an interference of the type sin(2kθz_A)sin(2kθz_B); if the terms 2(I_23 + I_14) and 4[(I_24 − I_13)(R_12 + R_34)] are nonzero, both contribute with a sin(2kθz_A)cos(2kθz_B) interference mode; and if the terms 2(I_14 − I_23) and 4[(R_13 + R_24)(I_34 + I_12)] survive, they will contribute with an interference of the type cos(2kθz_A)sin(2kθz_B). It is important to observe that these four types of two-particle interference compose a basis for a two-dimensional Fourier series, which can generate any sinusoidal function f(z_A, z_B) with periodicity ℓ = π/kθ both in z_A and z_B. As a result, the four types of interference are linearly independent (LI), such that their influence can be analyzed separately. Based on the arguments above, in what follows we shall establish our quantum interference quantifier for a pair of qubits built upon the oscillations in the CDR that they are capable of producing. In this respect, we mathematically define our quantifier on the basis of the imbalance between the two- and single-particle interference contributions to the CDR for each of the four LI oscillatory modes. If this imbalance is null, the distant particles do not manifest any detectable oscillation of this type, and consequently the state has no (multiparticle) quantum interference. This feature indicates the complete absence of quantum correlations between the two particles. On the other hand, if there exists some nonzero oscillation in the CDR of the particles, it is because the state of the particles has some amount of two-particle interference. Under this phenomenological definition, our two-qubit quantum interference quantifier I_2^(2)(ρ) assumes the form of Eq. (12), given explicitly in terms of the entries of the density matrix, ρ_ij = R_ij + iI_ij. The subscript and superscript 2 in I_2^(2)(ρ) stand for the number of parties and the dimension of the Hilbert space of each party, respectively. The four absolute value terms in Eq. (12) represent the imbalance between the square of the two- and single-particle contributions to each of the four LI modes: cos(2kθz_A)cos(2kθz_B), sin(2kθz_A)sin(2kθz_B), sin(2kθz_A)cos(2kθz_B) and cos(2kθz_A)sin(2kθz_B), respectively [22]. We also included a multiplicative (normalization) factor of 1/2 so that the maximally entangled Bell states give I_2^(2) = 1.
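Because Eq. (12) is assembled from the four mode imbalances just listed, it can be evaluated directly from a density matrix. The sketch below is a reconstruction under that reading, assuming each absolute-value term compares the squares of the two-particle and combined single-particle contributions and including the overall factor of 1/2; it reproduces the values quoted in the examples that follow, but the exact form should be checked against Eq. (12).

import numpy as np

def two_qubit_interference(rho):
    # rho: 4x4 density matrix in the basis |A1 B1>, |A1 B2>, |A2 B1>, |A2 B2>
    # (equivalently |00>, |01>, |10>, |11>).  Indices are 1-based as in the
    # text, so R(i, j) and I(i, j) read rho[i-1, j-1].real and .imag.
    R = lambda i, j: rho[i - 1, j - 1].real
    I = lambda i, j: rho[i - 1, j - 1].imag
    modes = [
        # (two-particle contribution, combined single-particle contribution)
        (2 * (R(2, 3) + R(1, 4)), 4 * (R(1, 3) + R(2, 4)) * (R(1, 2) + R(3, 4))),  # cos*cos
        (2 * (R(2, 3) - R(1, 4)), 4 * (I(2, 4) - I(1, 3)) * (I(3, 4) + I(1, 2))),  # sin*sin
        (2 * (I(2, 3) + I(1, 4)), 4 * (I(2, 4) - I(1, 3)) * (R(1, 2) + R(3, 4))),  # sin*cos
        (2 * (I(1, 4) - I(2, 3)), 4 * (R(1, 3) + R(2, 4)) * (I(3, 4) + I(1, 2))),  # cos*sin
    ]
    return 0.5 * sum(abs(a ** 2 - b ** 2) for a, b in modes)

# Checks against the values quoted in the examples below:
bell = np.zeros((4, 4), complex)
bell[0, 0] = bell[3, 3] = bell[0, 3] = bell[3, 0] = 0.5        # (|00> + |11>)/sqrt(2)
singlet = np.zeros((4, 4), complex)
singlet[1, 1] = singlet[2, 2] = 0.5
singlet[1, 2] = singlet[2, 1] = -0.5                            # (|01> - |10>)/sqrt(2)
p = 0.3
werner = p * singlet + (1 - p) * np.eye(4) / 4
print(two_qubit_interference(bell))     # expected 1.0
print(two_qubit_interference(werner))   # nonzero even though the state is separable for p <= 1/3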
Let us now present some key examples of the application of the interference quantifier of Eq. (12). For the sake of clarity, in the following examples we will use the simpler computational basis states |00⟩, |01⟩, |10⟩ and |11⟩. Accordingly, the pure two-qubit states |Φ(θ, φ)⟩ = cos θ |00⟩ + e^{iφ} sin θ |11⟩ and |Ψ(θ, φ)⟩ = cos θ |01⟩ + e^{iφ} sin θ |10⟩ provide the result I_2^(2) = sin²(2θ). This result is satisfactory since it is independent of φ, and I_2^(2) = 1 for the four maximally entangled Bell states, which are obtained when θ = π/4 and θ = 3π/4 (for φ = 0), and I_2^(2) = 0 for the separable states that emerge when θ = 0 and θ = π/2. Another key example is the case of an arbitrary separable pure state |ψ⟩ = |ψ_A⟩|ψ_B⟩, with |ψ_A⟩ = cos(θ_1/2)|0⟩ + e^{iφ_1} sin(θ_1/2)|1⟩ and |ψ_B⟩ = cos(θ_2/2)|0⟩ + e^{iφ_2} sin(θ_2/2)|1⟩. For this case, after some calculations, Eq. (12) yields I_2^(2) = 0, which is the expected result for an arbitrary separable pure state. Another interesting fact to observe is that Eq. (12) does not depend on the diagonal terms of the density matrix. Thus, for any mixed state that is diagonal in this basis we have I_2^(2) = 0, as expected. We now consider as an example a special case of two-qubit mixed states, the Werner state. This state, which plays a central role in quantum information theory, is defined as a mixture of a singlet state, |ψ^(−)⟩ = (|01⟩ − |10⟩)/√2, and completely depolarized noise [24], ρ̂_W = p |ψ^(−)⟩⟨ψ^(−)| + (1 − p) 𝟙/4, with 0 ≤ p ≤ 1. Werner demonstrated that this state is entangled only if p > 1/3. Another remarkable result about this state is that quantum discord, another measure of nonclassical correlations, is nonzero whenever p > 0 [25,26]. As a consequence, this result showed that there exists quantumness in some separable states. In Fig. 3 we show the behavior of quantum interference (given by the definition of Eq. (12)), entanglement and discord as a function of the parameter p. From the quantum interference curve, we can extract another remarkable result based on the study of Werner states: it is possible to have two-particle interference in the absence of entanglement. In fact, the separable Werner states given by 0 < p ≤ 1/3 are able to produce two-particle interference. This finding contradicts the widely accepted idea that multiparticle interference demands entanglement [8,19,20]. Furthermore, we also have that states with nonzero quantum interference do not necessarily violate a Bell inequality. Indeed, the state of Eq. (14) only violates the CHSH inequality if p > 1/√2 ≈ 0.707 [16]. In the next section we extend the study of quantification of interference to the three-qubit case.

FIG. 3. Quantum interference I_2^(2)(ρ_W), discord, and entanglement of formation for the two-qubit Werner state in Eq. (14). As can be seen, both interference and discord are nonzero in the interval 0 ≤ p ≤ 1/3, in which the state is separable. All three correlations reach unity when p = 1.

III. THREE-QUBIT CASE

In the same spirit of the previous section, we now want to quantify interference for a general three-qubit system by means of the interference properties that the subsystems manifest upon individual and joint observations.
In the present case, we consider a gedanken experiment consisting in a source that creates simultaneously three particles a, b and c, which are individually sent towards one of three double-slit apparatuses to be further detected on the screens S A , S B and S C , as shown in Fig. 4. Similar to the two-qubit case, the quantum states of the particles according to the slit they traverse are denoted by |A 1 and |A 2 for particle a; |B 1 and |B 2 for particle b; and |C 1 and |C 2 for particle c. By choosing the eight product states |A i |B j |C k , with i, j, k = 1, 2, as the basis states for representing the general tripartite states of the particles, we have that the density operator of the system is written as follows: FIG. 4. (Color online) Schematic illustration of the three-qubit system discussed in the text. A source S creates three particles a, b and c, which are individually submitted to a corresponding double-slit apparatus. After, the particles are detected on the screens S A , S B and S C , which mark their position at the points z A , z B and z C , respectively. By investigating the single-, two-and three-particle interference behavior that they produce, information about the tripartite quantum interference can be obtained. which has 64 entries: ρ ij with i and j varying from 1 to 8. In this case, we have that the joint probability of measuring particles a, b and c at the points z A , z B and z C in coincidence is given by By following the same steps of the previous section, if we assume that the wavefunctions of the particles after emerging from the slits are spherical, and the geometry of all three double-slit apparatuses are such that the Fraunhofer diffraction limit is applicable, we find that the joint probability density is given by: with ρ ij = R ij + iI ij being the coefficients of the density operator in Eq. (15), i.e., R ij and I ij are the real and imaginary parts of ρ ij , respectively. Notably, each of the 34 terms in Eq. (17) has an important physical significance. The first eight terms, which depend on the diagonal entries ρ ii of the density matrix of the operator of Eq. (15), represent the probability density of measuring particles a, b and c, respectively at z A , z B and z C , for the cases in which there is a complete information about the slits they pass through before detection. As such, these terms alone cannot give rise to any type of interference effects. The following eight terms, which are expressed in terms of products of three oscillatory functions of z A , z B or z C , are the responsible for the emergence of all types of three-particle interference, i.e., interference in the CDR when all three particles are monitored. In a similar fashion, the following twelve terms, which contain products of two oscillatory functions of one of the variables z A , z B and z C , correspond to all possible two-particle interference patterns, that is, interference in the CDR when the particles are monitored in pairs. Finally, the last six terms, which are written in terms of single oscillatory functions of one of the variables z A , z B and z C , account for all types single-particle interference. These terms are the responsible for the emergence of spatial interference on the screens S A , S B and S C . It is well known that there exist two different classes of genuine tripartite entangled states for three-qubit systems [27,28]. 
One represented by the Greenberger-Horne-Zeilinger (GHZ) state, |GHZ = 1/ √ 2(|000 + |111 ), and the other by the W state, |W = 1/ √ 3(|001 + |010 ) + |100 ). For the GHZ state, if we measure the state of one of the subsystems such that the resulting state is either |0 or |1 , the other two subsystems are left in a separable pure state. Differently, the W state retains a pair of subsystems in a maximally bipartite entangled state when one of the subsystems is measured under equivalent conditions [29]. In this context, a very important property of three-qubit systems is that states pertaining to these two different classes cannot be converted into each other by any local operation and classical communication (LOCC) process. By the same token, we infer that a three-qubit system can produce two types of three-particle interference: one observed when all three particles are monitored simultaneously, that we call GHZ-like interference; and another which appears only when the particles are monitored in pairs, irrespective of the pair, that we call W -like interference. We shall see that these two types of interference are independent of each other. In this perspective, this means that a general interference quantifier, which is supposed to work for an arbitrary three-qubit system, mixed or not, must quantify separately the amount of GHZ-like and W -like interference, and then sum both together. In this form, our first objective in this section is to understand how to quantify these two types of interference behaviors from the single-, two-and three-particle interference produced by the system of Fig. 4. To do this, we must first understand in more details what kind of interference signatures GHZ and W states may exhibit in that apparatus. Initially, let us address the issue of how GHZ states manifest interference effects in the scheme of Fig. 4. In this case, the central idea is that interference in a GHZ state is only detected if the three parties are jointly observed, and no quantum effect is evident when only two or one of the parties are evaluated [30,31]. Therefore, we expect that the only interference effect produced by GHZ states in the experiment of Fig. 4 is an oscillation in the CDR of the three particles, when their arrival times on the screens S A S B and S C are investigated. Such oscillations are observed when one varies the relative position among the detection points z A , z B and z C . On the other hand, no other type of interference effect is expected for GHZ states, e.g. oscillations in the CDR when only two of the particles are observed, or single-particle spatial interference. Given this idea, our next step is to find a way to quantify genuine three-particle interference effects from the expression of the probability density in Eq. (17). As mentioned above, the terms which contain the product of three oscillatory functions of the detection points z A , z B and z C are the responsible for the appearance of three-particle interference. However, we have to observe that combinations of the product of two oscillatory functions with a single oscillatory function, or of three single oscillatory functions, can produce a similar behavior. Therefore, to correctly extract the effect of GHZ-like three-particle interference, we must subtract the effect of these combinations from that of the three-particle interference to express the genuine three-particle interference. By inspection of Eq. 
(17), if we are interested for example in studying the emergence of an interference effect of the type cos(2kθy A ) cos(2kθy B ) cos(2kθy C ), we see that it can happen whether the sum R 18 +R 27 +R 36 +R 45 , one of the products (R 15 +R 26 +R 37 +R 48 )(R 14 +R 23 + R 58 +R 67 ), (R 13 +R 24 +R 57 +R 68 )(R 16 +R 25 +R 38 +R 47 ), (R 12 +R 34 +R 56 +R 78 )(R 17 +R 28 + R 35 +R 46 ), or the product (R 15 +R 26 +R 37 +R 48 )(R 13 +R 24 +R 57 +R 68 )(R 12 +R 34 +R 56 +R 78 ) is nonzero. However, similar to the case of two qubits, we have to pay attention to which of these terms really contribute with genuine three-particle interference of this type. Among all these contributions, the only one which embodies the possibility of genuine three-particle interference is the sum R 18 + R 27 + R 36 + R 45 . Besides the interference mode of the type cos(2kθy A ) cos(2kθy B ) cos(2kθy C ), seven other types of three-particle interference, which are LI, can take place. They will all be listed below. Nevertheless, before doing that, we anticipate that our GHZ-like interference quantifier will be given by the imbalance among the three-, two-and single-particle interference contributions to the CDR of all eight LI oscillatory modes. Given this phenomenological definition, the amount of GHZ-like interference contained in each of the eight possible three-particle interference modes are given by: i) cos(2kθy A ) cos(2kθy B ) cos(2kθy C ) mode for particles A-B-C: ii) cos(2kθy A ) cos(2kθy B ) sin(2kθy C ) mode for particles A-B-C: iii) cos(2kθy A ) sin(2kθy B ) cos(2kθy C ) mode for particles A-B-C: v) sin(2kθy A ) sin(2kθy B ) cos(2kθy C ) mode for particles A-B-C: vi) cos(2kθy A ) sin(2kθy B ) sin(2kθy C ) mode for particles A-B-C: viii) sin(2kθy A ) cos(2kθy B ) sin(2kθy C ) mode for particles A-B-C: As can be seen, the eight interference mode quantifiers represent the imbalance among the square of the three-, two-and single interference contributions to each of the possible LI modes, similar to what was realized in the two-qubit case. If such imbalance is zero, it means that the oscillatory mode in question is not detectable. Therefore, if the tripartite state has some amount of genuine GHZ-like interference, at least one of the above mode quantifiers will be nonzero. Overall, the total amount of GHZ-like interference in an arbitrary threequbit state is given by with the I (i) GHZ elements given in Eqs. (18) to (25). The multiplicative (normalization) factor 1/4 was placed in order to obtain I GHZ (ρ) = 1 for the maximally entangled GHZ state. Let us now give some examples to illustrate the validity of the interference quantifier of Eq. (26). Again, for clarity of the examples, we will use the computational basis in the following form: . In this form, as a first example we have that the state |ψ 1 = cos(α) |000 + e iφ sin(α) |111 provides I GHZ = sin 2 (2α), which attains unit only for the maximally entangled GHZ state, when α = π/4 or 3π/4. On the other hand, for the important case of a general pure separable state, |ψ 2 = |ψ A |ψ B |ψ C , with |ψ A = cos(θ 1 /2) |0 + e iφ 1 sin(θ 1 /2) |1 , |ψ B = cos(θ 2 /2) |0 + e iφ 2 sin(θ 2 /2) |1 and |ψ C = cos(θ 3 /2) |0 + e iφ 3 sin(θ 3 /2) |1 , after some calculations we find that I GHZ = 0. Now, we address the issue of how W states produce interference effects in the apparatus of Fig. 4. 
In this case, we have that the three particles must manifest interference in the CDR when they are observed two at a time, but no interference is detected when the three particles are simultaneously monitored, and no single-particle (spatial) interference is visualized. Thus, interference effects for these states are only detectable for the arrival times of the particles when they are measured in pairs. That is to say that oscillations can be found only when one varies the relative position between any pair of detection points among z A , z B and z C . As such, we now proceed to find a way to quantify W -like interference from the probability density given in Eq. (17). In doing so, we first need to identify all terms which contain the product of two oscillatory functions of the variables z A , z B and z C , as well as the combination (product) of two single oscillatory functions producing similar effects. Below, we list the twelve possible LI oscillatory modes which are important for quantifying W -like interference, and the corresponding contributions that they have from the terms of Eq. (17). Again, we shall consider the contributions of two-particle interference and subtract the analogous contribution due to combinations of single oscillatory functions in order to express the genuine two-particle interferences. As we have done in the case of two-qubits and GHZ states, we quantify the amount of interference for each mode with the absolute value of the difference between the square of the two-particle and combined contributions: i) cos(2kθy A ) cos(2kθy B ) mode for particles A-B: ii) cos(2kθy A ) sin(2kθy B ) mode for particles A-B: Given these quantifiers for all possible two-particle interference modes, and being aware that a W -like interference only exists if all three particles are quantum correlated, but with the quantum correlations established only for pairs of particles, we will now write the interference quantifier for W -like states. To this end, we must understand that for the Wlike interference to exist, at least one of the modes of each type A-B, A-C, and B-C must be nonzero. Therefore, the total amount of W -like interference in an arbitrary three-qubit state is given by with the I W bc elements given as in Eqs. (27) to (38). The normalization factor (9/8) 3 was introduced to set I W (ρ) = 1 for the maximally entangled W state. By testing this quantifier for an arbitrary pure separable state |ψ 1 = |ψ A |ψ B |ψ C , with |ψ A = cos(θ 1 /2) |0 + e iφ 1 sin(θ 1 /2) |1 , |ψ B = cos(θ 2 /2) |0 + e iφ 2 sin(θ 2 /2) |1 and |ψ C = cos(θ 3 /2) |0 + e iφ 3 sin(θ 3 /2) |1 , we obtain I W (ρ) = 0, as expected. Also, for a general W state, |ψ 2 = 1 √ 3 [|100 + e iφ 1 |010 + e iφ 2 |001 ], we obtain I W (ρ) = 1. Overall, if we want to quantify interference for a general three-qubit state, we just need to sum the quantifiers for GHZ-like and W -like states, which are independent of each other, i.e., quantify different three-qubit classes of multiparticle interference. In this form, the general quantifier is: with I GHZ (ρ) and I W (ρ) given by the relations of Eqs. (26) and (39), respectively. The subscript 3 and superscript 2 in I 3 (ρ) stand for the number of parties and the dimension of the Hilbert space of each party, respectively. According to the tests realized with this general interference quantifier, we observed that I GHZ (ρ) > 0 and I W (ρ) = 0 for all GHZ-like states, and I GHZ (ρ) = 0 and I W (ρ) > 0 for all W -like states, as expected. 
As an example of our quantum interference quantifier applied to three-qubit mixed states, we consider the case of the Werner-GHZ state. This state is defined as a mixture of a GHZ state and completely depolarized noise [29,34,35]: with 0 ≤ p ≤ 1. In Fig. 5 we compare our results of quantum interference using Eq. (40) with those of global quantum discord computed in Ref. [36] for this state. As can be seen, both quantum interference and discord are nonzero for p > 0. However, this state is known to be separable if p < 1/5 [27,37], and biseparable if p < 3/7 [38]. As such, this finding confirms that some separable states, as well as biseparable states, can exhibit three-particle interference. It is important to call attention to the fact that the quantum interference of the Werner-GHZ state, I IV. CONCLUSIONS AND REMARKS In conclusion, we have presented a new method of quantifying quantum correlations based on the multiparticle interference produced by a composite quantum system when each of the ) and global discord for the Werner-GHZ state of Eq. (41). As can be seen, both quantifiers provide nonzero values in the interval 0 ≤ p ≤ 1/5, in which the state is separable [27,37]. constituent subsystems is submitted to a double-slit apparatus. By means of the expressions of the single-and multiparticle interference patterns manifested in this scheme, we were able to write two formulas that quantifies the amount of quantum correlation for general two-and three-qubit systems. Remarkably, these expressions could be derived explicitly in terms of the density matrix elements, independent if the bipartite or tripartite state is pure or mixed. Interestingly, for the special case of pure states, we verified that our quantum interference quantifiers for two-and three-qubit systems could also work as entanglement quantifiers. In fact, applying Eqs. (12) and (40) to pure two-and three-qubit states, respectively, we verified that our quantifiers, I 2 (ρ) and I 3 (ρ) = 0 ifρ is a separable state: we verified this point by analyzing separable pure states in both cases. Moreover, despite not being a requirement, our quantifiers also satisfy the criterion of (iii) Normalization: I(|Φ d ) = log 2 d, where |Φ d is a maximally entangled state, with d as the dimension of the Hilbert space of each subsystem. On the other hand, by investigating our two quantifiers for the case of mixed Werner states, we observed that the amount of multiparticle quantum interference of some separable states is nonzero. It is important to call attention that these states were also demonstrated to have nonzero discord [25,26,36]. In this context, from the analyses realized with two-and three-qubit Werner states, one can see that the profiles of quantum discord and our definition of quantum interference are similar, principally in the first case, as can be seen in Fig 3. Remarkably, our results showed that entanglement is not a requirement for a composite quantum system to produce multiparticle interference, as widely accepted in the literature [8,13,14,19]. We believe the present method of quantifying quantum correlations through the study of the interference patterns produced by many-particle systems will advance our understanding of quantumness in composite systems and bring important insights into some central questions on the foundations of quantum theory. Particularly, we envisage that our approach to quantify quantum correlations can be extended to more than three qubits, or to higher dimensional systems.
Plasma generation in the arc discharge with a thermionic cathode in current stabilization conditions

The paper investigates plasma generation in a PINK plasma generator with thermionic and hollow cathodes under stabilized discharge current. It is shown that under these conditions the current of thermionic electrons may not close completely through the external circuit. The current of electrons emitted from the thermionic cathode mostly closes inside the device, thus increasing the thermal and current load on the electrodes under such conditions. In this regime, the current direction in the hollow cathode circuit may change, which has no effect on the external circuits. It is further shown that the redistribution of currents in the electrode circuits depends on the discharge conditions (including thermionic cathode heating current and voltage, as well as operating pressure and type of working gas), which, in turn, influence the instantaneous potentials of the cathode electrodes. Studies of the main plasma characteristics in different phases of the thermionic cathode heating current and under different plasma generation conditions have shown that changing the current direction in the hollow cathode circuit has an insignificant influence on the instantaneous values of the main plasma parameters; however, plasma generation is not optimal under these conditions.

Introduction

Plasma generators based on a non-self-sustained discharge with a thermionic cathode have long been successfully used in science and industry [1,2]. Extensive use of this type of discharge is ensured by its operation at discharge currents varying from fractions of an ampere to hundreds of amperes, which provides a wide adjustment range of plasma density. The presence of a thermionic cathode allows the plasma concentration to be increased at relatively low discharge voltages in the absence of the microdroplet phase. It also becomes possible to control the discharge current independently of its voltage and of the pressure in the working chamber. The operating pressure of such systems ranges from 0.01 Pa to 5 Pa and allows efficient ion cleaning and ion-plasma nitriding to be realized, as well as ion-plasma assistance during vacuum arc or magnetron deposition of coatings. Due to separate discharge generation areas in such systems, it is possible to independently adjust the current densities of gas and metal ions and, therefore, independently control both the intensity of ion bombardment and the stoichiometric composition of the obtained coating. At the same time, the kinetics of the film growth changes under the influence of a large number of gas ions during coating deposition. All these factors together underlie the potentially widespread use of the thermionic cathode discharge in dense low-temperature plasma generators. The Institute of High Current Electronics SB RAS designed a PINK plasma generator that is based on a combination of thermionic and hollow cathodes [3,4]. The PINK generator demonstrates high efficiency in generating bulk gas plasma in various technological applications. At present, the efficiency of using plasma sources depends not only on the parameters of the produced plasma and the overall quality of the obtained structures, but also on the power supply systems and modes utilized. The paper investigates plasma generation in the PINK generator equipped with a power source that stabilizes the discharge current.
Experimental The experiments were carried out in a vacuum setup NNV-6.6-I1 that was retrofitted with a coaxial PINK plasma generator mounted on the upper flange of the working chamber of 600×600×600 mm. The chamber was pumped out with a turbo-molecular pump with the pumping capacity of 500 ls -1 . (figure 1). A power supply source with a stabilized output current 5-100 A and an operating voltage of up to 75 V equipped with an arc suppression system was used to power the discharge. The inner walls of the vacuum chamber were the anode of a non-self-sustained arc gas discharge with a thermionic cathode. Four W-shaped tungsten filaments 1 mm connected in parallel were used as a thermionic cathode. The thermionic cathode was heated from an AC power source of 50 Hz with an autotransformer control. The total discharge current (I d ), the current in the hollow cathode circuit (I h ), the main discharge voltage (U d ) and the potential of the free end of the thermionic cathode (U t ) were directly measured during the experiments. The current in the thermionic cathode circuit (I t = I d -I h ) and the thermionic cathode heating voltage (U w = U d -U t ) were calculated from these the above values. The heating current of the tungsten cathode (I w ) was directly measured by current clamps S-Line M266 without the main discharge switching on, which excluded the influence of the discharge current (I t ) closing through the thermionic cathode circuit in the final measurement. The remaining values were controlled with a Techtronix TDS2024C oscilloscope; Hall sensors HONEYWELL CSNR161 were used to current measurement. In addition to oscillography the main discharge parameters have been produced by probe plasma studies. Instantaneous currents and potentials of a single Langmuir cylindrical probe powered from a separate source were fixed using a device of an original design [5]. The walls of the working chamber (anode) were used as a supporting electrode. The probe with dimension of  0.4×4 mm was made of tungsten and located at the half-height of the working chamber (150 mm from the external end of the hollow cathode) and was perpendicular to the plasma generator axis. The current-voltage characteristics of the probe were measured in a predetermined phase of the thermionic cathode heating voltage U h . For each CVC of the probe 1000 points were recorded. Instantaneous values of plasma parameters in different phases of the thermionic cathode heating voltage were obtained due to external synchronization of the probe CVC measures and the possibility to specify the delay. Results and discussions Previous studies of plasma generation in a PINK generator with a non-stabilized discharge power source showed that main discharge current mostly close on the thermionic cathode when the discharge voltages lower than  65 V, which causes amodulation of the output current and discharge voltage by the heating voltage of the thermionic cathode [4]. However, under thestabilized discharge current I d , the oscillograms look different. The principle of output current stabilizing is based on the output voltage (U d ) amplitude decreasing when the monitored current (I d ) exceeds some set value and U d increasing when I d getting lower the set value. The discharge current, however, remains almost unchanged over time and the voltages on the electrodes (U d and U t ) change with a period that corresponds to the period of the thermionic cathode heating voltage (U w ) (figure 2, all voltages on the oscillograms are inverted). 
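Current-voltage characteristics of this kind are typically reduced to plasma parameters with standard single-probe analysis: the electron temperature from the slope of ln(I_e) versus probe voltage in the electron-retarding region, and the plasma concentration from the electron saturation current. The Python sketch below illustrates such a reduction; the probe area, the fitting windows, and the neglect of the ion current are illustrative assumptions and are not taken from the measurements reported here.

import numpy as np

E = 1.602e-19      # elementary charge, C
ME = 9.109e-31     # electron mass, kg

def probe_parameters(v, i_probe, a_probe, exp_window, sat_window):
    # v, i_probe : probe voltage (V) and current (A) arrays
    # a_probe    : probe collecting area (m^2)
    # exp_window : (v_lo, v_hi) of the exponential (electron-retarding) region
    # sat_window : (v_lo, v_hi) of the electron-saturation region
    # Ion current is assumed negligible in the chosen windows (illustrative).
    m = (v >= exp_window[0]) & (v <= exp_window[1]) & (i_probe > 0)
    slope, _ = np.polyfit(v[m], np.log(i_probe[m]), 1)   # slope = 1 / T_e[eV]
    te_ev = 1.0 / slope
    s = (v >= sat_window[0]) & (v <= sat_window[1])
    i_sat = np.mean(i_probe[s])                           # electron saturation current
    # I_sat = e * n_e * A * sqrt(k*T_e / (2*pi*m_e)); with T_e in eV, k*T_e = e*T_e
    n_e = i_sat / (E * a_probe * np.sqrt(E * te_ev / (2 * np.pi * ME)))
    return te_ev, n_e

# Lateral area of a cylindrical probe with the 0.4 mm x 4 mm dimensions quoted above:
a_probe = np.pi * 0.4e-3 * 4e-3   # m^2, illustrative choice of collecting area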
The currents in the electrode circuits (I t and I h ) can also have a complex appearance but most of the discharge current always closes on the thermionic cathode (I t > I h ). It can be seen that when the discharge current stabilizes, the currents in the cathode circuit (I t ) can remain unchanged in time (figure 2b),can significantly change over the period of the thermionic cathode heating voltage (U w ) while remaining unchanged in the direction (figure 2c, 2d, 2f); otherwise the current in the hollow cathode circuit (I h ) changes not only its amplitude, but also the direction while the current in the thermionic cathode circuit (I t ) exceeds the discharge current (I d ) ( figure 2a, 2e). Changes in the current direction in the hollow cathode circuit can be explained by the fact that the current to the hollow cathode consists not only of ions coming from the plasma (and, accordingly, the  -electrons emitted from its surface), which can be designated as I ha (figure 3), but also from a certain reverse electron current (I th ) that closes on the hollow cathode when the voltage amplitude U t exceeds U d . Thus, the current in the thermionic cathode circuit (I t ) should also be represented as the sum of the currents closing through the plasma on the hollow cathode (I th ) and the anode (I ta ). The ratios of these currents depend on the discharge conditions such as the type of gas, operating pressure, thermionic cathode heating current and voltage, the amplitude of the stabilized discharge current, etc. At a sufficiently high thermionic cathode heating current 130 A and, accordingly, heating voltage up to 20 V, the discharge voltage drops to values below 15 V (figure 2a), which is close to the argon ionization potential. As a result, when the hollow cathode potential is 18 V and a thermionic cathode potential exceeds the hollow cathode potential by 5 V, the amplitudes of currents I ha and I th become equal (i.e. the current in the hollow cathode circuit I h becomes 0) and the current in the thermionic cathode circuit (I t ) becomes equal to the total discharge current (I d ). It should be noted that the conversion of I h to 0 does not mean the conversion of the current of plasma-emitted ions (I ha ) to 0 since the plasma in the cathode cavity continues to be generated, which is confirmed by the emission from the thermionic cathode, which is impossible without plasma at the applied voltages (up to 60 V). Under these conditions, the energy of electrons accelerated from the thermionic cathode increases and the potential barrier near the hollow cathode, which rejects electrons, decreases. Under the conditions when the accelerated electrons do not have time to lose their energy on collisions with the operating gas inside the cavity, an increase of the current I th is observed. A further increase of the voltage U w leads to a decrease of the hollow cathode potential and an increase of the current I th ; therefore, the current direction in the hollow cathodecircuit changes. When the amplitude of the stabilized main discharge current reduces (figure 2e), all the above effects manifest themselves even more clearly to the extent when the hollow cathode potential almost matches the anode potential (U d  0). In this case, not only electrons accelerated in the cathode layer near the thermionic cathode, but also plasma electrons produced in the cavity by ionizing the working gas can close on the hollow cathode. 
The excess of the current I t over the discharge current I d indicates a large loss of accelerated thermal electrons on the walls of the hollow cathode and is undesirable. An increase in the electrons free path with the operating pressure decreasing is partially compensated by 2c). As a result, the average plasma concentration inside the hollow cathode remains the same as evidenced by the almost constant current I h when U d exceeds U t (i.e. the current in the hollow cathode circuit is completely determined by I ha ). An increase of the discharge voltage (U d ) reduces the probability of electrons accelerated from the thermionic cathode reaching the hollow cathode (i.e. I th decreases) when U t exceeds U d . Reducing the thermionic cathode heating current by reducing its voltage leads to a decrease in the thermionic cathode temperature and, accordingly, to a decrease in the thermal emission current (which largely determines the current I t ). This leads to an increase in the discharge voltage U d when the discharge current is stabilized (figure 2b). It can be seen that the potential of the cathode electrodes does not fall below 35 V in this case. This is sufficient to ensure the absence of a significant effect of the hollow and thermionic cathode potentials on the currents in their circuits at the operating pressure of argon of 1 Pa. The use ofnitrogen instead of argon results in a significant change in the shape of the cathode voltages (figure 2f). Since the ionization cross section of nitrogen is lower than that of argon, and, on the contrary, the probability of inelastic collisions of electrons with nitrogen moleculesis higher, the ion energy rate of nitrogen is higher than that of argon. Ceteris paribus, this leads to an increased voltage of a non-self-sustained arc discharge with a thermionic cathode in the nitrogen atmosphere. The oscillograms of the hollow cathode voltage U d demonstrate a shelf  30 V when the thermionic cathode potential (U t ) exceeds the hollow cathode potential (U d ) and the current in its circuit (I h ) is close to zero. It is known that the potential of an object placed in plasma, when the currents of its ionic and electronic components are equal, is called floating. Thuswhen the current in the hollow cathode circuit (I h ) turns to zero, i.e I ha = I th , its potential is a floating potential. It can be seen (figure 2f) that under the given conditions, it remains constant regardless of the current in the thermionic cathode circuit (I t ) and its potential (U t ), which at this moment is the discharge voltage. The behavior of the cathode potentials under these conditions when the potential of the hollow cathode exceeds the potential of the thermionic cathode is of interest. A decrease in U t down to  30 V leads to a decrease in the efficiency of nitrogen ionization, which in turn requires the discharge power supply to raise the output voltage (U d ) in order to stabilize the discharge current, which leads to an increase in both cathode potentials. It should be noted that a similar behavior of the cathode potentials is observed when working with argon at p Ar = 0.6 Pa and I w = 120 A (figure 2d); however, it is manifested to a less extent. Instantaneous plasma parameters (table 1) for all discharge conditions were investigated when the thermionic cathode heating voltage (U w ) was equal to zero (sync "0", 0 ms in figure 2), which corresponds to the equality of the potentials of the thermionic and hollow cathodes. 
For the cases when a significant repolarization of the current I h was observed, current-voltage characteristics of the probe were additionally recorded when the heating voltage U w was at its maximum (sync "-", -5 ms in figure 2) and minimum (sync "+", +5 ms in figure 2) in order to determine the possible changes in plasma parameters over a period of the heating voltage. The plasma concentration was determined from the electron saturation current. It can be seen (table 1) that a significant redistribution of currents in the electrode circuits under a stable discharge current and an argon pressure of 1 Pa does not lead to reliable changes in the plasma parameters. Only a slight increase in the floating potential at the "-" synchronization can be noted. Against the background of a constant temperature of the main group of plasma electrons, this may indicate the appearance or amplification of a second group of electrons in the plasma generated by the device under study due to a high current of accelerated thermionic electrons [6], which is amplified at the maximum thermionic cathode heating voltage. The decrease in argon pressure, as well as the replacement of argon with nitrogen, leads to a slight increase in the electron temperature. This can be explained by an increase in the free path of electrons accelerated from the thermionic cathode and an increase in their energy when operating in argon, or by the increased energy cost of ion production when operating in nitrogen. At the same time, the floating potential increases and the plasma concentration slightly decreases, which is due to the necessity to preserve the anode electron current density from the plasma with an increase in the electron temperature. Reducing the thermionic cathode heating current and, therefore, the heating voltage leads to a noticeable increase in plasma concentration without changing the electron temperature and the floating potential. This is primarily due to an increase in the discharge voltage. With a constant discharge current, this leads to an increase in the energy imparted to thermionic electrons in the cathode layer. The constant electron temperature in the plasma indicates that the primary electrons have time to lose their energy when moving to the probe. Accordingly, more energy is spent on inelastic collisions with the gas, including ionization. At constant operating pressure, this leads to an increase in plasma concentration. Thus, the optimization of the thermionic cathode heating current and voltage leads to an increase in the efficiency of gas plasma generation in the device under study. A decrease in the discharge current leads to a significant decrease in plasma concentration due to a decrease in the number of gas ionization events; the repolarization of the current I h does not lead to a significant change in the plasma parameters. Conclusion The research has found that the use of a discharge power source that enables current stabilization in a PINK plasma generator can lead to a significant redistribution of currents in the circuits of the thermionic and hollow cathodes, up to a change in the direction of the latter. Such a redistribution depends on the instantaneous ratio of the potentials of the thermionic and hollow cathodes, as well as on the absolute potential of the hollow cathode (the discharge voltage), which affect how efficiently electrons ejected from the thermionic cathode reach the hollow cathode.
It has been further established that there is no significant change in the plasma parameters when the direction of the current in the hollow cathode circuit changes. However, this operation mode of the plasma generator is not optimal because of both the increased thermal and current loads on the electrodes, which are not reflected in the external circuit, and the reduced efficiency of plasma generation.
A Novel pH-Regulated, Unusual 603 bp Overlapping Protein Coding Gene pop Is Encoded Antisense to ompA in Escherichia coli O157:H7 (EHEC) Antisense transcription is well known in bacteria. However, translation of antisense RNAs is typically not considered, as the implied overlapping coding at a DNA locus is assumed to be highly improbable. Therefore, such overlapping genes are systematically excluded in prokaryotic genome annotation. Here we report an exceptional 603 bp long open reading frame completely embedded in antisense to the gene of the outer membrane protein ompA. An active σ70 promoter, transcription start site (TSS), Shine-Dalgarno motif and rho-independent terminator were experimentally validated, providing evidence that this open reading frame has all the structural features of a functional gene. Furthermore, ribosomal profiling revealed translation of the mRNA, the protein was detected in Western blots and a pH-dependent phenotype conferred by the protein was shown in competitive overexpression growth experiments of a translationally arrested mutant versus wild type. We designate this novel gene pop (pH-regulated overlapping protein-coding gene), thus adding another example to the growing list of overlapping, protein coding genes in bacteria. INTRODUCTION Due to the nature of the genetic triplet code, six reading frames exist on the two strands of a DNA molecule. Two genes encoded by two different reading frames (ORFs) at the same DNA locus are termed "non-trivially overlapping genes" (OLGs) if the area of sequence overlap is substantial (at least 90 base pairs) and both reading frames encode a protein. Such overlapping genes were discovered in bacteriophage φX174 by Barrell et al. as early as 1976(Barrell et al., 1976. Today, the existence of protein coding OLGs is accepted in viruses, although the evolutionary pressures behind the development of gene overlaps are still debated. Theories about size constraint of the genome in the viral capsid, gene novelty, and evolutionary exploration have been discussed (Chirico et al., 2010;Brandes and Linial, 2016). In contrast, most overlaps reported in bacterial genomes are very short; the majority being only 1 or 4 bp in same-strand orientation, and we term these trivially overlapping genes. Such very small overlaps seem to increase fitness (e.g., Saha et al., 2016) which might be explained by the translational coupling of expression of the overlapping genes. Due to requiring only a small-scale slippage of the ribosome, mediated by the short overlap, translation is faster and highly efficient in contrast to the conventional translation process which includes dissociation of the ribosome after translation of the upstream gene and time consuming re-association to the downstream ORF of the mRNA (Johnson and Chisholm, 2004). Very little work has been devoted to the exploration of long overlapping reading frames in prokaryotes, where one ORF is embedded completely in the other ORF (Rogozin et al., 2002;Ellis and Brown, 2003). As bacterial genomes are typically much larger than those of viruses, the original hypothesis suggesting a selection pressure associated with the evolution of overlapping genes in viruses due to an increase of the coding capacity in size-restricted genomes (Normark et al., 1983) has been assumed to be invalid for prokaryotes. 
In line with this assumption, overlapping genes are systematically excluded in prokaryotic genome annotations (e.g., Warren et al., 2010), which is certainly one reason for the lack of knowledge about such amazing gene constructs in bacteria. Nevertheless, statistical analysis of bacterial genomes has shown that ORFs overlapping annotated genes in alternative reading frames are longer than expected, leading to the hypothesis of a potential selection pressure due to overlapping protein-coding genes (Mir et al., 2012). Besides this, functionality of at least a few non-trivially overlapping genes has been demonstrated (e.g., Behrens et al., 2002;Balabanov et al., 2012). It is assumed that overlapping genes originated by overprinting of existing, annotated genes (Sabath et al., 2012) and may constitute an evolutionarily young part of the functional genome of bacteria (Fellner et al., 2014(Fellner et al., , 2015. In contrast to older genes with highly conserved and essential functions, young overlapping genes appear to have weak expression (Donoghue et al., 2011) and their protein functions are suggested to be not essential (Chen et al., 2012). Therefore, the task of functionally characterizing OLGs is challenging. In order to capture weak and condition-specific phenotypic effects caused by the weak expression of non-essential overlapping genes, sensitive methods are necessary (Deutschbauer et al., 2014). We study non-trivially overlapping genes in the human pathogenic bacterium Escherichia coli O157:H7 (EHEC). Its genome is well characterized, especially with respect to virulence and the associated diseases like enterocolitis, diarrhea, and hemolytic uremic syndrome (Lim et al., 2010;Stevens and Frankel, 2014;Betz et al., 2016). Nevertheless, the coding capacity of EHEC's genome is likely to be significantly underestimated, both regarding short intergenic genes (Neuhaus et al., 2016;Hücker et al., 2017) and non-trivially overlapping genes (Hücker et al., 2018a,b;Vanderhaeghen et al., 2018). Additionally, using a variety of different next generation sequencing based methods (e.g., RNAseq, Cappable-seq, ribosome profiling) evidence for widespread antisense transcription has accumulated (Conway et al., 2014). In particular, ribosome profiling has been shown to be a powerful technique to investigate the translated part of an organisms' transcriptome with high precision, through deep sequencing of ribosome-protected mRNA fragments (Ingolia et al., 2009;Hwang and Buskirk, 2016;Nakahigashi et al., 2016). Furthermore, variations of this method were developed to resolve specific features of translation, such as alternative translation initiation sites, translational pausing or translation termination (Woolstenhulme et al., 2015;Baggett et al., 2017;Meydan et al., 2019). Based on such techniques, surprising additional complexity of the bacterial translatome has been uncovered. In particular, findings of putatively translated antisense RNAs could be very significant with respect to overlapping genes (Meydan et al., 2019). Nevertheless, the specificity of the signals found in all NGS experiments needs to be assessed and differentiated from a potentially pervasive background translation, i.e., undirected binding of ribosomes to RNAs (Ingolia et al., 2014). It was reported that pervasive translation initiation sites in bacteria predominantly lead to short translation products with an uncertain functionality status (Smith et al., 2019). 
However, the metabolic cost of pervasive translation would be high and cells should be driven to minimize such costly side reactions. To gather further evidence for an overlapping coding potential, individual overlapping genes have to be characterized in detail. Such research is in its infancy in bacteria. Here, we report on a functional analysis of the unusually long, non-trivially overlapping gene pop from E. coli O157:H7 strain EDL933, which is fully embedded in antisense to the annotated gene of the outer membrane protein ompA. OmpA is highly conserved among proteobacteria and represents the major outer membrane protein in E. coli with about 100,000 copies per cell (Koebnik et al., 2000). Extensive studies led to the discovery of the β-barrel structure of OmpA (Vogel and Jähnig, 1986) as well as diverse functions of this protein, such as a porin function (Arora et al., 2001) and a local cell wall stabilizing action through interaction of OmpA with TolR (Boags et al., 2019). Fisher Scientific). Vector constructs were transformed in E. coli Top10 cells and plated on LB with required antibiotics. Plasmids were isolated (GenElute Plasmid Miniprep Kit, Sigma Aldrich, St. Louis, MO, United States) and sequenced with suitable primers (Eurofins Genomics, Ebersberg, Germany) to verify the sequence. Creation of Translationally Arrested Knock-Out Mutants The genomic knock-outs E. coli O157:H7 EDL933 pop and E. coli O157:H7 EDL933 pop v2 were produced for subsequent competitive growth experiments. The method was adapted from Fellner et al. (2014). Mutation fragments were amplified with primer pairs 1 + 6 and 2 + 5 for the knock-out pop. For the knock-out pop v2, primer pairs 3 + 7 and 4 + 5 were used. The fragments gained were used in the subsequent overlap extension PCR with primers 5 + 6 or 5 + 7, respectively. The resulting mutation cassettes, pop and pop v2, were cloned in the plasmid pMRS101 (Sarker and Cornelis, 1997) using ApaI/SpeI and ApaI/XbaI, respectively (selection with ampicillin). The plasmids pMRS101+ pop and pMRS101+ pop v2 were isolated and sequenced with primers 7 and 8, respectively. The following steps were performed for both plasmids: A restriction digest with NotI was conducted to remove the high copy ori. The plasmid was re-ligated to the π-protein dependent, low copy plasmid pKNG101+x (x denotes either insert pop or pop v2), whose maintenance relies either on cells expressing the pir gene, which enables replication, or on integration of the plasmid via homologous recombination -in case the cell does not express the pir gene (Kaniga et al., 1991). Plasmid propagation was performed in E. coli CC118λpir (selection with streptomycin). The conjugation strain E. coli SM10λpir was transformed with pKNG101+x. Overnight cultures (500 µl) of E. coli SM10λpir + pKNG101+x and E. coli O157:H7 EDL933 + pSLTS (selection marker ampicillin, temperature sensitive ori) were mixed and cultivated on LB plates (24 h, 30 • C) for conjugation and integration of the plasmid into the genome of EHEC through homologous recombination. Conjugated EHEC cells were transferred on LB/ampicillin/streptomycin plates and selectively cultivated (24 h, 30 • C). Correct insertion of the plasmid was confirmed by a PCR using primers 8 + 12 for pKNG101+ pop or 10 + 12 for pKNG101+ pop v2. A double-resistant strain was used for loop-out of the mutation plasmid. 
For this, conjugated EHEC + pSLTS was cultivated in LB at 30 • C at 150 rpm until an optical density of OD 600 = 0.5 and counter-selected on sucrose agar (modified LB without NaCl supplemented with sucrose) containing 0.02% arabinose to induce the λ red recombination system on pSLTS. Cells with integrated pKNG101+x express the enzyme levansucrase, encoded by the gene sacB, which catalyzes the hydrolysis of sucrose and synthesis of levans. It is proposed that these toxic fructose polymers accumulate in the periplasm of Gram-negative bacteria leading to cell death (Reyrat et al., 1998). Therefore, only sucrose-resistant cells, achieving the second recombination step, have lost the plasmid with its streptomycin resistance. PCR fragments of streptomycin sensitive clones produced with primers 8 + 9 and 10 + 11 for EHEC pop and EHEC pop v2, respectively, were sequenced to verify integration of the desired mutations into the chromosome. E. coli O157:H7 EDL933 pop and E. coli O157:H7 EDL933 pop v2 were cultivated at 37 • C to cure the cells from the plasmid pSLTS. Cloning of pBAD+pop and pBAD+ pop for Overexpression Phenotyping For overexpression competitive growth testing, plasmids pBAD+pop and pBAD+ pop were constructed. For the former construct, primers 14 + 15 were used. The latter construct was created similarly to the mutation cassette described in the previous section (i.e., primers for the mutation fragments are 1 + 14 and 2 + 15; primers for the mutation cassette are 14 + 15). Both PCR fragments, either wild type or mutant, were cloned in the NcoI and PstI sites of pBAD/myc-HisC and plasmids were sequenced with primers 16 + 17. Each of the plasmids was transformed in wild type E. coli O157:H7 EDL933 for subsequent competitive growth assays. Competitive Growth Assays For competitive growth, overnight cultures of EHEC transformants containing pBAD+pop or pBAD+ pop were diluted to OD 600 = 1 and mixed in equal amounts. Plasmids were isolated from the bacteria mixture and used as time point zero reference. One hundred microliters of a 1:300 dilution of the initial 1:1 bacteria mixture was used to inoculate 10 ml culture medium with appropriate additives (for working concentration of chemicals see Supplementary Table S2; selection marker ampicillin for plasmid maintenance). Overexpression of pop and pop cloned on pBAD was induced with L-arabinose (0.02%) added at the two time points t 0 = 0 h and t 1 = 6.5 h. Plasmids were isolated after t 2 = 22 h and sequenced with primer 16. The competitive index, based on t 0 of the mixture pop wild type and pop mutant expressing cells, was calculated. For this, the peak heights (fluorescence signals in Sanger sequencing) of mutated and wild type base at the mutated position were measured. The CI values were calculated according to this formula: CI = (Mt x /Wt x )/(Mt t 0 /Wt t0 ) with Wt and Mt the peak heights of wild type and mutant plasmid, respectively, in stress condition or reference condition t 0 . Mean values and standard deviations of at least three biological replicates were calculated. Significance of a possible growth phenotype was tested with a paired t-test between CI values of the time point t 0 reference and the cultured samples (p-value ≤ 0.05). Competitive growth of wild type EHEC and translationally arrested mutants E. coli O157:H7 EDL933 pop or E. 
coli O157:H7 EDL933 pop v2 was conducted and evaluated as described above with some exceptions: no selection marker was used; no protein expression was induced; cells were harvested after t x = 18 h; peak heights were determined in t 0 and cultured samples by sequencing PCR products amplified from cell lysates with primers 8 + 9 or 10 + 11 for pop or pop v2 used in competitive growth, respectively (primer for sequencing: 8 or 11). Copy Number Estimation Overnight culture of E. coli O157:H7 EDL933 with either pBAD+pop (pop sample) or pBAD+ pop ( pop sample) were diluted to OD 600 = 1. Diluted cultures (1:300) were used to inoculate 10 ml LB, LB + malic acid or LB + bicine (for working concentration of chemicals see Supplementary Table S2; selection marker ampicillin for plasmid maintenance). Transcripts of pop and pop were induced as described for the competitive growth assay. DNA (genomic and plasmid) was isolated after growth of 22 h using phenol/chloroform/isoamyl alcohol (Carl Roth, Karlsruhe, Germany). For this, cultured cells were pelleted and resuspended in 700 µl Tris/EDTA (pH 8) and disrupted with bead beating (0.1 mm zirconia beads) using a FastPrep (three-times at 6.5 ms −1 for 45 s, rest 5 min on ice between the runs). The cell debris was removed after centrifugation (5 min, 16.000 × g, 4 • C). Nucleic acids in the supernatant were extracted with 1 Vol phenol/chloroform/isoamyl alcohol twice (vigorously shaking, 5 min, 16.000 × g, 4 • C) and precipitated using 2 Vol 100% EtOH and 0.1 Vol 5M NaOAc at −20 • C for at least 30 min. After centrifugation (10 min, 16.000 × g, 4 • C), the cell pellet was washed twice with 1 ml 70% EtOH (incubation 5 min at room temperature, centrifugation 5 min, 16.000 × g, 4 • C). The dried pellet was rehydrated with an appropriate amount of water. RNA was digested using 0.1 Vol of RNase A (Thermo Fisher Scientific) and DNA was recovered by phenol/chloroform/isoamyl alcohol isolation as before. Genomic and plasmid DNA was relatively quantified in biological and technical triplicates by qPCR using a genomic specific primer pair amplifying a 105 bp long fragment of the siroheme synthase gene cysG (primer 34 + 35, Zhou et al., 2011) and plasmid specific primers amplifying a 101 bp long fragment of the β-lactamase gene bla (primer 36 + 37, Roschanski et al., 2014). DNA samples were used at a concentration of 100 ng/µl. Amplification cycle differences were calculated for each of the culture conditions [ Cq(cysG-bla)] for pop and pop DNA samples. The ratio of condition specific Cq values for pop/ pop samples was calculated to estimate the deviation of copy numbers in cells overexpressing either of the plasmids. Statistically significant differences of copy number ratios between t 0 and each cultured sample was tested for with a paired two sample t-test (p-values ≤ 0.05). Construction of an Overexpression Plasmid and Western Blot The plasmid pBAD/myc-HisC, which codes for the peptide tags myc and 6xHis, was modified to obtain the overexpression plasmid pBAD/SPA with the SPA-tag instead (sequential peptide affinity tag, dual epitope tag, consists of calmodulin binding peptide and 3xFLAG-tag separated by a TEV protease cleavage site, Zeghouf et al., 2004). For this, primers 19 + 20 were annealed (heating at 90 • C, slow cooling) and completed in a PCR where primers 21 + 22 were added after 5 cycles to amplify the fragment. This PCR product was cloned into pBAD/myc-HisC using SalI and HindIII restriction enzymes. 
This resulted in an excision of the myc-epitope and in-frame insertion of the SPA-tag. The sequence of pop was cloned next after amplification with primers 14 + 18 in the NcoI and HindIII sites of pBAD/SPA. The plasmid pBAD/SPA+pop was sequenced with primers 16 and 17 for verification and transformed into E. coli O157:H7 EDL933. Overexpression was performed in LB medium and bicinebuffered LB medium. Cells were cultivated and protein production was induced with 0.002% arabinose when an optical density of OD 600 = 0.3 was reached. Cells were harvested right before induction (uninduced control) and at time points 0.5, 1, 1.5, 2, 2.5, 3, and 4 h after induction. The cell volume harvested was adjusted to achieve the same OD 600 for all samples regarding uninduced cells (OD 600 = 0.3). Whole cell lysates were prepared by adding 50 µl SDS sample buffer (2% SDS, 2% β-mercaptoethanol, 40% glycerin, 0.04% Coomassie blue G250, 200 mM tris/HCl; pH 6.8) and heating at 95 • C for 10 min. Proteins in 10 µl of the lysates were separated on a 16% tricine gel prepared according to Schägger (2006), and detected afterward in a Western blot. For this purpose, proteins were blotted semidry (12 V, 20 min) on a PVDF membrane (PSQ membrane, 0.2 µm, Merck Millipore, Burlington, Massachusetts, United States). After incubating the membrane 5 min in 3% TCA, it was blocked with non-fat dried milk at 4 • C. After three washing steps (TBS-T), the membrane was incubated in a 1:1000 dilution of ANTI-FLAG R M2-Alkaline Phosphatase antibody (Sigma Aldrich), which binds the FLAG epitope of SPA-tagged proteins, in TBS-T. SPA tagged proteins were visualized with BCIP/NBT. Determination of Promoter Activity by a GFP Assay The promoter sequence of pop was amplified with primers 23 + 24. The product was cloned N-terminally into the promoterless GFP-reporter plasmid pProbe-NT using restriction enzymes SalI and EcoRI resulting in pProbe-NT+promoter-pop. The promoter sequence was verified by sequencing the plasmid with primer 25. The promoter activity was measured in E. coli Top10. For this, 10 ml LB with the appropriate additive (for working concentration of chemicals see Supplementary Table S2; selection marker kanamycin) was inoculated 1:100 with overnight cultures of E. coli Top10, E. coli Top10 + pProbe-NT, and E. coli Top10 + pProbe-NT+promoter-pop and cultivated up to OD 600 = 0.6. An appropriate number of cells were harvested, washed once and afterward resuspended in 1xPBS. Fluorescence of 200 µl cell suspension was measured in four technical replicates (Victor3, Perkin Elmer, excitation 485 nm, emission 535 nm, measuring time 1 s). Self-fluorescence of cells was subtracted. Mean values and standard deviation of three independent biological replicates were calculated. Statistically significant differences in the fluorescence of promoter construct and empty plasmid or between promoter constructs in different growth conditions were determined using the Welch two sample t-test (p-value ≤ 0.05). RNA Isolation RNA was isolated from exponentially grown EHEC cultures (OD 600 = 0.3 in LB, LB + L-malic acid, LB + bicine) using Trizol Reagent (Thermo Fisher Scientific). Cell pellets were resuspended in 600 µl cooled Trizol and disrupted with bead beating (0.1 mm zirconia beads) using a FastPrep (3-times at 6.5 ms −1 for 45 s, rest 5 min on ice between the runs). Cooled chloroform (120 µl) was added, mixed vigorously and incubated 5 min at room temperature. 
Phases were separated by centrifugation for 15 min (4 °C, 12000 × g) and total RNA in the aqueous upper phase was precipitated with isopropanol, NaOAc and glycogen (690, 27, and 1 µl, respectively) at −20 °C for 1 h. RNA was pelleted by centrifugation for 10 min and washed twice with 80% ethanol. Air-dried RNA was dissolved in an appropriate volume of RNase-free H2O. DNase Digestion DNA in RNA samples was digested with Turbo DNase (Thermo Fisher Scientific) according to the manufacturer's instructions. The reaction was stopped with 15 mM EDTA and heating for 10 min at 75 °C. Digested RNA was precipitated with isopropanol, NaOAc and glycogen (690, 27, and 1 µl, respectively) at −20 °C overnight. After centrifugation (20 min, 12000 × g), the pellet was washed once with 80% ethanol. Air-dried RNA was dissolved in an appropriate volume of RNase-free H2O. Successful DNA depletion was verified with a standard PCR using Taq polymerase (NEB) and primers 26 + 27 binding to the 16S rRNA genes. cDNA Synthesis and RT-PCR DNA-depleted total RNA (500 ng) was used for cDNA synthesis with SuperScript III reverse transcriptase (Invitrogen, Thermo Fisher Scientific) according to the manufacturer's instructions, using 50 pmol random nonamer primer for 16S rRNA reverse transcription (Sigma Aldrich) or 10 pmol gene-specific primers for pop reverse transcription, as indicated. SUPERase In RNase Inhibitor (20 U/µl, Invitrogen) was added as well. "No RT" controls contained all components apart from the reverse transcriptase. For RT-PCR, 1 µl of the cDNA sample was used in a standard PCR using Taq polymerase (NEB) with 20 cycles for product amplification using the primer pairs indicated. Binding of primers was verified in a PCR with genomic DNA as template (not shown). Quantitative PCR (qPCR) Relative quantification of pop RNA and 16S rRNA based on cDNA (reverse transcribed with primer 8 and random nonamer primer, respectively) was conducted by qPCR using the SYBR Select Master Mix (Applied Biosystems). The reactions contained 12.5 µl master mix, 0.5 µl of forward and reverse primer (50 µM) and 1 µl cDNA in a total volume of 25 µl. Amplification of pop and 16S rRNA was performed with primers 8 + 9 and 26 + 27, respectively. The reaction conditions were as follows: 95 °C (5 min, initial denaturation), then 40 cycles of denaturation, annealing and elongation at 95 °C (15 s), 61 °C (30 s), and 72 °C (30 s). Finally, a melting curve was acquired for quality control of the amplification products (61 °C to 95 °C in 0.5 °C steps for 5 s). qPCR was performed in three biological replicates in each condition (LB, LB + L-malic acid, and LB + bicine) with three technical replicates for every sample. A no-RT control was included for all samples to verify specificity of the amplification from cDNA (e.g., to exclude DNA contamination). pop mRNA was quantified with the ΔΔCq method using 16S rRNA as reference (Pfaffl, 2001). Statistical significance was calculated by means of a one-tailed Welch two sample t-test (p-value ≤ 0.05).
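To make the relative quantification step concrete, the following is a minimal sketch of a ΔΔCq-style fold-change calculation of the kind described above (pop normalized to 16S rRNA, a stress condition compared to plain LB). It assumes an amplification efficiency of 100% (a factor of 2 per cycle); the efficiency-corrected model of Pfaffl would use measured, primer-specific efficiencies instead. All Cq numbers in the example are invented placeholders, not measured values.

def fold_change(cq_target_cond, cq_ref_cond, cq_target_ctrl, cq_ref_ctrl):
    """2^-ddCq fold change of the target gene in a condition relative to the
    control, normalized to a reference gene (here: 16S rRNA)."""
    d_cq_cond = cq_target_cond - cq_ref_cond   # dCq in the stress condition
    d_cq_ctrl = cq_target_ctrl - cq_ref_ctrl   # dCq in the control (LB)
    dd_cq = d_cq_cond - d_cq_ctrl
    return 2 ** (-dd_cq)

# Placeholder Cq values (illustration only):
print(fold_change(cq_target_cond=24.7, cq_ref_cond=12.0,
                  cq_target_ctrl=26.0, cq_ref_ctrl=12.0))   # ~2.5-fold up

With measured amplification efficiencies, the single factor of 2 would be replaced by separate efficiency terms for the target and the reference gene, as in the Pfaffl model cited above.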
Promoter Determination The programs BPROM (Solovyev and Salamov, 2011) and bTSSfinder (Shahmuradov et al., 2017) were used to determine the promoter of pop. The input sequence for BPROM was 100 bp long and started 65 bp upstream of the identified TSS. The input for bTSSfinder needed to be longer; it spans 300 bp and starts 197 bp upstream of the TSS. BPROM specifies the promoter strength as a linear discriminant function (LDF), and a sequence with LDF = 0.2 indicates a promoter with 80% accuracy and specificity. bTSSfinder calculates scores based on position weight matrices for different sigma factors and accepts promoters scoring above the thresholds (0.06 for σ70). Terminator Analysis The program FindTerm (Solovyev and Salamov, 2011) was used to analyze 900 bp downstream of ompA for a rho-independent terminator (threshold −3). The 120 bp long terminator identified was split consecutively into 30 bp segments and all 91 sequences were folded with Mfold (Zuker, 2003) to identify the stem loop structure. Shine-Dalgarno Sequence Identification The presence of a Shine-Dalgarno sequence in the region 30 bp upstream of the start codon was analyzed according to Ma et al. (2002). A minimum of ΔG° = −2.9 kcal/mol is required for detection of a ribosome binding site. Ribosomal Profiling Analysis Ribosome profiling data of E. coli O157:H7 EDL933 [Neuhaus et al. (2017), samples in LB for two biological replicates, SRR5266618, SRR5266620], E. coli O157:H7 Sakai [Hücker et al. (2017), sample in LB, SRR5874484; files for the two separate biological replicates were kindly provided by Sarah Hücker] and E. coli MG1655 [Wang et al. (2015), samples in LB for two biological replicates; ERR618775, ERR618771] were downloaded from NCBI. Data for E. coli LF82 (GenBank accession: NC_011993) was produced in our lab according to the methods of Hücker et al. (2017) in Schaedler broth medium (anaerobic cultivation). Data evaluation was conducted as follows: adapters were trimmed with cutadapt (Martin, 2011) with a minimum quality score of 10 (-q 10) and a minimum length of 12 nucleotides (-m 12). The trimmed reads were subsequently aligned to the reference chromosome using bowtie2 (Langmead and Salzberg, 2012) in local alignment mode, with zero mismatches (-N 0) and a seed length of 19 (-L 19). Reads overlapping ribosomal RNAs and tRNAs were removed using bedtools (Quinlan and Hall, 2010). Read counts, RPKMs, and coverage were then calculated with respect to the filtered BAM files, using bedtools and a custom bash script. Stalled-ribosome profiling data from the E. coli strain BL21 was obtained from Meydan et al. (2019). The adapter sequence was predicted using DNApi.py (Tsuji and Weng, 2016), and adapter trimming, alignment, and removal of rRNAs and tRNAs were conducted as described above. The positions of all reads mapped to the forward strand were obtained using SAMtools (Li et al., 2009) and the "bamtobed" tool from BamTools (Barnett et al., 2011). Reads with predicted ribosomal p-sites within 30 nucleotides in each direction of an annotated forward-strand gene start codon ("start region") were extracted. Weakly expressed annotated genes with no single position (peak) represented by three or more reads, and also with at least four reads situated within the start region, were identified using a custom bash script, as a positive control for weak gene expression.
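As an illustration of the selection criterion just described (at least four p-site reads within 30 nucleotides of an annotated start codon, but no single position covered by three or more reads), here is a minimal Python sketch. It assumes that p-site positions have already been extracted into a plain list per gene; the original analysis used a custom bash script, so this is a re-expression of the stated rule, not the original code.

from collections import Counter

def is_weakly_expressed(psite_positions, start_codon_pos, window=30,
                        min_reads=4, max_peak=2):
    """Return True if a gene start region is 'weakly expressed':
    at least min_reads p-sites within +/- window nt of the start codon,
    and no single position carrying more than max_peak reads."""
    in_region = [p for p in psite_positions
                 if abs(p - start_codon_pos) <= window]
    if len(in_region) < min_reads:
        return False
    counts = Counter(in_region)
    return max(counts.values()) <= max_peak

# Illustration with made-up read positions around a start codon at 1000:
print(is_weakly_expressed([985, 992, 1003, 1020], start_codon_pos=1000))   # True
print(is_weakly_expressed([1000, 1000, 1000, 1020], start_codon_pos=1000)) # False (peak of 3)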
Localization of pop in the Context of the EHEC Genome and Its Expression The overlapping gene pop from E. coli O157:H7 (EHEC) EDL933 probably starts at genome position 1236020 (coordinates following the genome annotation of Latif et al. (2014), GenBank accession CP008957) and has a length of 603 bp (Figure 1 and Supplementary Figure S1). It is completely embedded in antisense to the coding sequence of the annotated, highly conserved outer membrane protein gene ompA (1065 bp). pop is located in frame −1 with respect to ompA (Figure 1A). Ribosome profiling of EHEC EDL933 revealed clear evidence of translation of this OLG in LB medium, which is reproducible across biological replicates (Figure 2A and Supplementary Table S3) and EHEC strains (Figures 2A-C, see below). Expression of ompA is about 150 times higher than that of pop, which is not surprising since OmpA is one of the most highly expressed proteins in E. coli (Ortiz-Suarez et al., 2016). The annotated gene ycbG (453 bp), encoding a macrodomain ter protein, is located upstream of pop. RPKM (reads per kilobase per million mapped reads) values of ycbG are on average three times higher than values of pop (Supplementary Table S3 and Figure 2D). However, the RPKM of pop in ribosome profiling of EDL933 (i.e., RPKM ≈ 60) is of the same order of magnitude as the median RPKM of all annotated genes with an RPKM of at least 10 (RPKM = 70 and RPKM = 63 for ribosome profiling experiments SRR5266618 and SRR5266620, respectively), supporting genuine expression of pop. In addition to the level of protein expression given by the ribosome profiling RPKM value, the ribosome coverage value (RCV) describes the "translatability" of a particular gene's messenger RNA, i.e., RCV = RPKM(translatome) / RPKM(transcriptome) (Hücker et al., 2017). For pop, the RCV is high, greater than 1 in a few instances. According to Neuhaus et al. (2017), transcripts with an RCV higher than 0.35 can be considered to be translated, while untranslated RNAs have a clearly lower RCV. Therefore, we propose that pop is translated in all pathogenic E. coli strains investigated. Notably, the RCV as a measure of the translation of an mRNA into protein is on average higher for pop than for the annotated upstream gene ycbG (Figure 2E and Supplementary Table S3). The region between ycbG and pop contains the transcription start site (TSS) and a σ70 promoter (Figure 1B, details further below). Two downstream ORFs, which are arranged in frames −1 and −2 with respect to ompA, are a little over 200 bp long and mostly overlap with ompA (Figure 1). Despite a downstream rho-independent terminator (Figure 1D, details further below), neither of these ORFs appears to be transcribed or translated to a major degree (Supplementary Table S3) and, therefore, we designate the two ORFs in the following simply as downstream ORFs. Upstream of the pop-ORF, we detected a Shine-Dalgarno sequence (ΔG° = −3.6 kcal/mol) and the rare start codon CTG nearby (position 1236020, Figure 1C, see also Supplementary Figure S1). Additional evidence for this CTG probably being the translation initiation site is found in recently published stalled-ribosome profiling data using the antibiotic retapamulin in the strain BL21 (Meydan et al., 2019). This antibiotic leads to an arrest of ribosomes starting biosynthesis in the region of translation initiation. Five reads are antisense to ompA, and all of these are clustered in the vicinity of the putative CTG start site of pop (Figure 3A). The read count observed would be unexpected if it were caused by random background translation. A very conservative calculation of the binomial probability gives P(x ≥ 5) = 0.016, indicating non-random clustering of reads antisense to ompA at the pop start site (Figure 3A). A comparison to weakly expressed annotated genes (selection described in the methods section) shows that the putative location of the pop translation initiation site is within the typical range for such genes (Figure 3B), and provides evidence locating the start codon within at most a few nucleotides of the predicted site.
Similarly, we find that pooled ribosome profiling data from EDL933, analyzed with the method of Meydan et al. (2019) to predict the ribosomal p-site as described in their methods section, precisely identify the start of the previously mentioned CTG codon (position 1236020) as a translation initiation site. In summary, pop was identified as a translated open reading frame based on ribosome profiling experiments. In the following, we present further data supporting a protein-coding status for the gene as well as expression and functionality of this overlapping gene in the human pathogenic bacterium E. coli O157:H7 EDL933.

FIGURE 1 | (C) The predicted SD sequence (ΔG° = −3.6 kcal/mol) upstream of the putative start codon is aligned to the consensus of the anti-SD sequence of the 16S rRNA in the 30S ribosomal subunit (Ma et al., 2002). The core of the ribosome binding site is displayed in bold letters. (D) Secondary structure of the first 40 bp of the predicted terminator. The folding was conducted with Mfold and the structure has a final energy of ΔG = −8.6 kcal/mol.

Overexpression Phenotypes Indicate Functionality of pop Competitive growth experiments were conducted to analyze the influence of pop on EHEC's growth. For this purpose, the longest possible ORF of pop and the translationally arrested mutant ORF Δpop were cloned in an overexpression plasmid under the control of an arabinose-inducible promoter with the optimal ribosome binding site of the plasmid (pBAD+pop and pBAD+Δpop). The mutant plasmid differs in just one base from the wild type plasmid, and this single base substitution introduces a stop codon in the overlapping gene (Figure 1A). It is assumed that this small alteration does not change the activity and function of the expressed pop RNA, which could possibly act as an interfering ncRNA; such an RNA effect would indeed affect ompA RNA levels, but equally for both plasmids. However, protein production from the pop gene ceases only with pBAD+Δpop. Thus, any difference in growth after overexpression of either the intact or the mutated pop-ORF can be explained by the presence or absence of a protein (i.e., Pop) encoded by this OLG. The competition experiment was conducted in different stress conditions (Figure 4A). Altered growth of cells overexpressing the mutant or wild type sequence was detected in LB-based media supplemented with different stressors, whereas plain LB medium did not have a significant influence on the relative growth of mutant and wild type. For instance, addition of the organic acids L-malic acid and malonic acid as stressors led to better growth of cells containing the wild type plasmid compared to cells expressing the mutated sequence, indicated by a significantly lower CI compared to the t0 condition; thus, the presence of pop is advantageous in these conditions. Addition of the acidic substances resulted in an initial pH shift from 7.4 to 5.8. A higher CI was detected when LB was buffered with bicine to a pH of 8.7. However, LB adjusted to an acidic (pH 5.8) or near-neutral (pH 7.4) milieu with the biological buffers MES and MOPS, respectively, did not result in significant growth differences. We estimated copy number differences of competitors separately grown in LB, LB + L-malic acid and LB + bicine to exclude competitive growth effects occurring due to different plasmid amounts within the cells.
In each condition, the cycle threshold differences between the plasmid-encoded gene coding for β-lactamase (bla) and the genome-encoded gene coding for the siroheme synthase (cysG) in cells overexpressing pop or Δpop were determined (ΔCq). The ΔCq ratios of the two competing strains do not significantly differ before and after growth in any of the conditions (Welch two-sample t-test, p-values > 0.05, Figure 4B and Supplementary Table S4). Thus, the growth differences are true effects due to overexpression. In accordance with the growth advantage of the wild type in the presence of malic acid (Figure 4A), pop RNA quantification with qPCR showed increased mRNA levels of pop relative to 16S rRNA levels in the presence of L-malic acid (fold change 2.4, Figure 4C and Supplementary Table S5). In contrast, less mRNA was detected in bicine-buffered LB medium at pH 8.7 (fold change 0.35, Figure 4C). Although significantly different ΔCq values were detected only in the alkaline medium (one-tailed Welch two sample t-test, p-value = 0.03), we suggest that the fold change in L-malic acid also differs, though the p-value is 0.17, as p-values are often combined with a fold change to identify differentially expressed genes (e.g., Huggins et al., 2008; McCarthy and Smyth, 2009). We find that pop expression is differentially regulated between malic acid and bicine based on these qPCR results (fold change 6.9). Next, genomic knock-outs for pop in E. coli O157:H7 EDL933 were constructed (Δpop and Δpop v2). Base substitutions were introduced 64 and 282 bp downstream of the potential start codon CTG. The stop codon mutation of EHEC Δpop v2 was inserted after the codon GTG in peak region 3 identified in ribosome profiling data (Figures 2A-C, also discussed below). The mutations each led to a stop codon in pop, whereas the amino acids in ompA remained unchanged (Figure 1A). We tested the mutants Δpop and Δpop v2 in several relevant stress conditions in competitive growth against the wild type strain, but did not detect a significant difference in growth in any condition for either of the mutants (Supplementary Figure S2).

FIGURE 3 | (A) As there is no clear peak, the mean of all of the p-sites was calculated instead. The mean is shown with a dotted red line, pictured in relation to the putative start codon CTG in green. The significance of 5 reads was calculated based on the read distribution modeled as a binomial process. The total sequence space available is 3943267 nucleotides. If we conservatively assume a target size of 100 bp, this equates to a probability of success for a single trial of 100/3943267 = 0.0000254. We find 56812 reads antisense to annotated genes, which is the number of independent trials, if we generously assume as the null hypothesis that all are due to non-functional "background" translation events. With these parameters, the probability of obtaining five or more reads in our target region, i.e., P(x ≥ 5), is equal to 0.0159; therefore, 5 reads are significant (p-value ≤ 0.05). (B) Average ribosomal p-site positions for 85 weakly expressed genes. Positions of average p-sites relative to annotated start codons (blue dotted line), as illustrated in panel (A), are plotted for all 85 weakly expressed forward (+) strand annotated gene start regions. Weakly expressed is here defined as having at least four mapped reads within 30 nucleotides of the annotated start site, but no single position (peak) with three or more reads. The location of the average p-site for pop (red asterisk) lies within this distribution, indicating that the observed cluster of ribosome-stalled reads near the CTG site is informative.
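The binomial estimate quoted in the legend above can be checked directly with the parameters stated there (56812 trials, a single-trial success probability of 100/3943267, and at least 5 successes). The snippet below is only an illustration of that stated calculation, not an independent reanalysis of the read data.

from scipy.stats import binom

n = 56812                 # reads antisense to annotated genes (trials)
p = 100 / 3943267         # 100 bp target region / total sequence space
# P(X >= 5) = 1 - P(X <= 4), i.e. the binomial survival function at 4
p_value = binom.sf(4, n, p)
print(round(p_value, 4))  # ~0.016, in line with the 0.0159 reported above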
Thus, we assume that CTG might be the start codon for pop. Based on the clear effect of overexpression, we propose that pop codes for a protein, as the mRNAs transcribed from the intact sequence and the translationally arrested variant differ in one nucleotide only. Thus, RNA interactions of pop and ompA are probably not affected. Opposite overexpression phenotypes were found in alkaline-buffered and acidified media, so we propose a pH-dependent function. In line with this hypothesis, the mRNA is differentially regulated in various pH conditions (Figure 4C).

FIGURE 4 | Effect of pop expression in various pH ranges. (A) Overexpression phenotypes of pop in competitive growth assays. Competitive growth of EHEC while overexpressing either intact (pBAD+pop) or translationally arrested pop (pBAD+Δpop) was conducted in conditions as indicated, i.e., LB medium supplemented with an organic acid or a biological buffer. Mean competitive indices (CI) are given as the ratio of the relative abundance of cells expressing the mutant or wild type plasmid, measured by peak heights, i.e., fluorescence intensities in sequencing electropherograms at the mutated positions, in the different culture conditions relative to the input ratio at t0. Error bars indicate standard deviations. Statistically significant differences of the CI (tested with a paired t-test) before and after growth are indicated (*p ≤ 0.05; **p ≤ 0.01; ns, not significant). (B) Copy number estimation for both plasmids. Genomic and plasmid DNA of EHEC transformants (pBAD+pop and pBAD+Δpop) separately grown in the indicated growth conditions was relatively quantified by qPCR of a genome-specific (cysG) and a plasmid-specific (bla) gene segment. The mean ratios of the quantification cycle differences [ΔCq = Cq(cysG) − Cq(bla), reflecting copy number] for the two transformants from three biological replicates are given. Error bars indicate standard deviations. No statistically significant ratio difference before and after growth was detected (tested with a paired t-test; ns, not significant, p-value > 0.05). (C) pop expression measured by quantitative PCR (qPCR). The fold change of pop mRNA has been calculated based on ΔCq (difference in cycles of quantification) values of pop mRNA and 16S rRNA of EHEC grown to early exponential phase (OD600 = 0.3) in LB, LB + L-malic acid or bicine-buffered LB medium. Mean values and standard deviations of three biological replicates are shown. Statistical significance was tested with a one-tailed Welch two sample t-test (*p ≤ 0.05; ns, not significant).
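For illustration, the competitive index defined in the Methods and in the legend above, CI = (Mt_x/Wt_x)/(Mt_t0/Wt_t0), can be written as a small function of the electropherogram peak heights. The peak-height values in the example are invented placeholders, not data from the experiment.

def competitive_index(mt_cond, wt_cond, mt_t0, wt_t0):
    """CI from peak heights of the mutant (Mt) and wild type (Wt) base at the
    mutated position, in the cultured condition and in the t0 input mixture."""
    return (mt_cond / wt_cond) / (mt_t0 / wt_t0)

# Placeholder peak heights: the mutant allele lost ground during growth,
# so CI < 1, i.e. the wild type (intact pop) plasmid was favoured.
print(competitive_index(mt_cond=300.0, wt_cond=900.0, mt_t0=500.0, wt_t0=500.0))  # ~0.33

Because the mutant peak is in the numerator, a CI significantly below 1 after growth corresponds to an advantage of the cells carrying the intact pop construct, as reported above for L-malic and malonic acid.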
The Transcriptional Unit of pop Includes an Active Promoter and a Rho-Independent Terminator Cappable-seq (Ettwiller et al., 2016) is a recently developed approach detecting the TSS of mRNA with next generation sequencing. Using this method, a weak but significant transcriptional start site was determined at genome position 1235862 in the intergenic region between ycbG and pop in independent biological experiments (Figure 1, TSS; Supplementary Figure S1). Two independent bioinformatics tools, BPROM and bTSSfinder, were used to analyze the upstream region of the TSS for potential promoter sequences. Both programs identified a σ70 promoter [BPROM LDF score 0.59, Solovyev and Salamov (2011), bTSSfinder score 1.86, Shahmuradov et al. (2017), Figure 1B and Supplementary Figure S1]. Although the distance between the transcriptional start site and the −10 box of the promoter is not optimal (2 bp instead of approx. 7 bp), promoter sequence activity was verified by means of a GFP assay (Figure 5A). We found a significantly enhanced fluorescence in cells harboring the plasmid containing the putative promoter sequence compared to those with the empty vector in LB and bicine-buffered medium, indicating an active promoter sequence upstream of the TSS of pop. The fluorescence signal of the promoter in the basic milieu (pH 8.7) was strikingly higher, but this may result from GFP accumulation during the longer incubation times necessary in this medium (Miller et al., 2000). Since the promoter activity for pop is weak compared to promoters of annotated genes, we tested for polycistronic expression starting from the promoter of ycbG. Reverse transcription PCR (RT-PCR) was performed to examine the transcript of pop (Figure 5B). No mRNA spanning both genes was detectable; thus, we propose that pop is transcribed from the tested promoter monocistronically. A 120 bp long rho-independent terminator was predicted 295 bp downstream of the stop codon of pop using FindTerm (Solovyev and Salamov, 2011). Hypothetical secondary structures of 30-bp segments of this region were created with the tool Quickfold of Mfold (Zuker, 2003). A stable stem loop structure (ΔG = −8.6 kcal/mol) within bases 35-78 of the predicted terminator sequence was detected (Figure 1D). To verify the 3′ end of the mRNA downstream of the hairpin structure, RT-PCRs were performed. We used reverse primers binding either within the downstream ORFs or further downstream, beyond the secondary structure. We observed that pop and the downstream ORFs are co-transcribed and transcription is terminated just downstream of the predicted stem loop structure (Figure 5C). Based on these results, we conclude that pop forms an approximately 1120 bp long transcriptional unit covering almost the entire open reading frame of the annotated gene ompA, excluding the upstream gene ycbG but including the downstream ORFs, ending with a rho-independent terminator. Western Blot of Pop Since we detected an active promoter (Figure 5A) and phenotypes in competitive overexpression experiments (Figure 4A), the coding capacity of pop was assessed using Western blotting. pop was cloned in-frame with an SPA tag (7.7 kDa; following Zuker, 2003) on a pBAD-based plasmid, and overexpressed in EHEC. SPA-tagged proteins were visualized after separating whole cell lysates on tricine gels. The experiment was performed in LB at pH 7.4 (Figure 6). Besides the expected full-length protein (theoretically 30 kDa, detected at approx. 34 kDa), shorter products were immunostained (approx. 20 and 24 kDa). The amount of the full Pop protein appears to increase within the first 1.5 h after induction and decreases afterward when overexpressed, pointing to an instability of the protein. However, this experiment does not prove natural occurrence of the protein, due to the artificial overexpression. On the other hand, detection of an endogenously expressed protein in Western blots failed, as pop-tagged cells could not be recovered due to technical issues. Nevertheless, we could detect an initially stable Pop protein, supporting a protein coding potential of pop in general. Bioinformatic Evidence for pop Being a Protein-Coding Gene Protein databases were searched for Pop homologs in order to find hints of a specific function.
No significant similarities with annotated proteins were found using blastp analysis in PDB (Protein Data Bank), UniProtKB/Swiss-Prot and the Ref-Seq protein database, but homologous proteins were detected in NCBI's non-redundant protein sequence (nr) database. However, the hits covered at best 67% of the amino acid sequence of pop. A deeper analysis of the top hit (uncharacterized protein, 67% coverage, 99% identity, e-value of 4 −91 ) and the genomic sequence of the target organism Shigella sonnei showed that its ompA homolog was not annotated due to ambiguous bases at its 5 end, which resulted in a missing start codon for ompA in this case. Consequently, pop was "allowed" to be predicted ab initio, as ompA had no obvious gene structure and was, therefore, rejected during annotation. This result corroborates the known function of many algorithms like Glimmer, Prodigal or Prokka, which systematically avoid annotation of long (nontrivially) overlapping genes (Delcher et al., 2007;Hyatt et al., 2010;Seemann, 2014). Further, NCBI explicitly forbids long overlaps in their prokaryote genome annotation standards (NCBI, 2018). To check whether pop is recognized by gene finding algorithms in the case of an absent ompA annotation, we applied Prodigal to four genomes of bacteria in the family Enterobacteriaceae (E. coli O157:H7 EDL933, S. dysenteriae, K. pneumoniae, E. cloacae). Potential start codons of ompA were masked with N bases in each genome and consequently ompA was not detected. In contrast, pop was predicted as a proteincoding gene in all four genomes (Supplementary Table S6). The absolute prediction scores of all annotated protein-coding genes in this analysis ranged from −0.5 to >1000 in EHEC. The total score of pop is 14.37 and falls within the lowest 10% of the 5351 predicted EHEC coding sequences. Nevertheless, sequences with even lower scores than pop represent conserved annotated genes, e.g., a fimbrial chaperon or the entericidin A protein, to name two of many. Thus, pop has elements of a gene structure which enable its identification as a proteincoding gene when ompA is masked. In the normal case, pop is apparently rejected in annotation solely due to its overlapping gene partner ompA rather than any property of the sequence itself. DISCUSSION Antisense transcription is a widespread phenomenon in bacteria and often connected to regulatory function of the RNAs (Dornenburg et al., 2010;Ettwiller et al., 2016). However, there is increasing evidence that antisense RNAs can be templates for ribosomes to synthesize proteins (Miranda-CasoLuengo et al., 2016;Weaver et al., 2019). So far, characterized non-trivially overlapping genes are typically short (e.g., Fellner et al., 2014;Haycocks and Grainger, 2016;Hücker et al., 2018a). Therefore, the discovery and analysis of pop with a length of 200 amino acids is of special interest. The number of coding sequences in bacteria predicted by genome annotation algorithms is underestimated, in particular because neither small genes nor genes with extensive overlap are considered to be true genes (Burge and Karlin, 1998;Delcher et al., 2007;Hücker et al., 2017). Therefore, it is not surprising that pop has until now escaped attention. In our study, we detected translation of pop in three pathogenic E. coli strains. 
Although whether ribosome-profiling signals indicate translation of genes in all cases is debated, independent confirmation of expression or function for specific genes can be achieved either by chromosomal tagging (e.g., Baek et al., 2017;Meydan et al., 2019) or functional characterization, as for example presented here using competitive growth. The pattern of translation in ribosome-profiling data of this ORF is conserved across widely divergent E. coli strains, albeit with very low translation in some strains. It has been shown that translation of even short proteins in E. coli is associated with a significant bioenergetic cost (Lynch and Marinov, 2015), and specific translation of non-functional genes would therefore be expected to be acted against by selective processes relatively quickly. The strains compared diverged more than 4 million years ago according to molecular clock methods in the case of K12 and Sakai (Reid et al., 2000). This corresponds to more than 1 billion generations, and LF82 is still more distantly related. Consequently, we would expect all non-functional translated products shared with the common ancestor of these strains to have been lost. In contrast to conserved translation in pathogenic E. coli, pop translation was not observed in the well-studied E. coli K12. This finding, in combination with the discarding of pop in automated annotation, as it is embedded antisense in the conserved outer membrane protein ompA, leads us to propose that it was simply overlooked so far. We studied the transcriptional unit of pop and identified (i) a TSS (ii) downstream of a σ 70 promoter, (iii) a potentially coding ORF (i.e., pop) with a putative CTG start codon, and (iv) an experimentally verified rho-independent terminator of pop. In the ribosome profiling data, we identified three peak regions, which are evidence for translation initiation sites in translatome data (Oh et al., 2011;Woolstenhulme et al., 2015); a putative start codon of pop could be contained in each of these (regions 1-3 in Figure 2A and Supplementary Figure S1). All regions are covered with a substantial number of ribosomal profiling reads, and region 2 is covered best, particularly in EHEC EDL933. We assume that translation for pop starts in region 2, especially since a Shine-Dalgarno motif for ribosome binding was predicted and ribosome profiling data across divergent strains point to a putative translation initiation site therein. As mentioned, a nearby CTG is found downstream of a ribosome-binding site, representing a rare but sometimes-used start codon for prokaryotes (Spiers and Bergquist, 1992;Sussman et al., 1996;Hecht et al., 2017;Yamamoto et al., 2018). Furthermore, a TTG start codon is present in region 1, representing the longest potential ORF for pop. However, we could not find evidence for a TSS or SD-sequence, though the latter is not obligatory for gene expression (Moll et al., 2002;Gualerzi and Pon, 2015). This TTG was the start codon predicted by Prodigal as the most probable one, however, as the upstream gene ycbG has a predicted terminator ( G = −12.20 kcal/mol, indicated in Figures 1, 5B) and bicistronic expression of pop along with ycbG was excluded by our data, we propose that this TTG is not a start codon here. The start codon in region 3 (GTG) is located 45 amino acids downstream of the mutation introduced in pop for analysis in competitive growth. We did not find a phenotype for a translationally arrested mutant regarding this putative start codon. 
Furthermore, overexpression growth phenotypes found in competitive growth experiments are not conferred by the protein translated from this start codon. While these points are strong evidence that this GTG is not the start codon, formation of a protein isoform not carrying a phenotype in the conditions analyzed here cannot be excluded. In addition to the gene structure, the Pop protein was analyzed in our study. A Western blot verified the presence of a protein, which appears to be unstable when expressed from the plasmid. Nevertheless, native protein expression could not be investigated immunologically and natural occurrence of the protein Pop remains unclear. Pop might be stable, degraded or exist in different isoforms, a phenomenon reported for some bacterial proteins recently (Waters et al., 2011;Di Martino et al., 2016;Nakahigashi et al., 2016;Vanderhaeghen et al., 2018;Meydan et al., 2019). Most importantly, competitive overexpression growth assays conducted in this study are the best indication for a proteinaceous nature of the pop gene product. As recently shown, not only lossof-function screenings but also overexpression phenotyping is an appropriate approach to find novel genes and to elucidate their function (Mutalik et al., 2019). However, as shown previously, overexpression of unnecessary but usually non-toxic proteins often leads to decreased growth rates (Dong et al., 1995;Shachrai et al., 2010). This could be assumed in our assay conducted in bicine-buffered LB, in which cells expressing the full-length protein had significantly lower growth. Nevertheless, as growth behaviors of mutant and wild type pop expressing cells did not change in pure LB, the phenotype seems to be rather specific for the alkaline stress conditions and not due to an effect of overexpression stress. However, in acidified medium the cells had a growth advantage in comparison to cells expressing the truncated form and, thus, pop overexpression is beneficial to EHEC at low pH. This is important since this effect cannot be explained by stressed cells due to protein overexpression. In contrast, analysis of a genomic knock-out suggests that the absence of the protein is not deleterious for EHEC under the conditions tested. While it has been shown that effects of overexpression and knock-out can be complementary, this is not always the case (Prelich, 2012). Several examples exist in which the actions of genes can be compensated by each other [e.g., CLN1 and CLN2 in S. cerevisiae, Hadwiger et al. (1989); cold shock proteins in bacteria, Xia et al. (2001)]. For CLN1 and CLN2, both have similar effects when overexpressed separately, but absence of one of the genes can be balanced out by the other and only a double knock-out has a phenotype. In summary, we suggest that the investigated open reading frame encodes a protein, since it has all structural features of a protein coding gene, is translated and it shows overexpression phenotypes in pH stress. We propose the name pop (pHregulated overlapping protein-coding gene) for this novel overlapping gene. It should be noted that the hemC/F/H/L genes were previously referred to as popA/B/C/E but the OLG pop is not associated with any function of these. It could be speculated that the positive effect of overexpressed pop in acidic medium correlates with the acid tolerance of EHEC necessary to overcome the acidic barrier in the stomach after ingestion (Nguyen and Sperandio, 2012). 
Long ORFs embedded antisense to annotated genes, like pop, as well as other overlapping ORFs, may form a hitherto greatly underestimated source of proteins. Recently developed methods like dRNA-seq (Sharma et al., 2010) and Cappable-seq (Ettwiller et al., 2016) have identified hundreds of TSSs antisense to annotated genes, producing antisense transcripts with unknown translation status and function. Modern ribosome-profiling techniques, including stalling of ribosomes at translation initiation sites, have identified several unambiguous start codons for protein-coding genes which overlap with annotated genes either in sense or in antisense direction (Meydan et al., 2019; Weaver et al., 2019). We suggest that these "abnormal" transcriptional and translational signals in next-generation sequencing analyses should not be neglected but analyzed in more detail, as has been done here for the long overlapping gene pop. Many novel functional elements, especially for pathogenicity in novel hosts or survival in new niches, might be "hiding" in the genome of any bacterium.

DATA AVAILABILITY STATEMENT

The datasets generated for E. coli LF82 can be found in the Sequence Read Archive; accession numbers are SRR11217090, SRR11217089, SRR11217088, and SRR11217087.

AUTHOR CONTRIBUTIONS

BZ performed the experimental analysis on pop in EHEC EDL933 and database searches. ZA conducted the analysis of ribosomal profiling data. MK performed the ribosomal profiling in LF82. SS and KN supervised the study. BZ wrote the first draft of the manuscript including the figures with the help of KN and SS. All authors read and approved the final version of the manuscript.
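The Data Availability Statement above lists four SRA run accessions. Purely as a convenience sketch (assuming NCBI sra-tools is installed and on the PATH; this is not part of the original workflow), the runs could be fetched and converted to FASTQ as follows:

```python
# Sketch: download the LF82 ribosome-profiling runs listed above and convert
# them to FASTQ with NCBI sra-tools (prefetch and fasterq-dump must be
# installed separately, e.g. via conda). The output directory is arbitrary.
import subprocess

RUNS = ["SRR11217090", "SRR11217089", "SRR11217088", "SRR11217087"]
OUTDIR = "lf82_riboseq"  # arbitrary local folder

for acc in RUNS:
    subprocess.run(["prefetch", acc], check=True)
    subprocess.run(["fasterq-dump", acc, "--outdir", OUTDIR], check=True)
```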
Teachers’ Attitudes, Competencies, and Readiness to Adopt Mobile Learning Approaches — This study explores how teachers' attitudes and competencies influence their willingness to adopt mobile learning approaches. By mobile learning we mean teaching approaches that use mobile devices to enliven and extend traditional teaching. Of particular interest is exploring how first-order (e.g. lack of adequate access, time, training and support) and second-order (e.g. teacher's pedagogical and technological beliefs and willingness to change) barriers affect teachers' beliefs and attitudes. In the autumn of 2012, we conducted three mobile learning case studies in Central Finland. We used semi-structured interviews to collect data. The study indicated that positive experiences raised teachers’ willingness to use the mobile technologies again. Observations also highlighted the need for adequate support (i.e. technological and pedagogical support), and teachers' professional development. In particular, the teachers should have the confidence to embrace mobile technology in their teaching practices. Lack of confidence heightened other barriers and reduced the willingness to adopt mobile learning approaches. I. INTRODUCTION In recent years, technologies have become an important part of everyday life. This rapid diffusion of technologies has also affected the educational context. Traditional teaching and learning approaches are insufficient to fulfill the expectations of today's learners [1]. Students come into school as digital natives, and thus the lack of technologies can frustrate them [2]. An educational technology boom was experienced particularly in the 1990s [3]. Nowadays, various technologies offer a great deal of flexibility in when, where, and how teaching is distributed. Mobile devices, for instance, can offer 'just in time, just enough, just for me' learning [4]. Unfortunately, the reality is that most classrooms look the same as they did 100 years ago, and matters are no better in those countries that have started to integrate ICT (information and communications technology or technologies) into the curriculum and pedagogy [2]. Teachers mainly prefer to use conventional technologies to reproduce old pedagogy, and textbooks still are the most common and widespread resource in schooling [5]. Some teachers do not even see mobile devices as relevant for learning [5], and many policy-makers, educators, and parents fret about disruptive behavior (e.g. text messaging during lessons, using mobile phones for cheating or bullying), and therefore think that mobile devices are an inappropriate and potentially disruptive force in education [6]. Despite the huge number of studies which have explored the barriers and difficulties around ICT integration in education, it is essential to continue to develop an understanding of the problem. In particular, information about teachers' attitudes to and beliefs about mobile learning is needed, because mobile learning pilots and trials have until now been characterized by short-term, small-scale studies which focus on learner acceptance and attitudes [7]. This paper will explore how teachers' attitudes and competencies influence their willingness to adopt mobile learning approaches. Different learning theories offer different perspectives and practices to mobile learning. The behaviorist mobile learning approach, for instance, includes 'drill and feedback' activities, while more recent approaches highlight the use of authentic contexts and real-life problems [9]. 
In this study, the particular interest lies in exploring how first-order (e.g. lack of adequate access, time, training, and support) and second-order (e.g. teacher's pedagogical and technological beliefs and willingness to change) barriers proposed by Ertmer [9] affect teachers' beliefs and attitudes. The following section presents a review of the literature related to the factors that affect teachers' adoption of ICT and mobile learning. As such, it forms the theoretical framework on teachers' attitudes, competencies, and readiness to adopt mobile learning approach. The paper continues with the research method and results and concludes with reflective remarks for future research. II. BACKGROUND Teachers seem to have a growing interest in integrating ICT into education, but they can meet many barriers that can hinder the process and lessen their enthusiasm [10]. It has been repeatedly supported in the literature that integrating technology into instruction tends to move classrooms from teacher-dominated to student-centered environments [11]. Nonetheless, it is very important to realize that technologybased learning activity does not exist ready-made in a piece of software. The same software application can generate entirely different activities in different classrooms. Therefore, learning activities are always constituted through a situated interaction of students, teachers, and technologies. In other words, the learning culture impacts on the usage of technologies and software [12]. Five conditions can promote technology usage in an educational setting: technological infrastructure and support, teacher's beliefs and practice, curriculum, school leadership, and professional development [13]. Teachers particularly need confidence and school-level support in order to embrace technology and to change their teaching practices [14]. Three factors particularly affect teachers' adoption of ICT: the institution, resources and the teacher themselves [15]. From the perspective of the institution, the school leadership has a strong influence on resources, the curriculum, and professional development. School leaders should actively promote ICT usage through budget and funding decisions, adequate technological infrastructure and support, and opportunities for professional development [13]. A lack of ICT resources may prevent teachers from utilizing ICT in their teaching. This lack may be a result of the poor organization of resources or the concrete lack of physical devices [14]. Inasmuch as ICT integration is a very complex and multifaceted issue, teachers need professional development, and tools and techniques if they are to integrate new tools meaningfully into teaching. In other words, ICT integration can be seen as a collaboration between technologies, pedagogy, and content, and teachers need knowledge of these three components [16]. Teachers' perceptions and experiences of ICT vary and, therefore, even if teachers have up-to-date technology and support, they may not be enthusiastic enough to use technology in their classroom [15]. One very significant determinant of teachers' levels of engagement is their level of confidence in using the technology. Those teachers who consider that they are not skilled in using ICT can feel anxious using it in a classroom [14]. A teacher's confidence, in turn, is affected by the levels of adequate access, training, and support available [12]. For example, technical problems can have a direct effect on a teacher's confidence. 
In addition, teachers who do not realize the advantages of using technology in their teaching are less likely to use ICT [14]. The main reasons for teachers not realizing the advantages of technology are that they are not familiar with the specific tools or are not able to see the link between the tools and learning opportunities [5]. Moreover, teachers' theories of teaching are central to their ICT usage [15]. Hence, the user's overall attitude toward using technology is a major determinant of whether or not they use it. This attitude, in turn, is influenced by beliefs about perceived usefulness and perceived ease of use. In other words, people tend to use technology if they believe that it will help them to do their tasks better. However, despite any perceived usefulness, the user can believe that the technology is difficult to use and for that reason refuse it [17]. Consequently, internal beliefs can be affected by the external context and barriers. In other words, although teachers may recognize the importance of integrating technology into curricula, their efforts may be limited by both external (first-order) and internal (second-order) barriers (see Fig. 1). The term 'first-order barriers' refers to extrinsic obstacles such as missing resources. Second-order barriers, in turn, are rooted in teachers' underlying beliefs about teaching and learning. Even though second-order barriers may not be directly observable, their presence can be noticed in teachers' reasoning about their frustration. Thus, first-order barriers can be significant obstacles to technology integration, and second-order barriers can either reduce or magnify their effects [8]. The majority of early integration efforts focused on eliminating the first-order barriers. The initial view was that classroom integration would happen if teachers had access to enough equipment and training. However, even if every first-order barrier were removed, teachers would not automatically use technology. This highlights the fact that second-order barriers underlie and affect technology usage [9]. Teachers' attitudes toward mobile learning vary greatly. Many mobile learning projects in educational settings have already been initiated. However, the practice of using mobile devices is still emerging, and the concept of mobile learning has not yet reached the policy level. Projects are typically conducted on a small scale and driven by enthusiastic teachers. Nonetheless, many European governments, policy-makers, parents, and teachers treat mobile technologies as disruptive devices. Some countries have even banned or restricted mobile device usage in school [18]. Even though teachers are central to the success and sustainability of mobile learning [19], many mobile learning pilots and trials have mainly focused on learner acceptance or attitudes. These studies have found overall student perceptions of mobile learning to be positive [20]. In addition, students' intention to adopt mobile learning has been reported to be high [21; 22]. Research looking specifically at teachers' perceptions of and beliefs about the use of mobile learning is somewhat limited. For this reason, it is important to understand teachers' attitudes to and beliefs about mobile learning; thus, the primary interest in this study is to explore how the first-order and second-order barriers proposed by Ertmer [9] affect teachers' beliefs about and attitudes to mobile learning. III.
DESCRIPTION OF THE STUDY In order to explore teachers' beliefs and attitudes, we conducted three mobile learning case studies at schools in Central Finland in the autumn of 2012. The case studies were part of the Personal Mobile Space project (see [23]). The use of a case study method is appropriate because it can provide an in-depth examination and give an understanding of perspectives, opinions and expectations. Even though case studies have been criticized in particular for their lack of representativeness, as well as a lack of rigor in the collection, construction, and analysis of empirical materials [24], nevertheless the conclusions and explanations can be the most generalizable and most interesting aspect of case study research. The collected data may be specific to a particular school, student, or teacher, but the conclusions and explanations can be usable and generalizable in understanding how other schools, students, or teachers work [25]. For this study, three cases were selected. A total of six volunteer interested teachers cooperated when developing new ways of embedding mobile technologies for learning. We selected the cases in such way that it is possible to draw a broad picture of teachers' beliefs and attitudes from them. In other words, each case was unique, but the phenomenon under examination was the same in all of the cases; teachers' beliefs about and attitudes to mobile learning. All of the six teachers were female. Some of them had already taught for over twenty years, and others were at the beginning of their teaching studies and teaching career: in other words, they were trainee teachers. The teachers had little or no experience of mobile learning, but were very enthusiastic to see how they could use mobile technologies in their classrooms. Cooperating with the teachers gave us the opportunity to design learning activities using meaningful content that has relevance to the school curriculum. We used a semi-structured interview to collect data for the study. We designed the interview questions to cover the core aspects of mobile learning as well as to understand the teachers' experiences of and opinions about mobile learning. Table I presents the interview framework and themes. However, the actual questions were adapted to the individual case, taking into account, for example, the application(s) used. A. The Case Studies The three cases, 1) Nature Tour, 2) Math Trail and 3) Literature Tree, varied in terms of mobile learning application, objectives, duration, students and teacher (see Table II). In all cases, equal opportunities were offered to take advantage of loaned equipment as well as support during the experiment. The three cases will be introduced in more detail in the following sub-sections. 1) Case: Nature Tour The Nature Tour case explored the implementation of the Nature Tour mobile application in Finnish early childhood education settings. The primary objective of the Nature Tour mobile application is to enhance children's outdoor learning experiences by helping with the documentation of the field trips. A total of twenty-nine students, two teachers, and one assistant participated for two months. They used loaned smart phones that were preinstalled with the prototype of the Nature Tour application. The mobile application was used in appropriate situations on field trips to arouse children's interest in nature. The objective of the implementation was to begin nature education by observing plants and fungi in an authentic context. 
The researcher provided a short orientation session for the teachers, but otherwise the teachers worked independently during the two months. 2) Case: Math Trail The Math Trail case explored the implementation of Quick Response (QR) codes in Finnish primary school. A total of twenty-four students and their teacher participated for two weeks. The overall objective of the implementation was to advance the students' mathematical skills. The learning subject and objective were to learn about decimal numbers. At the beginning of each math lesson, the teacher taught the theory and the students solved five problems from the textbook. After solving textbook problems, the students could circulate along the math trail, which included QR code activities. Each student was given one loaned smart phone and a map of the trail, including QR code locations. For each QR code location, the students answered one problem by scanning the code and submitting their answer using the online form on the mobile device. If the answer was correct, the student received a hint about the following QR code location. The math trail included a total of 65 textbook-like decimal number problems planned together with the teacher. The researcher prepared the online form (implemented using HTML and JavaScript) and QR codes, and provided a short orientation session for the teacher and the students on how to scan QR codes. Otherwise the teacher worked independently during the two weeks. 3) Case: Literature Tree The Literature Tree case explored the implementation of QR codes in a Finnish secondary school. A total of sixteen students, two teacher trainees, and one teacher participated. The overall objective of the implementation was to revise lessons learned earlier about Finnish literary history. The activity included a 'literature tree,' a certain kind of map where the students were asked to place certain concepts in the right places. The QR codes contained hints, such as weblog texts and pictures, which helped the students to place the concepts in the right place on the literature tree. The graphics of the literature tree, as well as the contents of the activity, were designed by the trainee teacher, but the technical implementation, such as QR codes, was undertaken by the researcher. The researcher also provided a short orientation session for the teachers and students on how to scan QR codes, but otherwise the teachers worked independently during the QR sessions. IV. RESULTS The collected data were analyzed through the first-order (i.e. external) and second-order (i.e. internal) barriers. Table III brings together the findings and observations made in the teacher interviews. A. First-order barriers and second-order barriers Both first-order and second-order barriers were observed in the teachers' interviews. The first-order barriers mainly concerned lack of resources and training. Mobile technologies are still a new phenomenon in Finnish education, therefore it was predictable that the teachers would think that they should have more training. In all cases, equal opportunity was offered to take advantage of loaned equipment. However, teachers highlighted the fact that a lack of adequate access could be a major barrier in the future. The second-order barriers mainly concerned resistance to change and lack of confidence. Traditional unchanged views and lack of confidence strongly affected teachers' attitudes and readiness to adopt mobile learning approaches. 
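As a technical aside on the Math Trail and Literature Tree cases described above: the actual answer form was implemented by the researcher in HTML and JavaScript and is not reproduced here. The following Python sketch only illustrates the underlying flow (encode a problem in a QR code, check a submitted answer, reveal the hint to the next stop), assuming the commonly used qrcode package; all problems, answers, and hints are invented placeholders.

```python
# Illustrative sketch of a QR-code "math trail": each stop's QR code encodes
# the problem text; a correct answer unlocks the hint to the next location.
# Requires: pip install "qrcode[pil]". All content below is made up.
import qrcode

TRAIL = [
    {"problem": "Stop 1: 3.2 + 0.45 = ?", "answer": "3.65",
     "hint": "The next code is taped under the map stand."},
    {"problem": "Stop 2: 7.5 - 2.85 = ?", "answer": "4.65",
     "hint": "The next code is on the gym door."},
]

def make_codes():
    """Render one QR image per stop (stop_1.png, stop_2.png, ...)."""
    for i, stop in enumerate(TRAIL, start=1):
        qrcode.make(stop["problem"]).save(f"stop_{i}.png")

def check_answer(stop_index: int, submitted: str) -> str:
    """Return the next-location hint for a correct answer, else a retry prompt."""
    stop = TRAIL[stop_index]
    if submitted.strip() == stop["answer"]:
        return stop["hint"]
    return "Not quite, try again."

if __name__ == "__main__":
    make_codes()
    print(check_answer(0, "3.65"))
```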
A positive, interested and curious attitude, in turn, gave them the courage to step out from their comfort zone. As a result, the experience was positive and gave them courage to continue. Next we will introduce the results in more detail, in the order used In Table III, including a few relevant quotes from the interviews. 1) Lack of adequate access In all cases, equal opportunity was offered to take advantage of loaned equipment. Four teachers considered that in the future, a lack of adequate access could be a significant barrier for utilizing mobile devices as part of teaching and learning practices. In other words, one obvious challenge when implementing mobile learning is whether the school can provide the necessary tools and devices. One solution could be BYOD (Bring your own device) which means that students bring their own devices to school for educational purposes. The positive aspect of BYOD is that the students' own devices are familiar to them, and they are able to customize them to their needs. However, it is important to realize that not all students have the necessary equipment. This was also highlighted in the teacher interviews: Some students have flashy devices, but some only have standard phones, not even smart phones. (Teacher 3) In this sense, school leaders have a strong influence on mobile learning, as they make the decisions about the technological infrastructure and budget, and consequently the supply of equipment. 2) Lack of time From the policy level view, opportunities for professional development as well as time for training play a major role when considering ICT usage. There are plenty of educational materials to go through as well as learning goals to be reached during the school year. Education is also scheduled on a semester, daily and hourly basis. The teacher's time for training on new techniques and approaches is quite limited. One of the participants clearly highlighted in the interview that teachers should be provided with resources and time to practice and improve their skills: I am interested in incorporating technology into the classroom, but I complain that the employer must provide time and resources. (Teacher 2) 3) Lack of training All of the teachers reported that they would need more training. Because the mobile learning approach was new to them, they ended up having to repeat the same tasks that they would undertake using more conventional tools. One of the teachers, for instance, mentioned that creativity was missing, because they were not familiar with the use of mobile technology. In particular, it seems that teachers would need training on how to solve technical problems. Three teachers reported technical problems during the experiment. In Case 1, the problems were such that solving them alone was hard. Because of this, the teachers began to think that they did not have sufficient skills, and that the mobile application was not relevant to their early childhood group. In Case 2, the technical problems were minor and did not interfere with the experiment significantly, but frustrated both teacher and students. 4) Lack of support All of the participants were offered equal opportunities for support during the experiment. It is not obvious why the teachers in Case 1 did not utilize this support, even though they evidently experienced some technical problems and needed help during the experiment. 
Solving the problems alone was hard, and the teachers began to think that they did not have sufficient skills and that the mobile application was not relevant for them. Other teachers did not report a lack of support during the experiment. In contrast, they thanked the researcher for the support and assistance they received. For instance, a teacher in Case 2 had some problems during the experiment, but she reported that she received sufficient support and help whenever she needed, and that she felt that she did not have to deal with the problems on her own. These observations indicate the need for sufficient support. The support should be both technical and pedagogical. Because the mobile learning approach was new for the teachers, they did not utilize the potentials of mobile technologies. However, all of the teachers realized the eminent potentials of mobile technologies. One of the teachers, for instance, commented: When the role of the mobile application was invisible and technically automatic, you definitely would be able to apply and use it in different places and with different themes. (Teacher 2) In all cases, the activities were more or less behaviorist, presenting learning materials, obtaining responses from learners, and providing feedback. In an ideal scenario, mobile learning activities could be arranged in authentic real-life contexts where students discover and solve problems relating to what they find. In other words, teachers would evidently need support and knowledge on how to use mobile technologies as well as how to utilize them in a more studentcentered way. 5) Resistance to change A positive, interested and curious attitude gave the teachers the courage to step out of their comfort zone. As a result, the experience was positive and gave them even more courage to continue. The teachers in Cases 2 and 3 also noticed that students are sometimes better with the technology than them, but that this is acceptable and at times even a good thing. The teachers also reported that it is nice to use tools that are familiar to the students, and in that way come closer to the students' world. All of these observations indicate willingness to change. One of the teachers commented: When you are considering this time and this life, then yes, you should include some good things into teaching as well. (Teacher 3) The teachers in Case 1, in their turn, highlighted their lack of adequate skills and confidence, which were clearly reflected in their overall experience as well as their readiness and willingness to change and adopt a mobile learning approach. 6) Lack of confidence The teachers in Case 1 were not confident, and they highlighted their lack of adequate skills during the whole interview. The lack of confidence gave rise to some fear of utilizing the mobile application. Because of this fear, the teachers opened the application for the children. They thought that they should be better able than the children in order to be able to guide them. Both teachers, however, believed that the children would have learned to use the application, but as they feared that something could go wrong, they preferred to set things up in advance. One teacher reflected on her lack of confidence: My anxiety would have likely been a lot lower if the children had known how to use the application. (Teacher 1) The teacher in Case 2 also highlighted that she does not have much knowledge or skills about mobile devices. 
She stated that the anxiety was reduced by the fact that the students already knew how to use the equipment, and that there was an opportunity to obtain support during the whole experiment. 7) Traditional teaching practices Traditional teaching practices were particularly identified in two teacher interviews. Teachers 2 and 3 reflected on their traditional teacher-centered approaches. Teacher 2 has over 20 years' teaching experience. The earlier short mobile learning experience was very positive, and she decided to take part again. She commented that they already had plenty of material to use, and new tools do not always integrate easily, especially if one is not technically skilled. During the experiment, she noticed that her teaching practices are quite traditional and stereotyped. These beliefs may not have been apparent to her until now, but they may have impacted on her attitudes during the experiment. She reflected: I think that the children have also already learned our overspecialized practices. When they were asked if they are able to use the devices in the classroom, they answered no they are not. At that point, I thought that our thoughts are quite stereotyped. (Teacher 2) Teacher 3 also has over 20 years' teaching experience. She sees mobile learning as one method of enriching teaching, but considers that new things should always be taught in a teachercentered way. In other words, she thinks that mobile learning could be one method of occasionally inspiring teaching, but that its constant use could be inconvenient. The experiment was interesting and she definitely would like to know about other possibilities for utilizing mobile technologies as part of teaching and learning. In other words, even though her pedagogical views are quite teacher-centered and traditional, her willingness to change and curious attitude gave her the inspiration to try something new. 8) Interest in adopting mobile learning approaches Four teachers reported that they would try a mobile learning approach again. In particular, the positive experience gave them the courage to try again. One teacher trainee reflected after the experiment: After this experiment, I think that I have the courage to utilize mobile devices as part of my teaching practice. (Teacher trainee 1) When the experience was more challenging, this reduced the teacher's enthusiasm and willingness to adopt mobile learning approaches again. B. Other emerging aspects Two teachers (Teachers 3 and 4) highlighted the abusive use of mobile devices. This thinking about and fear of abusive and disruptive use can lead to teachers banning the mobile devices in the classroom. A further interesting research topic could be how to contribute this perception, as it could be a very significant barrier. It has been argued that teachers that begin with limited knowledge may initially provide restricted student use until they have mastered the relevant skills themselves [9]. Teacher 3 wanted to keep things under control, and highlighted that she does not have adequate skills. Perhaps this is the reason why she also commented that there should be some sort of restriction in order to ensure the safe use of devices. The teachers' opinions about mobile learning were mainly very positive. The teachers reported that the experiment extended their thoughts and that they had started to consider additional uses of mobile technologies in an educational context. Many good suggestions were presented. However, all of them said that they would need more practice. 
One teacher (Teacher 3) suggested that mobile learning activities such as QR code activities, etc., could be included in and enclosed with the textbook; this would make it easier for teachers to organize mobile learning activities. She reflected: If someone made ready-made solutions, then of course I would employ them. (Teacher 3) One big issue that was also raised in the interviews was the fear and anxiety that teachers need to overcome before they can start to utilize mobile technologies as part of their teaching practice. The teacher in Case 3 reflected: It is like stepping away from your comfort zone. However, when you do it, you can notice that actually you did not have to step away from your comfort zone. (Teacher 4) V. CONCLUSIONS This study explored teachers' attitudes and competencies and how they influence teachers' willingness to adopt mobile learning approaches. The primary interest lay in exploring how first-order and second-order barriers affect teachers' beliefs and attitudes. Both first-order and second-order barriers were observed in the teachers' interviews. The barriers are more or less the same as those related to any other ICT usage. The major barriers were lack of confidence, lack of competence, and lack of access to resources. Since mitigating these barriers is also found to be a critical component for successful ICT integration [8], it is important to provide resources, professional development, and sufficient support for teachers. In other words, the findings highlighted the fact that it is important to try to eliminate first-order barriers in the case of any form of ICT integration. However, even if the first-order barriers are eliminated, the teacher still might not be willing to utilize mobile technology, for example, because of a lack of confidence. The study, therefore, also indicated that secondorder barriers exist. Second-order barriers are those fundamental internal barriers that can hinder teachers' efforts in technology implementation [9]. From a first-order barrier perspective, the school must provide the equipment needed, as not all of the students have the tools necessary for a BYOD approach. Inasmuch as mobile devices in an educational context still are quite a new phenomenon, it is also significant to provide time and professional development opportunities for teachers. One option, for instance, could be to provide easy accessible and ready-made solutions for teachers, to make it easier for them to initiate the approaches. In this way, the teachers themselves could start to develop their skills and confidence in using mobile technology as part of their teaching practice. When comfort and competence are relatively high, teachers might start to design new, creative, and student-centered ways in which to utilize mobile technology. The mobile learning activities should complement and extend traditional teaching and learning as well as offering something that it is not possible to achieve using traditional teaching and learning resources. Hence, it is important to continue to clarify the potential of mobile technologies in an educational context. How does one integrate mobile technologies into teaching and learning in a meaningful and sustainable way? One very significant observation is that the pedagogical aspect is very important. It is not worthwhile trying to reproduce old pedagogies using mobile technologies. However, in many cases, as in the cases described in this paper, the technology is in fact often employed to reproduce old pedagogies [5]. 
The Nature Tour case was closest to mobile learning, as the learning was extended to the authentic context of the natural environment, and the students were able to document things as they wanted. However, the experiment had its own limitations, and not all of the core aspects of mobile learning were fulfilled sufficiently. The other two cases, involved delivering instruction through a mobile device. The same instructions could have been written on a piece of paper or spoken by the teacher. In other words, teachers need sufficient technical and pedagogical support as well as knowledge about technology, pedagogy, and content in order for successful ICT and mobile learning integration to take place [16]. When the technical problems are solved, and teachers have the comfort and competence to utilize technology, they will also need support to find new ways in which to integrate technology into their classrooms. From a second-order barrier perspective, this study indicated that it seems that it is significant for teachers to have the confidence to embrace mobile technology in their teaching practices, as lack of confidence brought out other barriers more easily and reduced willingness to adopt mobile learning approaches. A positive, interested and curious attitude, in turn, gave the teachers the courage to step out from their comfort zone and to accept the fact that the students might be better with technology and that it is acceptable to ask for help if needed. It was also observed that traditional and stereotyped teaching practices influenced teachers' views and willingness to adopt mobile learning approaches. It was, for instance, observed that limited knowledge and somewhat traditional pedagogical beliefs led to restrictions on students' use of mobile learning. It was also observed that a positive experience gave teachers the courage and interest to adopt mobile learning approaches again. This study indicated that both first-order and second-order barriers can hinder the integration of mobile technology into teaching practices. As the teacher is very central to mobile learning implementation [19], it is important to try to understand teachers' attitudes and beliefs in a more detailed way. For this reason, it is essential to continue to develop our understanding about the issue. It would be particularly important to identify specifically those factors that might affect teachers' perception that mobile devices are inappropriate, potentially disruptive, and should be banned. Thus, to clarify the findings, more thorough evaluation studies should be conducted. Even though the teachers' views and opinions of mobile learning were mainly positive, and most of the teachers were willing to try mobile learning again, it is not possible to predict whether they are going to utilize mobile technology again or how they are going to utilize it. However, the case studies clearly extended the teachers' perceptions and initiated some ideas about how to utilize mobile technologies in the future. All of the teachers reflected on how to develop further as well as how things should be done differently and changed in the implementation. Hopefully, the case studies and positive experiments represented an incentive to utilize mobile technologies again. It was also observed that in some cases the barriers reduced the willingness to adopt mobile learning approaches and caused negative beliefs. Is it possible to somehow change these attitudes and beliefs, or are they permanent? 
Did these attitudes and beliefs already exist before the experiment, or did the experiment produce them? Although many questions remain open, the findings in this study provide some insights and a good basis for continuing research into teachers' attitudes to and beliefs about mobile learning.
Sustained swimming mitigates stress in juvenile Brycon amazonicus reared in high stocking densities The objective of this work was to evaluate the effect of stocking density associated with the swimming exercise on the stress responses of Brycon amazonicus. During 70 days, fish were subjected to three stocking densities: LD, low density of 88 fish per cubic meter; ID, intermediary density of 176 fish per cubic meter; and HD, high density of 353 fish per cubic meter. These densities were combined with static water (non-exercised group) or moderate-speed water (exercised group). Chronic stress was observed in HD, and plasma cortisol and glucose increased with the stocking densities. In HD, levels of plasma cortisol were significantly lower in exercised fish (135 ng mL-1) than in non-exercised ones (153 ng mL-1). The greatest hepatic glycogen bulks occurred in fish kept in ID and sustained swimming. Hepatic free amino acids (FAA) increased with the stocking density, particularly in non-exercised fish. The contents of FAA in the liver and of free fatty acids (FFA) in the liver and muscle were mobilized to meet the metabolic demands imposed by exercise and stocking density. The hematological parameters remained stable. The results show that Brycon amazonicus is more resistant to stress when subjected to sustained swimming and high stocking density than to static water. Introduction Aquaculture is growing at a rate of 5.8% a year as consequence of a series of combined factors, such as human population growth, rising incomes, and urbanization, which require high-quality healthy products (FAO, 2016).In order to sustain or enhance this growth, a high-tech aquaculture is being adopted, with the development of oxygen generators, automatic recirculation systems, high quality food, and aquaculture systems that support high fish densities, among others (Badiola et al., 2012).However, to reach high profits, fish farmers tend to overstock, ignoring the species demands, which increases the competition for space and contributes to environmental deterioration.High stocking rates may cause environmental impacts, reduced water quality, and the increase of toxicants as ammonia (Björnsson & Ólafsdóttir, 2006).These factors impair fish growth, health, and hematological parameters, increasing pathological conditions and leading to their death (Fazio et al., 2014). If the morphological and behavioral signs of overcrowding (Ashley, 2007), which indicate abnormal conditions in the rearing systems, are not detected in time, there may be a significant increase in production costs.For fish, the permanency of these conditions can result in chronic stress, leading to a greater energy demand to maintain homeostasis (Larsen et al., 2012).The most common approach to mitigate the stress caused by high stocking densities is using conditioned feed.However, exercising fish to overcome crowding stress is currently being studied, aiming to improve the outcomes of fish farming. 
Exercise has been shown to promote changes in fish growth, metabolism, and overall welfare (Arbeláez-Rojas & Moraes, 2010;McKenzie et al., 2012).It has been reported that cold-water fish subjected to exercise are more resistant to disease by robustness or fitness than non-exercised ones (Davison & Herbert, 2013).Furthermore, exercise reduces the time required to return to homeostasis after the handling and crowding stresses (Veiseth et al., 2006).For salmonids, exercise decreases blood levels of catecholamines, cortisol, and glucose (Davison & Herbert, 2013), whereas, for trout, exercise at low water speeds is slightly beneficial for recovery after acute stress, but not when the source is crowding (McKenzie et al., 2012).This shows the need to better evaluate the practice of sustained swimming to enhance the capacity of fishes in coping with stress from high stocking densities. In South America, the Neotropical freshwater fish Brycon amazonicus is a promising species for fish culture (Cruz-Casallas et al., 2011).Coming from the Amazon, Orinoco, and Tocantins-Araguaia River basins, this species has spread out over the Southeast region of Brazil due to its high growth rate, good performance, and exquisite flesh (Arbeláez-Rojas & Moraes, 2010;Cruz-Casallas et al., 2011).It is also tolerant to transport, handling, and crowding (Abreu et al., 2008).Owing to its rheophilic habit, the species is able to swim for long distances to reproduce, besides being streamlined and fusiform, which has motivated its use in trials on the performance of fish subjected to sustained swimming (Arbeláez-Rojas & Moraes, 2010).Therefore, to know and to optimize the stocking density of the species may be a crucial factor to plan the productivity and cost-effectiveness of intensive fish farms. The objective of this work was to evaluate the effect of stocking density associated with the swimming exercise on the stress responses of B. amazonicus. Materials and Methods The experiment was conducted in the laboratory of adaptive biochemistry of fish of the Department of Genetics and Evolution of Universidade Federal de São Carlos, located in the municipality of São Carlos, in the state of São Paulo, Brazil. Brycon amazonicus fingerlings were acquired from a commercial fish farm, located in the municipality of Mococa, also in the state of São Paulo, Brazil.The fish were transferred to the laboratory and held for over three weeks in 2,000 L tanks, in an aerated and thermostated flow-through water system, with mechanic and biological filters.Fish were kept under a natural 12-hour light:12-hour dark photoperiod and fed with 36% crude protein (2-4 mm commercial pellets) until acclimated to the new conditions.After this period, 210 fish (12.33±0.54cm and 18.44±0.12g) were classified and randomly distributed in circular 250-L fiberglass tanks with 170 L of net water volume, as detailed below. 
The experiments were performed in a 3x2 factorial arrangement, with three fish densities and two exercise patterns, totalizing six treatments distributed into six tanks.The replicates consisted of ten fish sampled per tank at the end of the experimental period.Three fish densities were tested: LD, low density; ID, intermediary density; and HD, high density, corresponding to 15, 30, and 60 fish per tank or to 88, 176, and 353 fish per cubic meter, respectively.Each density was evaluated in two conditions: exercise and non-exercise.In the exercised groups, fish were stimulated to swim at a moderate speed of 1.0 body length per second.The water speed was generated by a ¾ HP pump as in Arbeláez-Rojas & Moraes (2010).In the non-exercised groups (control), fish were left in static water.Except for water speed, all environmental conditions were rigorously maintained in all tanks.Throughout the trials, the fish were kept under natural photoperiod of nearly 12:12 hours for 70 days and were fed by hand with extruded 3-4-mm commercial pellets (32% crude protein, 13% moisture, 4% crude lipid, 14% ash, 6% crude fiber, 3% calcium, and 0.5% phosphorus), thrice a day to satiety. At the end of the experimental period, feeding was discontinued and the fish were starved for 24 hours.Ten fish per tank were randomly netted and anesthetized with 40 mg L -1 eugenol.The size and weight of each fish were measured right away.Blood samples were withdrawn from the caudal vein in heparinized syringes (5.000 UI heparin), and 1.0-mL blood aliquots were separated in 2.0-mL Eppendorf tubes for hematological determinations.Plasma was separated by centrifugation at 5,000 g for 5 min, at 5 o C, in a Mikro 200 centrifuge (Hettich Lab Technology North America, Beverly, MA, USA).Afterwards, fish were killed by cervical separation, and their liver, white muscle, and red muscle were rinsed with 0.9% cold saline and excised on a cold Petri dish.The livers were quickly weighed to determine the hepatosomatic index (HSI) and the condition factor (K).Then, liver and muscle samples were frozen using liquid nitrogen and kept at -80 o C for posterior metabolite analyses.The HSI and K were determined as follow: HSI = (liver weight/total weight) × 100 and K = (total weight/total length 3 ) × 100. Hematocrit values were evaluated with a microhematocrit centrifuge; the content of hemoglobin, in g dL -1 , was determined according to Drabkin (1948); and the red blood cell number, in 10 6 mm -3 , was counted in a Neubauer chamber.From these primary data, the following secondary Wintrobe indices were obtained: mean corpuscular volume, in μm 3 ; mean corpuscular hemoglobin, in pg per cell; and mean corpuscular hemoglobin concentration, in percentage. 
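The somatic indices follow directly from the formulas given above, and the Wintrobe indices are derived from the primary blood values. The sketch below applies those formulas together with the standard Wintrobe relations (the exact constants used by the authors are not stated in the text); the liver weight and blood values are invented example numbers, while the weight and length are in the range reported for the stocked juveniles.

```python
def hsi(liver_weight_g: float, total_weight_g: float) -> float:
    """Hepatosomatic index, % of body mass: HSI = liver weight / total weight x 100."""
    return liver_weight_g / total_weight_g * 100

def condition_factor(total_weight_g: float, total_length_cm: float) -> float:
    """Condition factor K = total weight / total length^3 x 100."""
    return total_weight_g / total_length_cm ** 3 * 100

def wintrobe(hb_g_dl: float, hct_percent: float, rbc_millions_per_mm3: float):
    """Standard Wintrobe indices from hemoglobin, hematocrit, and RBC count."""
    mcv = hct_percent * 10 / rbc_millions_per_mm3   # mean corpuscular volume, um^3
    mch = hb_g_dl * 10 / rbc_millions_per_mm3       # mean corpuscular hemoglobin, pg
    mchc = hb_g_dl / hct_percent * 100              # MCHC, %
    return mcv, mch, mchc

if __name__ == "__main__":
    # Weight/length roughly as reported at stocking; liver and blood values invented.
    print(f"HSI = {hsi(0.35, 18.4):.2f} %")
    print(f"K   = {condition_factor(18.4, 12.3):.2f}")
    print("MCV, MCH, MCHC =", wintrobe(hb_g_dl=8.0, hct_percent=30.0,
                                       rbc_millions_per_mm3=2.0))
```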
Plasma samples were deproteinized in 20% trichloroacetic acid (TCA) to maintain a 1:10 (plasma:TCA) ratio. The liver and the red and white muscles were thawed, sliced, weighed, and transferred to assay tubes with 20% TCA to obtain a tissue:TCA ratio of 1:10. Tissues were homogenized with a motor-driven Teflon pestle (model T 10 basic Ultra-Turrax, IKA Brasil, Campinas, SP, Brazil), with two strokes of 30 s at 1,000 rpm, in an ice bath. Afterwards, homogenates were centrifuged at 3,000 g for 10 min, at 5°C, in a 5424 R centrifuge (Eppendorf AG, Hamburg, Germany), and the supernatants were used as protein-free extracts for the determination of metabolites. Ammonia (Gentzkow & Masen, 1942), protein (Kruger, 1994), glucose (Dubois et al., 1956), triglycerides, total lipids (Folch et al., 1957), and free fatty acids (FFA) (Novák, 1965) were determined in the TCA extracts. In addition, glycogen was quantified in liver and muscles (Bidinotto et al., 1997). Plasma cortisol concentration was obtained by a specific enzyme-linked immunosorbent assay (ELISA) using a kit from the Neogen Corporation (Lansing, MI, USA).

The biochemical variables were checked for normal distribution and homogeneity of variance and, when necessary, arcsine transformations were performed. A two-way analysis of variance was used to compare factors and conditions, followed by Tukey's multiple range test to compare significant differences at 5% probability. All data were expressed as mean±standard deviation (n=10), and each tank was considered as an experimental unit with ten replicates (fish sampled). The SAS software, version 8.0 (SAS Institute Inc., Cary, NC, USA), was used for the analyses.

Results and Discussion

Rearing of B. amazonicus under sustained swimming attenuated the stress caused by crowding; however, plasma cortisol was increased in HD even in fish adapted to the exercise (Figure 1 A). The highest cortisol response was observed in fish subjected to HD and static water, whose cortisol levels surpassed those of fish under HD and exercise by 11.89%. In fish at ID, either in sustained swimming or static water, plasma cortisol was similar to that of fish subjected to HD and exercise, but higher than that of fish kept in both swimming conditions at LD.

The management of fish rearing systems by introducing mechanisms to control water speed can increase the resistance of fish to adverse factors, preserving animal welfare (Palstra et al., 2015). Short-term experiments on crowding have shown the high recovery capacity of B. amazonicus after transport stress (Abreu et al., 2008), indicating the high physiological adaptability of the species to cope with crowding. It is possible that the large number of specimens per volume allowed avoiding aggressive interactions, by increasing socialization, as reported for Arctic charr (Salvelinus alpinus) by Jørgensen et al. (1993) and for sea bream (Diplodus sargus) by Papoutsoglou et al. (2006). In addition to high fish densities, moderate exercise also reduces antagonistic behavior and social hierarchy. This can be attributed to the modifications in fish behavior when swimming upstream and grouping into schools, which reduces stress levels and the number of aggressions (Larsen et al., 2012). Since B. amazonicus is a fish species naturally adapted to continuous swimming over long periods, it is possible that the exercise contributed to attenuating the chronic stress caused by high stocking densities.
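The statistical design described in the Materials and Methods (3x2 factorial, two-way ANOVA followed by Tukey's test, run in SAS 8.0) can be re-created in outline as follows. This is an illustrative Python/statsmodels sketch with randomly generated placeholder data, not the authors' SAS code.

```python
# Illustrative re-creation of the statistical design (3 densities x 2 swimming
# conditions, two-way ANOVA followed by Tukey's test). The study used SAS 8.0;
# this sketch uses statsmodels and randomly generated placeholder values.
import numpy as np
import pandas as pd
import statsmodels.api as sm
from statsmodels.formula.api import ols
from statsmodels.stats.multicomp import pairwise_tukeyhsd

rng = np.random.default_rng(0)
densities = ["LD", "ID", "HD"]
conditions = ["SW", "SS"]  # static water / sustained swimming
rows = []
for d in densities:
    for c in conditions:
        for _ in range(10):  # n = 10 fish sampled per treatment
            rows.append({"density": d, "condition": c,
                         "cortisol": rng.normal(120, 15)})  # placeholder values
df = pd.DataFrame(rows)

model = ols("cortisol ~ C(density) * C(condition)", data=df).fit()
print(sm.stats.anova_lm(model, typ=2))           # two-way ANOVA with interaction

groups = df["density"] + "/" + df["condition"]    # all six treatment combinations
print(pairwise_tukeyhsd(df["cortisol"], groups, alpha=0.05))
```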
Regarding cortisol and blood glucose concentrations, a similar trend was verified (Figure 1 B). Plasma glucose levels of non-exercised fish in HD were 20.56% greater than those of fish subjected to exercise and lower densities. However, the observed values were similar in fish under ID and LD, and close to the values in fish subjected to exercise. Glucose levels in fish reared in HD were 17% greater than those of fish kept in LD. These results show that plasma glucose is a relevant metabolic parameter to indicate changes in homeostasis caused by stressors, as observed in other species (Segner et al., 2012). Augmented plasma glucose, therefore, can be used as a secondary index of stress. In the present study, fish density and exercise were correlated through blood glucose concentration. Stress caused by crowding was exacerbated by the absence of exercise, but attenuated at low fish densities. The glycemic values of fish under sustained swimming were lower than those of sedentary fish in HD, highlighting the beneficial effect of the exercise. Although fish were negatively affected by overcrowding, sustained swimming prevented or reduced the stress caused by HD, as shown by the physiological profile of the species. It should be noted that free glucose may originate from distinct metabolic sources, but the major supplier is glycogen stored in muscle and liver, which is distributed as glucose to the peripheral tissues (Polakof et al., 2012).

Glycogen bulks in exercised fish (Figure 1 C) increased 11.41% in ID compared with those in HD. However, hepatic glycogen stores in exercised fish under ID were higher than those of every other group, reaching values 28% greater than those of fish kept in LD and in static water. The liver glycogen stores in fish reared under exercise were also increased; however, this increase was more pronounced in ID. This result may be explained by the reduction in the energetic expenditure associated with social antagonistic activities, as also found for rainbow trout (Oncorhynchus mykiss) under low-level sustained exercise (Larsen et al., 2012). In HD, glycogen storage was probably greater due to the decrease of social conflicts (Procarione et al., 1999). Moreover, exercised fish in HD showed cortisol levels lower than those of fish in static water, which is compatible with the increase in liver glycogen. It should be pointed out that high levels of cortisol inhibit liver glycogen stores in fish (Ellis et al., 2012). Therefore, it is possible to presume that the increase of glycogen was a consequence of the cortisol effect, since its multiple physiological actions, including hyperglycemia, are reported to result in peripheral lipolysis and gluconeogenesis from proteolysis, as observed with cortisol implants in common carp (Cyprinus carpio) (Liew et al., 2013).
Protein metabolism can be assessed through simple metabolites such as ammonia, FFA, and FAA. Plasma ammonia concentrations increased with stocking density (Table 1), and significant interactions were observed between the studied factors. Fish reared in HD and in static water showed the highest levels of plasma ammonia (Table 2), whereas exercised fish in HD exhibited 33.3% less ammonia than those kept in static water. In fish reared in LD, the exercise did not affect the plasma levels of ammonia, which were similar to the levels observed in exercised fish subjected to HD (Table 2). Hepatic FFA increased with crowding in all groups, especially in exercised fish, which tended to show the greatest concentrations. However, no interactions were found between exercise and stocking density for the hepatic levels of ammonia and FFA, although an interaction was verified for FAA in the liver. The fish subjected to HD, with or without exercise, showed the highest hepatic FAA levels, whereas those in LD presented lower liver FAA mobilization. In this case, crowding contributed to enhancing amino acid mobilization in the liver; however, in exercised fish, this response was only found in HD. In addition, a distinct relationship was observed between the plasma protein profiles of exercised and non-exercised fish: the plasma protein levels increased gradually from LD to ID with exercise, while the greatest values were found in LD and in static water. Still regarding ammonia, its major source is protein catabolism, that is, the breakdown of macromolecules. This metabolic feature is usually observed under negative nitrogen balance, which is also suggested by the plasma protein profile, and is considered undesirable because ammonia is toxic and has many detrimental consequences. However, basal levels of plasma ammonia were maintained in fish reared under sustained swimming and HD.

When fish are subjected to moderate exercise, a metabolic reorganization occurs and protein is spared. For gilthead sea bream (Sparus aurata), Felip et al. (2013) reported a higher nutrient turnover with exercise and a greater retention of dietary protein, i.e., a higher 15N uptake into white muscle in the post-prandial period, in response to the stimulus of sustained exercise. In the present study, similar hepatic amino acid profiles were obtained for all evaluated fish densities, combined or not with exercise, suggesting that no damage occurred in the liver of B. amazonicus. Furthermore, no changes were observed in hepatic ammonia. These results suggest that the protein catabolism inferred from plasma ammonia is derived from tissues other than the liver and should drive amino acids to hepatic gluconeogenesis, as observed at greater stocking densities.

The levels of plasma triglycerides were affected by fish density: the highest and lowest values were found in fish reared in HD and in LD, respectively, with or without exercise (Table 2). However, these levels were not affected by crowding in exercised fish, although, in non-exercised ones, there was an increase with the fish population. The content of triglycerides also differed between white and red muscle, since it was affected by exercise and crowding only in the former. The greatest contents of triglycerides in white muscle were observed in exercised fish reared in LD and ID; however, those under HD presented a decrease in triglyceride levels.
Triglycerides are important energetic stores for aerobic exercise, as reported for rainbow trout (Rasmussen et al., 2011). These molecules seem to be mobilized as the energetic demand increases, as has been found in fish under stress due to confinement at high densities, changes in dietary regimen, and increased physical activity (McKenzie et al., 2012). This mobilization of triglycerides means a decrease of their concentration through hydrolysis.

Lipid stores in the muscles of B. amazonicus were mobilized as a function of sustained swimming and fish density. In exercised fish, the deposition of triglycerides in white muscle was stimulated in LD and ID, but decreased in HD, suggesting distinct demands for lipids. This may explain why fish under HD presented lower body mass. A similar response was reported for rainbow trout (McKenzie et al., 2012). Despite these results, some situations call for low-fat fish with a better fatty acid profile (Rasmussen et al., 2011). This shows the importance of combining fish density and exercise, considering that the benefits from exercise can be blurred by overcrowding stress.

Triglycerides in red muscle did not change in any experimental condition. Although no interaction was observed between exercise and density for triglyceride levels in red muscle, these values were twofold those found in white muscle in every studied condition. Even though the metabolic rates in red muscle were higher than those in white muscle, the intensity of the experimental conditions was likely not enough to lead to significant changes in B. amazonicus. This may be attributed to the fact that, in this species, energetic stores are mobilized for seasonal changes in feeding (Urbinati et al., 2014). In addition, the metabolic responses imply that lipids are likely a key fuel for B. amazonicus reared under sustained swimming.

Table 1. Comparisons of the physiological and metabolic parameters of Brycon amazonicus subjected to different stocking densities, combined with static water (SW) or sustained swimming (SS), for 70 days.

Stocking density and exercise did not interact regarding the hematological parameters (Table 1). Hemoglobin, hematocrit, and red blood cell counts presented a slight increase in response to exercise at all stocking densities; however, this was just a trend and no significant differences were observed. The hematological profile of B. amazonicus suggests that overcrowding and sustained swimming can be associated when the species is farmed, without apparent adverse consequences.

The HSI and K did not change either with stocking density or exercise; consequently, no significant interaction was observed between these factors and the assessed conditions (Table 1). However, the increase in fish density reduced the HSI, whereas exercised fish showed greater indices.
amazonicus subjected to ID and exercise presented the greatest K indices, suggesting that moderate exercise enhances fish welfare. This index was greater in all fish subjected to exercise, independently of the stocking density, probably because sedentary fish under high densities spend more energy to cope with crowding stress. High stocking densities combined with poor water quality and feeding conditions have also been reported to reduce the K of rainbow trout, causing alterations in fish welfare (Person-Le Ruyet et al., 2008). In the present study, the sharing of the food pellets and the water quality were improved, and aggressive behavior was reduced, sparing energy for anabolic processes. This may explain the increase in K in B. amazonicus. The obtained results show that sustained swimming is a recommended strategy to enhance the growth rates and quality of life of B. amazonicus, particularly in intensive fish culture systems.

Conclusion Juvenile Brycon amazonicus is more resistant to stress when subjected to sustained swimming and high stocking densities than when subjected to static water, showing a decrease in plasma cortisol levels, as well as an increase in plasma glucose, in hepatic glycogen stores, and in muscle triglycerides.

Figure 1. Plasma cortisol (A), glucose (B), and glycogen (C) levels in juvenile Brycon amazonicus subjected to different stocking densities under static water (white bars) or sustained swimming (grey bars). Fish were subjected to three stocking densities for 70 days: LD, low stocking density of 15 fish per tank; ID, intermediary stocking density of 30 fish per tank; and HD, high stocking density of 60 fish per tank. Letters compare stocking densities for the same swimming condition, and numbers compare fish subjected to sustained swimming or to static water at the same stocking density. Values are expressed as mean±standard deviation for n=10, and differences were significant at 5% probability.

Table 1 notes: Parameters were compared by the two-way analysis of variance. TG, triglycerides; FAA, free amino acids; FFA, free fatty acids; Hb, hemoglobin; Ht, hematocrit; RBC, red blood cell count; MCV, mean corpuscular volume; MCH, mean corpuscular hemoglobin; MCHC, mean corpuscular hemoglobin concentration; HSI, hepatosomatic index; and K, condition factor. (2) P-value significant at <0.05. (3) LD, low stocking density of 15 fish per tank; ID, intermediary stocking density of 30 fish per tank; and HD, high stocking density of 60 fish per tank.

Table 2. Metabolic performance of Brycon amazonicus subjected to different stocking densities, combined with static water (SW) or sustained swimming (SS), for 70 days. (1) Means followed by equal letters do not differ significantly between the rearing conditions at each density by Tukey's post-hoc test, at 5% probability, after the two-way analysis of variance. Results are expressed as mean±standard deviation for n=10. TG, triglycerides; FAA, free amino acids; FFA, free fatty acids; HSI, hepatosomatic index; K, condition factor; LD, low density of 15 fish per tank; ID, intermediary density of 30 fish per tank; and HD, high density of 60 fish per tank.
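The condition factor (K) and hepatosomatic index (HSI) referred to in the tables above are conventionally computed from simple body measurements. The article does not spell out its exact formulas, so the sketch below uses the standard textbook definitions (Fulton's K and the usual HSI ratio), with invented example numbers.

```python
# Standard definitions assumed (the article does not state its exact formulas):
# Fulton's condition factor K = 100 * W / L^3 and HSI = 100 * liver mass / body mass.

def condition_factor(body_mass_g: float, total_length_cm: float) -> float:
    """Fulton's condition factor, with mass in grams and length in centimetres."""
    return 100.0 * body_mass_g / total_length_cm ** 3

def hepatosomatic_index(liver_mass_g: float, body_mass_g: float) -> float:
    """Liver mass as a percentage of whole-body mass."""
    return 100.0 * liver_mass_g / body_mass_g

# Invented example numbers, not measurements from the study:
print(condition_factor(body_mass_g=120.0, total_length_cm=20.0))   # 1.5
print(hepatosomatic_index(liver_mass_g=1.8, body_mass_g=120.0))    # 1.5
```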
2018-12-21T23:33:12.800Z
2017-03-20T00:00:00.000
{ "year": 2017, "sha1": "602c8071b4917ecc071d627f7e3003b1c48b8885", "oa_license": "CCBY", "oa_url": "http://www.scielo.br/pdf/pab/v52n1/1678-3921-pab-52-01-00001.pdf", "oa_status": "GOLD", "pdf_src": "Anansi", "pdf_hash": "602c8071b4917ecc071d627f7e3003b1c48b8885", "s2fieldsofstudy": [ "Environmental Science", "Biology" ], "extfieldsofstudy": [ "Chemistry" ] }
235714295
pes2o/s2orc
v3-fos-license
Developmental Hip Dysplasia: An Epidemiological Nationwide Study in Italy from 2001 to 2016 Developmental Dysplasia of the Hip (DDH) includes a broad spectrum of hip abnormalities. DDH requires early diagnosis and treatment; however, no international consensus on screening protocol and treatment is provided in the literature. Epidemiological studies are helpful to understand the national variation of a specific surgical procedure and compare it with that of other countries. Data provided by different countries could allow researchers to provide international guidelines for DDH screening and treatment. Limited data are reported regarding trends of hospitalization for DDH, and no public database is available. The purpose of this study was to estimate annual admissions for DDH in Italian patients from 2001 to 2016, based on the hospitalization reports. Data of this study were collected from the National Hospital Discharge Reports (SDO) reported at the Italian Ministry of Health. Descriptive statistical analyses were performed. From 2001 to 2016, 3103 hospitalizations for DDH were recorded in Italy, with a mean incidence of 2.33 (per 100,000 young inhabitants). Females of the 0–4 years old group represented the majority of patients hospitalized for DDH. Introduction Developmental dysplasia of the hip (DDH) includes a wide range of hip alterations (from simple dysplasia to dislocation). DDH is characterized by pathological modification of the acetabular cup and/or femoral head, with consequent soft tissue abnormalities (hip capsule and ligaments). The femoral head could be within the acetabular cup (located), partially out of the acetabular cup (subluxated) or outside of the acetabular cup (dislocated) [1]. DDH usually develops in utero or during the neonatal period and affects only one side in 63% of patients [1]. The prevalence of DDH varies between countries (from 1% to 7%) [2], and the female sex carries a 2-7 times higher risk [3]. The risk factors include female sex (high levels of estrogen receptors are linked to hyperlaxity) [4]; breech position in utero; firstborn; family history; and environmental factors (lower rates of DDH have been reported in children who are kept wrapped) [5]. A correct physical examination is necessary to reveal DDH. A positive Ortolani maneuver and limited or asymmetric hip abduction are the most frequent signs [6]. Clinicians need to focus on asymmetric thigh or gluteal folds and length discrepancy between the lower limbs [7], but a negative Galeazzi sign does not exclude the disease [8]. DDH in newborns is suspected from clinical examination and investigated by ultrasonography with the Graf classification [9,10]. Rapid diagnosis and treatment are mandatory to avoid complications of DDH (dislocated hip, osteoarthrosis, avascular necrosis of the femoral head and joint stiffness) [11]. The most effective screening protocol for the early detection of DDH is still debated worldwide. Clinical assessment, selective ultrasound or early universal ultrasound screening are adopted as diagnostic methods in several countries [12][13][14]. However, few studies have compared results between countries, making it challenging to draw up universal screening guidelines. National health statistics for DDH are attractive for an international audience, as different screening strategies are reported between countries (type of screening, method of ultrasound, mean age at time of screening and diagnosis and subsequent treatment protocols) [12][13][14][15][16].
Sharing national statistics and correlating them with the individual screening systems and treatment protocols, however, could be helpful to compare outcomes of different screening systems internationally. The objective of this study was to estimate the annual incidence of admissions for DDH in Italian patients from 2001 to 2016, based on the hospitalization reports. In Italy, a selective ultrasound screening method is used to make an early diagnosis of DDH. Reporting the results of a selective ultrasound screening campaign could be helpful to compare the incidence of DDH hospitalization worldwide. The purpose of this study was to report national results of selective ultrasonography screening to compare them with other countries. Materials and Methods Data of this study were collected from the National Hospital Discharge Reports (SDO) reported at the Italian Ministry of Health regarding the years covered by this paper (2001-2016). In Italy, the National Health Service (NHS) provides healthcare to all residents. The regional authorities are responsible for organizing and managing the healthcare services delivered through local structures (both public and private accredited providers). Official data on the services provided to residents are collected by hospitals and local healthcare structures, entered into structured data files, and periodically sent to the Ministry of Health. Therefore, the ICD and "procedure codes" are reliable, and the National Hospital Discharge Reports are validated [17,18]. These data were anonymous and reported the patient's sex, age, days of stay, primary diagnoses, and procedures. Population data from the National Institute for Statistics (ISTAT) for each year were obtained. DDH was defined by the following International Classification of Diseases, Ninth Revision, Clinical Modification (ICD-9-CM) diagnosis codes: 754.30 "Congenital dislocation of hip, unilateral", 754.31 "Congenital dislocation of hip, bilateral", 754.32 "Congenital subluxation of hip, unilateral", 754.33 "Congenital subluxation of hip, bilateral" and 754.35 "Congenital dislocation of one hip with subluxation of other hip". The ICD-9-CM procedure codes were 77.35 "Other Division of Bone, Femur", 79.75 "Closed Reduction of Dislocation of Hip", 79.85 "Open Reduction of Dislocation of Hip" and 83.12 "Adductor Tenotomy Of Hip". Patients aged between 0 and 14 years were defined as "young" (according to ISTAT) [19]. To avoid underestimating the population which may suffer from DDH, the study referred only to the young Italian population. Patients with neurological conditions and consequent hip dysplasia were identified using the secondary diagnosis. Statistics The yearly number of DDH hospitalizations, the percentage of males and females, the average age, the average days of hospitalization, the primary diagnoses and the primary procedures in the whole Italian population were calculated using descriptive statistical analyses. The annual population size (obtained from ISTAT, a statutory electronic national population register) was used to calculate the incidence rates. The incidence was based on the size of the entire population of people ≤ 14 years old in Italy. The Statistical Package for Social Sciences (SPSS) version 26 (IBM Corp, Armonk, NY, USA) was used for this data analysis. Figures were created using Excel (Microsoft Corporation, Redmond, WA, USA).
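A minimal sketch of the case selection and incidence calculation described in the Methods is given below. The ICD-9-CM codes, the 0-14-year age restriction, and the per-100,000 denominator come from the text; the file names and column names of the SDO and ISTAT extracts are assumptions for illustration.

```python
# Illustrative only: the SDO/ISTAT file layouts and column names are assumptions;
# the ICD-9-CM codes and the 0-14-year, per-100,000 definitions follow the Methods.
import pandas as pd

DDH_DIAGNOSES = {"754.30", "754.31", "754.32", "754.33", "754.35"}
DDH_PROCEDURES = {"77.35", "79.75", "79.85", "83.12"}

records = pd.read_csv("sdo_discharges.csv", dtype=str)       # hypothetical SDO extract
records["age"] = records["age"].astype(int)
records["year"] = records["year"].astype(int)

ddh = records[
    records["primary_diagnosis"].isin(DDH_DIAGNOSES)
    & records["primary_procedure"].isin(DDH_PROCEDURES)
    & records["age"].between(0, 14)
]

population = pd.read_csv("istat_population_0_14.csv")        # columns: year, residents_0_14
cases_per_year = ddh.groupby("year").size().rename("cases")

incidence = (
    population.set_index("year")
    .join(cases_per_year)
    .assign(per_100k=lambda d: 1e5 * d["cases"] / d["residents_0_14"])
)
print(incidence[["cases", "per_100k"]])
```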
Demographics During the 16-year study period, 3103 admissions to the hospital for DDH were performed in Italy, representing an incidence of 2.33 procedures for every 100,000 Italian inhabitants 0-14 years old. From 2001 to 2016, the incidence of hospitalizations decreased from 2.49 to 2.16 per 100,000 person-years 0-14 years old (Figure 1). A progressive increase in hospitalizations was recorded from 2003 to 2007. Since 2008, a decrease in hospitalizations has been reported. Over the study period, the highest number of hospitalizations for DDH was found in the 0-4-year age group (Figure 2). In the 0-4 age group, 61.6% of patients underwent "Closed Reduction of Dislocation of Hip", 21.5% "Open Reduction of Dislocation of Hip", and 10.8% "Adductor Tenotomy Of Hip". The remaining patients were coded as "Other Division of Bone, Femur". Females represented the majority of patients undergoing procedures for DDH, both in total and over the years (female 79.3% and male 20.7%) (Figure 3). From 2001 to 2016, the mean age of patients was 1.52 ± 2.96 years. During the entire period, the average age of males was always higher than that of females (Figure 4). Days of Hospitalizations The average length of hospital stay was 9.5 days (range 0-163 days). The trend of the average number of days of hospitalization was decreasing, with a peak in 2007 (Figure 5). Males had, on average, more days of hospitalization than females (females 9.43 days and males 9.76). Patients aged 0 to 4 had more days of hospitalization on average. When differentiating by sex, males had the highest number of days of hospitalization between 0 and 4 years of age, while females did between 5 and 9 years of age. The primary procedures were "Closed Reduction of Dislocation of Hip" (ICD code 79.75; 54.7%), "Open Reduction of Dislocation of Hip" (ICD code 79.85; 21.5%), "Adductor Tenotomy Of Hip" (ICD code 83.12; 13.1%) and "Other Division of Bone, Femur" (ICD code 77.35; 10.6%) (Figure 6). Over the study period, "Closed Reduction of Dislocation of Hip" was prevalent, followed by "Open Reduction of Dislocation of Hip" (Figure 7). The largest number of secondary diagnoses of neurological diseases was recorded in patients between 10 and 14 years of age (Figure 7). The secondary diagnoses found were "Congenital quadriplegia" (ICD code 343.2) and "Myoneural disorders, unspecified" (ICD code 358.9).
The latter was present in only one patient, who was 11 years old (Figure 8). Discussion The objective of this study was to estimate the incidence of hospital admission for DDH in Italian patients from 2001 to 2016. The analysis of SDO records reported a mean incidence of hospital admission for DDH of 2.33 (for every 100,000 inhabitants under 15 years old). A mild decrease in incidence from 2.49 to 2.16 (admissions for every 100,000 inhabitants) was reported over the study period. Females between 0 and 4 years old represented the majority of patients. A mean of 9.5 days of hospitalization was also reported, but it was observed that this value decreased during the years (from 10.18 to 7.46 days). The data reported are in line with other studies [1,5,20]. Early diagnosis and proper treatment are the keys to preventing serious health outcomes for individuals with DDH [21,22]. However, there are no international guidelines concerning the timing, method and type of screening [12,16,23,24]. Therefore, it is mandatory to find an international consensus regarding the screening strategy. In Italy, selective ultrasonography (using the Graf classification [10]) is used to assess the presence of DDH. The purpose of this study was to report national results of selective ultrasonography screening to compare them with other countries. Sharing the national statistics could allow researchers to compare different national screening programs (clinical exams, selective or universal ultrasound). Unfortunately, few studies reported the incidence of DDH hospitalizations worldwide, making it difficult to perform a direct comparison with the present study results [1,12,14-16]. Kiung et al. reported that a clinical exam, performed by an experienced clinician, must be obtained before the ultrasound assessment [25], reporting the efficacy of selective ultrasound screening in 2686 infants. Biedermann and colleagues reviewed the literature, reporting that a universal ultrasound screening campaign represented the most effective method [13]. However, they also discussed the high costs of this campaign. In 2018, the International Interdisciplinary Consensus Committee on DDH Evaluation (ICODE) tried to achieve consensus on the detection and early treatment of DDH and develop a universal standardized screening program. The ICODE highlighted the effectiveness of a universal ultrasound screening campaign compared to other methods [16]. Treiber and colleagues performed a study on 21,676 newborns between 2006 and 2015 [14]. In this study, too, the superiority of universal ultrasound screening was confirmed. However, further high-quality international studies are required to obtain significant results. New technologies and screening campaigns reduced the incidence of delayed diagnosis and complications, with a consequent reduction in surgeries for DDH [26]. Moreover, the highest percentage of procedures recorded within the 10-14 age group was performed in patients with neurological diseases. These data confirm the necessity to understand the incidence of this condition in children with cerebral palsy or other musculoskeletal conditions. Further studies that focus on neurological children are required to reach significant conclusions regarding the trend of DDH procedures in this population. The present study has some limitations. It is based on administrative data from different hospitals and macro-regions. We used the International Classification of Diseases 9 (ICD-9) for all the procedures reported.
Moreover, with the ICD-9 coding used, different codes could be applied to the same surgical procedure. The provided database on DDH reported only cases that required hospitalization; therefore, silent cases of DDH are not captured. As patients are not recorded with a unique ID number, it is not possible to distinguish between repeat procedures in the same patient or bilateral procedures. This heterogeneity of codification could lead to an underestimation of our results. Lastly, even though the hospitals enter the codes, the accuracy of the database is not independently confirmed. Unfortunately, healthcare data (including discharge data) are notoriously inaccurate in many countries. Conclusions The incidence of admissions of young patients for DDH in Italy is 2.33 cases/100,000 inhabitants (from 2001 to 2016). DDH requires early diagnosis and treatment; however, no international consensus on screening protocol and treatment is provided in the literature. Epidemiological studies are helpful to understand the national variation of a specific surgical procedure and to compare it with that of other countries. Data provided by different countries could allow researchers to provide international guidelines for DDH screening and treatment. Data Availability Statement: The data presented in this study are available on request from the corresponding author. The data are not publicly available due to privacy.
2021-07-03T06:17:02.046Z
2021-06-01T00:00:00.000
{ "year": 2021, "sha1": "2bf5f57ddcadeafca65f1931d1234c29a3552e92", "oa_license": "CCBY", "oa_url": "https://www.mdpi.com/1660-4601/18/12/6589/pdf", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "649d21c8025598df08427cb974028234c582bb85", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
57570302
pes2o/s2orc
v3-fos-license
Working Memory Capacity: Limits on the Bandwidth of Cognition Why can your brain store a lifetime of experiences but process only a few thoughts at once? In this article we discuss "cognitive capacity" (the number of items that can be held "in mind" simultaneously) and suggest that the limit is inherent to processing based on oscillatory brain rhythms, or "brain waves," which may regulate neural communication. Neurons that "hum" together temporarily "wire" together, allowing the brain to form and re-form networks on the fly, which may explain a hallmark of intelligence and cognition: mental flexibility. But this comes at a cost; only a small number of thoughts can fit into each wave. This explains why you should never talk on a mobile phone when driving. TIMOTHY J. BUSCHMAN is an Assistant Professor in the Department of Psychology at Princeton University. (*See endnotes for complete contributor biographies.) Working memory holds the contents of our thoughts. It acts as a mental sketchpad, providing a surface on which we can place transitory information to hold it "in mind." We can then "think" by manipulating this information, such as by combining it with other items or transforming it into something new. For example, working memory allows us to remember phone numbers, do mental arithmetic, and plan errands. Given its fundamental role in thought, it is surprising that working memory has such a severely limited capacity: we can only hold a few thoughts in our consciousness at once. In other words, the surface area of our mental sketchpad is quite small. This limitation is obvious whenever we try to multitask, such as when we attempt to talk on the phone while writing an email, and it is why using our mobile phones while driving increases accident risk, even if we are using a hands-free set. This stands in contrast to other mental abilities that are not limited, such as long-term memory storage. We can store (seemingly) a lifetime of experiences, but, for some reason, we can only consciously express these thoughts a few at a time. This limited capacity may be fundamentally responsible for the cognitive architecture of our brains: researchers believe it to be the reason we have evolved the ability to focus on one thing at a time (to "attend" to something). Despite being well studied, no one has yet confirmed why working memory is limited. In this essay, we will review some of what is known about working memory capacity and offer our theory of why consciousness may have this limit. Though we may feel that we are able to perceive most of the world around us, this sensation is, in fact, an illusion constructed by our brains. In reality, we sense a very small part of the world at any point in time; we "sip" at the outside world through a straw.
Our brain takes these small bits of data and pieces them together to present an impression of a coherent and holistic scene. Examples of this limitation are abundant: consider the puzzles in which you must identify ten differences between two similar pictures. The brain requires a surprisingly long time to accomplish this, despite the two pictures being side by side and the changes often being obvious, such as the total disappearance of a building or tree. This effect is often referred to as change blindness and is a regular occurrence of natural vision. (Another example of change blindness is the large num ber of editing mistakes we fail to notice in movies.) The limited bandwidth of consciousness is also apparent in studies of working mem ory capacity. In these experiments, subjects briefly view a screen with a variable number of objects (such as colored squares) and then, after a delay of a few sec onds in which they must hold the objects in memory, they are shown another screen of objects, one of which may be different from what was previously shown. 1 Subjects are then asked whether something has changed, and if so, to identify how it has changed (whether it used to be a different color or shape). When the number of objects on-screen increases beyond a few items, subjects begin to make errors (by missing changes), indicating that their working memory capacity has been exceeded. Experiments such as this have revealed that the average adult human can only process and retain four or ½ve objects at a time (similar to the average monkey, as shown below). 2 The exact capacity of the brain varies by individual; some can remember only one or two items and others can remember up to seven. 3 Interestingly, an individual's capacity is highly correlated with measures of fluid intelligence, suggesting that individual capacity limits may be a fundamental restriction on high-level cognition. 4 This seems intuitive: if you can hold more information in mind at the same time, then more ideas can be combined at once into sophisticated thought. But what is the nature of the capacity limitation? Do we simply miss new items once we have ½lled our thoughts with four or ½ve? Or do we always try to take in as much information as possible, eventually spreading ourselves too thin when there are more than four or ½ve objects present? In fact, both may be true. Models of a strict limit on the number of items you can hold in mind posit that this is because working memory has a limited number of discrete "slots," each of which can independently hold information. And once you ½ll those slots, you can no longer store any new information. In contrast, other models predict that our lim ited capacity is due to our spreading our selves too thin. They suggest that working memory is a flexible resource, a neural pool that can be subdivided among objects. You do not stop storing new information after you reach a ½xed capacity as in the slot model; rather, as new information is received, the resource pool is continually divided until the information is spread so thin that it can no longer be ac -curately recalled (and therefore cannot support behavior). Much evidence has been marshaled on behalf of both models, primarily from studies of the patterns of errors humans make on tests of cognitive capacity. Recently, we examined the neurophysiological mechanisms underlying capacity limits in monkeys. We found an intriguing possibility: both the slot and flexible-resource models are correct, albeit for different reasons. 
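A toy contrast of the predictions of the two models discussed above may help fix intuitions before turning to the animal data: a strict slot model stores a fixed number of items perfectly and the rest not at all, whereas a flexible-resource model encodes every item but with less precision as the set grows. All parameters below are invented for illustration; this is not the analysis reported in the studies cited here.

```python
# Toy contrast of the slot and flexible-resource models; parameters are invented.
import numpy as np

rng = np.random.default_rng(0)

def p_correct_slot(set_size: int, k_slots: int = 4) -> float:
    """Strict slot model: items beyond k_slots carry no information at all."""
    return min(1.0, k_slots / set_size)

def p_correct_resource(set_size: int, pool: float = 8.0,
                       tolerance: float = 1.0, n_sim: int = 20_000) -> float:
    """Flexible-resource model: a fixed precision pool is split among items,
    so every item is encoded, but each memory gets noisier as the set grows."""
    precision = pool / set_size                       # per-item precision
    noise = rng.normal(0.0, 1.0 / np.sqrt(precision), n_sim)
    return float(np.mean(np.abs(noise) < tolerance))  # recalled within tolerance

for n in (1, 2, 4, 6, 8):
    print(n, round(p_correct_slot(n), 2), round(p_correct_resource(n), 2))
```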
The advantages of animal work include tighter control over gaze as well as more precise measurements of neural activity than is possible with human subjects. These advantages allowed us to dig deeper into the phenomenon and led to a surprising discovery. The monkeys, like humans, had an overall capacity of four objects. But the monkeys' overall capacity was actually composed of two separate capacities of two objects each in the right and left visual hemi½elds (to the right and left of the center of gaze) that were independent of each other. The processing of objects on the right half of gaze is unaffected by objects in the left half of gaze, regardless of how many ob jects there were on the left (and vice versa). But adding even one object on the same side of gaze as another object resulted in a decrement in performance. It was as if the monkeys had two separate brains, each one assigned to the right or left half of vi sion. This right/left independence was sur prising, though research focusing on a dif ferent type of task might have predicted it: humans have independent capacities to track moving objects in the right and left visual hemi½elds. 5 This phenomenon is likely related to the fact that the right and left visual hemi -½elds are respectively processed by the left and right cerebral hemispheres. This suggests that the two cerebral hemispheres can operate somewhat independently, at least for the processing required for visual infor -mation to reach awareness. Indeed, the apparent split between the two hemispheres recalls some of the initial observations of hu mans who had their cerebral hemispheres split to control epilepsy. With out care ful testing, these subjects usually appeared normal. Thus, there may be something of a split even in the intact brain: the two visual hemi½elds/cerebral hemispheres act like two independent slots for processing and holding visual information. At ½rst blush, this seems to support the slot model, with slots for both the left and right ½elds of vision. But we also found evidence to support the flexible-resource model within each visual hemi ½eld: on each side of visual space, information was shared and spread among ob jects. To show this, we looked more closely at how neurons encoded the contents of working memory. A pure slot model predicts that encoding an object is all-or-none: if the brain successfully remembers an object, there should be an equal amount of information about it regardless of how many other objects are in the array. But we found that even when a given object was successfully encoded and retained, neural information about that speci½c object was reduced when another object was added to the same visual hemi½eld, as if a limited amount of neural information was spread between the objects. The slot mod el also predicts that if a subject misses an object, no information about it should be recorded in the brain; either an object ½lls a slot, and is remembered, or not. By contrast, the flexible-resource model suggests that even when a subject misses an object, some information about the object could have been recorded in the brain, just not enough to support conscious perception and memory. This latter prediction is exactly what we found: even when a subject did not consciously perceive the object, the brain still recorded a signi½cant, albeit reduced, amount of information. 
In sum, the two cerebral hemispheres (visual hemi½elds) act like discrete resource slots; within them, neural information is divided among objects in a graded fashion. A number of recent studies in humans support such a hybrid model, ½nding that there are multiple slots that can store graded information about objects. 6 Thus, capacity limits may reflect interplay or blend between different types of underlying constraints on neural processing. On the one hand, neural processing on the right and left halves of visual space can be slot-like, akin to buckets that can hold a maximum volume of water (information). But, on the other hand, within each cerebral hemisphere there is no limit to the num ber of objects (thoughts) in each buck et. The limitation is inherent to the in formation, not the number of objects: if there are too many items in the bucket, only a few can get wet enough (have enough information devoted to them) to reach consciousness. The rest may get a little damp, but it is not enough to act upon. Whether or not the two cerebral hemispheres have independent capacities for information other than vision remains to be determined. It may prove only to be a visual phenomenon, due to the fact that the right and left of gaze are primarily processed in the left and right cerebral hemispheres, respectively. But even if this independence is limited to vision, it has clear practical implications. For example, taking into account the separate capacities of the right and left of gaze can help in the design of heads-up displays, such as on au tomobile windshields, maximizing the amount of information that drivers can glean in each glance, or providing information without overloading their capacity to fully process important visual scenes, such as the road in front of them. So far we have seen that despite our impression that we can store and perceive a signi½cant amount of visual information at once, this is not the case. We can only simultaneously think about a very limited amount of information. Our brains knit together these sips of information to give us the illusion that we have a much larger functional bandwidth. (Again, this is some thing to keep in mind the next time you are driving and have an urge to reach for your mobile phone.) But this still does not explain why there is a capacity limit for conscious thought. What about the brain's functioning dictates such a small bandwidth? Why can't you hold one thousand thoughts in mind simultaneously, or even just one hundred? There is mounting evidence that the brain uses oscillatory rhythms (brain waves) for communication, especially for processes underlying high-level cognition. The theory is that the brain forms networks by synchronizing oscillations (rhythmic or repetitive neural activity) of the neurons that make up that network. Neurons that "hum" together form networks, and because only so much information can ½t into each oscillatory cycle, any communication system based on an oscillating signal will naturally have a limitation on bandwidth. But before delving into the content limits of an oscillatory cycle, what is the evidence supporting a role for oscillatory activity in brain function to begin with? It has long been known that the brain has large populations of neurons that oscillate in synchrony. These so-called brain waves occur across a wide range of frequencies from very low (less than once a second, or < 1 Hz) to very high (almost once every 15 ms, or > 60 Hz). Brain waves are not random: they vary with mental state. 
For example, when you are relaxed, your brain tends to show lower frequency waves; but if you suddenly focus on a task, brain regions that are needed to perform that task begin to produce higher frequency waves. Despite the evidence that brain waves are important for behavior, their exact role in brain function has long been a mystery. Beginning with the pioneering work of physicist and neurobiologist Christoph von der Malsburg, neurophysiologist Wolf Singer, and their colleagues, there has been increasing awareness that synchronizing the oscillations between neurons may be critical in forming functional networks. Synchronized oscillations are useful for increasing the impact of neural impulses ("spikes," or sharp changes in voltage that neurons use when they signal one another). Spikes from two neurons that arrive simultaneously at a third neuron downstream have a greater impact than if the impulses arrived at different times. 7 Given this, it is easy to imagine how such a mechanism could be useful to focus mental effort on particular representations (when we pay attention). After all, if synchronizing the rhythms of neurons increases the impact of their spikes, then one way to boost the neural signals associated with an attended object would be to increase synchrony between neurons representing it. There is growing evidence that this is exactly how attention works. Increased atten tional focus increases oscillatory synchrony between the visual cortical neurons that represent the attended stimulus. For example, visual cortical neurons that process a stimulus under attentional focus show increased synchronized gamma band (30-90 Hz) oscillations. 8 This higher frequency (> 30 Hz) synchrony may result from interactions within local cortical circuits, 9 the same interactions that underlie the computations of stimulus features. 10 By contrast, sensory cortical neurons representing an unattended stimulus show increased low frequency (< 17 Hz) synchronization. A variety of evidence suggests that low frequencies may help deselect or inhibit the corresponding ensembles (populations of neurons that together underlie a particular thought, perception, memory, or neural computation), perhaps by disrupting the higher frequency. 11 On a broader scale, synchrony between regions may also regulate communication across brain areas. 12 In short, if two different networks in different brain areas oscillate in phase (a particular moment with a neural oscillation, such as a speci½c "piece" of a brain wave) they are more likely to influence one another because both are in an excited and receptive state at the same time. Conversely, if they are out of phase, information will be transmitted poorly. This is supported by observations that interareal oscillatory coherence within and between "cognitive" regions and sensory areas has been found to increase with attention. 13 In other words, if two brain areas are involved in a given cognitive function (such as visual attention), they increase their synchrony during that function. We have discussed how synchronized rhythms can change the flow of information between neurons and between brain regions. Recent work has begun to suggest that synchrony may not only control communication between networks, it may actually form the networks themselves. The classic model suggests that if neurons are anatomically connected, then they are part of the same network; but it may be that anat omy dictates which neurons are capable of forming networks. 
The actual formation of the networks may instead come through synchrony (Figure 1). In other words, anatomy is like a system of roads; synchrony is the traf½c. Importantly, dynamic formation of ensembles by oscillatory synchrony may underlie cognitive flex ibility: our ability to rapidly change thoughts and behavior from one moment to the next. Consider, for example, what is widely assumed to be the basic element of a thought: a group of neurons that are ac -Working Memory Capacity: Limits on the Bandwidth of Cognition tive together. Such an ensemble can form a perception, memory, or idea. But how does the brain form a particular neural ensemble for a speci½c thought? This is not straightforward; there are billions of neurons linked to each other through trillions of connections. This is further complicated because neurons have multiple func tions, particularly at "higher," more cognitive levels of the brain. 14 Thus, many neurons inhabit many different ensembles and, conversely, ensembles for different thoughts share some of the same neurons. If anatomy were all there were to forming ensembles, then attempting to activate one ensemble would result in activity that extended to other ensembles, and subsequently a jumble of thoughts. We propose that the role of synchrony is to dynamically "carve" an ensemble from a greater heterogeneous population of neurons 15 by reinforcing mutual activation between the neurons that form the ensemble. 16 Because ensemble membership would depend on which neurons are oscillating in synchrony at a given mo ment, ensembles could flexibly form, break-apart, and re-form without changing their anatomical structure. In other words, for mation of ensem bles by rhythmic synchrony endows Earl K. Miller & Timothy J. Buschman thought with flexibility, a hallmark of higher cognition. Humans can quickly adapt and change their thoughts and behaviors in order to tailor them to the constantly changing demands of our complex world. Thus, networks have to be as sembled, deconstructed, and recon½gured from moment to moment as our foci, plans, and goals change. This is not meant to downplay the role of neural plas ticity in changing the weights of connections be tween neu rons and in forming new ana tomical connections; it is always important to build and maintain roads. Through a study in which we trained mon keys to switch back and forth between two tasks, we recently found evidence that synchronized oscillations can provide the substrate for dynamic formation of ensembles. As the monkeys switched tasks, different sets of neurons in the prefrontal cortex showed synchronous oscillationsone for each task-in the beta band (about 25 Hz) synchrony, as if the neurons were switching from one network to the oth er. 17 Importantly, many of the neurons were multifunctional, synchronizing their activity to one ensemble or the other depending on the task at hand. This supports the idea that synchrony can dynamically form (and disassemble) ensembles from anatomical networks of neurons that participate in multiple ensembles. Interestingly, one of the two tasks was much easier for the monkeys to perform; and when the monkeys prepared to engage in the harder task, the neurons that formed the network for the easier task showed synchrony in a low-frequency alpha range (about 10 Hz). Alpha waves have been associated with suppression or inattention to a stimulus 18 and are therefore thought to inhibit irrelevant processes. 
19 In our experiment, alpha oscillation inhibition seemed to be acting to quiet the dominant network (the one needed for the easier task), which would have interfered with the network needed for the current, more challenging task. This suggests that synchronous os cillations helped control the formation of ensembles. 20 Higher (beta) frequencies de ½ned the two task networks while lower (alpha) frequencies were used to somehow disrupt formation of the stronger network (and thus prevent an erroneous reflexive re action) when the weaker network had to be used. If synchronized rhythms form neural ensembles, it follows to wonder how it is that the brain can form more than one ensemble at a time. After all, would not two rhythmically de½ned ensembles inadvertently synchronize to each other, merging together and distorting the information they represent? In response, some researchers have proposed that the brain forms more than one ensemble at a time by oscillating different ensembles slightly out of phase with one another. According to this theory, neurons that are part of a speci½c ensemble do not only synchronize their activity, but they do so by aligning their spikes to speci½c phases of neuronal population oscillations. 21 By separating thoughts into different phases of population oscillations, our brain can hold multiple thoughts in mind simultaneously ( Figure 2). 22 In other words, the brain prevents ensembles from interfering with one another by juggling them, rhythmically activating each in turn (out of phase from each other). We recently reported evidence for this multiplexing when information is held in mind. 23 When monkeys hold multiple objects in working memory, prefrontal neurons encode information about each object at different phases of an ongoing (~ 32 Hz) oscillation. Signi½cantly, there were bumps of information at different phases, yet in all phases the neurons still carried at least some information, supporting a hybrid slot/flexible-resource model. The bumps of information are some what slot-like in the sense that they are speci½c to certain phases of the oscillation; but they are not strict slots because the bump is a relative increase over information in other phases. The effect was not all-or-none, information-here-but-notthere, as is predicted by a strict slot model. This ½nally leads us to an explanation for the severe limitation of conscious thought. Phase-based coding has an inherent capacity limitation. You have to ½t all the in formation needed for conscious thought within an oscillatory cycle. Consciousness may thus be a mental juggling act, and only a few balls can be juggled at once. Crucial tests of this hypothesis still need to be conducted, but these ½ndings and theories collectively suggest that bringing thoughts to consciousness may depend on generation of oscillatory rhythms and the precise temporal relationships between them and the spiking of individual neurons.
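A back-of-envelope version of the phase-coding capacity argument can be written down directly. The ~32 Hz carrier oscillation is taken from the text; treating each item as requiring one faster (~15 ms) subcycle is our illustrative assumption, so the resulting number should be read as an order-of-magnitude sketch rather than the authors' calculation.

```python
# The ~32 Hz carrier comes from the text above; the ~15 ms per-item "subcycle"
# (roughly one cycle of a >60 Hz gamma rhythm) is an illustrative assumption.
carrier_hz = 32.0            # oscillation onto which items are multiplexed
item_slot_ms = 15.0          # assumed time needed to represent one item

cycle_ms = 1000.0 / carrier_hz
items_per_cycle = int(cycle_ms // item_slot_ms)
print(f"{cycle_ms:.1f} ms per cycle -> about {items_per_cycle} items per cycle")
# ~31 ms / 15 ms -> about 2 items per cycle, the same order of magnitude as the
# per-hemifield behavioral capacity discussed earlier (a coincidence-level check,
# not the authors' derivation).
```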
2016-09-21T08:51:56.807Z
2015-01-01T00:00:00.000
{ "year": 2015, "sha1": "411bf8189e72e4d83c855b97bdf085417b84dd77", "oa_license": "CCBYNC", "oa_url": "https://dspace.mit.edu/bitstream/1721.1/96358/1/Miller-2015-Working%20Memory%20Capac.pdf", "oa_status": "GREEN", "pdf_src": "Crawler", "pdf_hash": "6c2e2e80b3a75b0986629f65f1a8c4db4108a16e", "s2fieldsofstudy": [ "Psychology" ], "extfieldsofstudy": [ "Computer Science" ] }
259794121
pes2o/s2orc
v3-fos-license
Management of iron deficiency anaemia in children Sri Lanka Journal of Child Health, 2023: 52(2): 204-208 Introduction Iron is an essential cofactor of haemoglobin. Human adult haemoglobin is composed of two α- and two β-globin protein subunits, each of which is tightly associated with a non-protein haem group 1 . The haem group consists of an iron ion held in a porphyrin ring 2 . This iron ion, primarily found in its ferrous (Fe2+) state, is the site for oxygen binding and, therefore, is essential for oxygen delivery to tissues. Iron deficiency impairs the synthesis of haemoglobin and erythropoiesis in the bone marrow and leads to iron deficiency anaemia (IDA). Epidemiology Iron deficiency is the commonest form of anaemia in children throughout the world. A higher prevalence is reported in South Asia and Central, Western, Eastern, and sub-Saharan Africa 3 . It is particularly common among toddlers and preschool children aged between one and five years. The prevalence of IDA among Sri Lankan preschool children is estimated at 7.3% 4 . Risk factors and aetiology Nutritional deficiency is the most common cause of iron deficiency in children 5 . Lack of availability and inadequate intake of iron-rich food, poor weaning practices, excessive milk consumption, and consumption of iron absorption inhibitors along with meals (e.g., phytates, tannates, calcium) are common risk factors for nutritional iron deficiency. Malabsorption and chronic blood loss due to hookworm infections and gastrointestinal bleeding are other causes of IDA in children 6 . Due to the maternal transfer of adequate quantities of iron in utero, iron deficiency is rare before six months in term infants 7 . However, prematurity, placental abruption, fetal-maternal haemorrhage and twin-twin transfusions cause IDA in infants below six months. Clinical features IDA is clinically asymptomatic and detected incidentally in most children 8 . Clinical features depend on the severity of the iron deficiency and anaemia 9 . Some of the features are due to the deficiency of iron, while others are caused by the associated anaemia (Table 1) 6 . Conjunctival and skin pallor are evident when the haemoglobin is below 8-9 g/dL. When the haemoglobin is very low, children with IDA could develop features of heart failure like tachycardia, cardiomegaly, gallop rhythm, tender hepatomegaly and fine basal crepitations. Some studies also report that children with iron deficiency develop cognitive impairment and motor development delay even without anaemia. However, the causal relationships between these neuro-cognitive symptoms and non-anaemic iron deficiency are not conclusive. Laboratory findings Full blood count in IDA shows low haemoglobin and microcytosis [low mean corpuscular volume (MCV)]. The mean corpuscular haemoglobin (MCH) and mean corpuscular haemoglobin concentration (MCHC) are low, too 10 . The red blood cell (RBC) distribution width is elevated. Reactive thrombocytosis is another haematological feature of IDA. The blood picture in IDA shows hypochromic microcytic red blood cells, anisopoikilocytosis and pencil and teardrop cells. However, the blood picture does not help to differentiate IDA from other causes of microcytic anaemia; therefore, it should not be routinely performed. Diagnosis The diagnosis of iron deficiency is confirmed by serum ferritin or serum iron studies. Serum ferritin is the most commonly performed test due to its low cost and wider availability.
However, as serum ferritin is an acute phase reactant, it is elevated in many infective and inflammatory conditions. Therefore, it should be done when the child is free from acute inflammation. Serum ferritin <15 µg/L is widely accepted as the cut-off to diagnose IDA 11 . However, some studies suggest that ferritin <30 µg/L should be considered as iron deficiency 12 . Therefore, in routine clinical practice, it is reasonable to commence a therapeutic trial of iron in those with serum ferritin between 15 and 29 µg/L. Transferrin saturation is another helpful investigation to confirm iron deficiency in children 12 . Although it is customary to order a full iron profile (serum iron, total iron binding capacity and transferrin saturation), the same information on iron deficiency can be gathered from the transferrin saturation alone. Therefore, it is recommended to perform only that. In IDA, transferrin saturation is <16% (normal >30%). Serum iron is low, and the iron binding capacity is high in IDA. If the facilities for serum ferritin and iron studies are unavailable, giving a therapeutic trial of iron as a diagnostic tool for IDA is recommended. This is especially appropriate for children aged between six months and two years. During a therapeutic trial, children are given the treatment dose of iron for one month and evaluated for the response by demonstrating a rise in haemoglobin of at least 1 g/dL. Other investigations like soluble transferrin receptor levels (high in IDA), zinc protoporphyrin (high in IDA) and serum hepcidin (low in IDA) are used only in research settings. They are not widely available or optimised for clinical use. Differential diagnosis The differential diagnoses of microcytic anaemia include the thalassaemia trait, sideroblastic anaemia, lead poisoning and copper deficiency, of which the thalassaemia trait is the most important health problem in Sri Lanka 13,14 . The prevalence of the β-thalassaemia trait and the α-thalassaemia trait in Sri Lanka is reported as 2-3% and 8%, respectively 15,16 . Therefore, many children with asymptomatic microcytic anaemia could have thalassaemia traits. Similarly, IDA is known to co-exist with the thalassaemia trait 17 . Also, the National Thalassaemia Prevention Programme recommends screening for the β-thalassaemia trait in all individuals with low MCV to avoid births of children with thalassaemia 18,19 . Therefore, screening for thalassaemia by performing haemoglobin high-performance liquid chromatography (HPLC) or capillary electrophoresis (CE) is recommended for all children with microcytic anaemia with or without iron deficiency. The diagnosis of the β-thalassaemia trait is confirmed if the patient has haemoglobin A2 >3.4% 20 . The α-thalassaemia trait, conversely, cannot be diagnosed by haemoglobin HPLC or CE. It can only be diagnosed by genetic testing done in specialised laboratories 21 . If facilities are available, screening for the common α-thalassaemia mutations (-α3.7, -α4.2, --MED, --SEA, --THAI, --FIL and -(α)20.5) should be performed in children with persistent microcytic anaemia in whom IDA and the β-thalassaemia trait have been excluded. Treatment Oral iron is the mainstay of the treatment of IDA. Children should be prescribed 6 mg/kg of elemental iron daily as a single daily dose or two divided doses, preferably before meals. The commercially available iron preparations include ferrous sulfate, ferrous gluconate, ferrous fumarate, ferric citrate, ferric maltol and sucrosomial® iron 22 .
The amount of elemental iron available in each preparation differs according to the manufacturer. Therefore, the prescriber should be aware of the amount of elemental iron in each iron preparation. The response to oral iron is very rapid, and therefore it can be used effectively even in children with severe IDA and very low haemoglobin levels. Routine prescription of folic acid, vitamin C and antihelminthic medication has not been shown to provide an added advantage in IDA; therefore, they should not be practised 23 . Oral iron's frequently reported side effects are constipation, nausea, dyspepsia, and vomiting 24 . Parenteral iron is indicated in children with IDA with intolerable side effects to oral iron and gastrointestinal pathologies that reduce oral iron absorption. Parenteral iron formulations available for clinical use include iron sucrose, ferric gluconate, low molecular weight iron dextran, ferric carboxymaltose, and iron isomaltoside 25 . Parenteral iron is associated with serious adverse effects like hypersensitive reactions and anaphylaxis 26 . Blood transfusions are very rarely used in IDA in children. It is indicated in patients with haemodynamic instability due to anaemia and ongoing active infection. Iron therapy should be started subsequently. Dietary iron supplementation is an essential component in IDA management. Children with IDA should be encouraged to consume food rich in haem iron with higher bioavailability, such as meat, fish and egg yolk. The absorption of non-haem iron in green leaves and pulses is increased by consuming food rich in vitamin C along with meals. Children with IDA should not consume phytates (grains and seeds) and tannates (tea and coffee) that decrease iron absorption along with meals. Follow up Response to treatment of IDA is usually assessed by repeating the haemoglobin after one month. A rise in the haemoglobin of 1-2g/dl indicates an adequate response to iron 27 . Iron treatment should be continued for three months after haemoglobin and RBC indices have normalised in patients with good responses. This is to ensure the replenishment of iron stores. If the response to iron treatment is poor, several possibilities should be explored. After confirming that the dose and compliance are adequate, patients should be re-evaluated for non-dietary causes of iron deficiency; for example, malabsorption and chronic blood loss. Genetic diseases causing iron refractory IDA should also be considered 28 . Prevention Prevention of IDA is a significant public health issue. Maternal iron supplementation during pregnancy and delayed cord clamping at delivery are proven interventions to prevent IDA during infancy. In older children, consuming iron-rich food to fulfil the recommended dietary allowance (1mg/kg/day in infancy and 7-10mg/day during childhood) is universally accepted to prevent iron deficiency. Continuous or intermittent iron supplementation is another method advocated. The WHO recommends iron supplementation only in countries where the prevalence of anaemia is >40% 29 .
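The ferritin cut-offs and the oral dosing rule described above translate directly into simple arithmetic. In the sketch below, the 15 and 30 µg/L thresholds and the 6 mg/kg/day elemental-iron dose come from the text, whereas the elemental-iron fraction assumed for ferrous sulfate (~20%, typical of the heptahydrate salt) is an assumption that should be checked against the specific preparation prescribed; the sketch is illustrative and not a prescribing tool.

```python
# Illustrative sketch, not a prescribing tool. Ferritin cut-offs and the
# 6 mg/kg/day elemental-iron dose follow the text; the ~20% elemental-iron
# content assumed for ferrous sulfate (heptahydrate) is an assumption.

def interpret_ferritin(ferritin_ug_per_l: float) -> str:
    if ferritin_ug_per_l < 15:
        return "iron deficiency"
    if ferritin_ug_per_l < 30:
        return "possible deficiency - consider a one-month therapeutic trial of iron"
    return "iron deficiency unlikely (provided there is no acute inflammation)"

def daily_elemental_iron_mg(weight_kg: float, dose_mg_per_kg: float = 6.0) -> float:
    return dose_mg_per_kg * weight_kg

weight_kg = 12.0                                   # illustrative toddler weight
elemental_mg = daily_elemental_iron_mg(weight_kg)  # 72 mg elemental iron per day
ferrous_sulfate_mg = elemental_mg / 0.20           # assumed 20% elemental iron content
print(interpret_ferritin(22.0), elemental_mg, round(ferrous_sulfate_mg, 1))
```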
2023-07-12T16:57:01.096Z
2023-06-05T00:00:00.000
{ "year": 2023, "sha1": "a6f72fd3c05924d7477efd42274f7f03bb9f7472", "oa_license": "CCBY", "oa_url": "https://storage.googleapis.com/jnl-sljo-j-sljch-files/journals/1/articles/10525/646726d5b5474.pdf", "oa_status": "GOLD", "pdf_src": "ScienceParsePlus", "pdf_hash": "49917f9d75eb14f72ca98c434461aa00efe954e8", "s2fieldsofstudy": [ "Medicine", "Economics" ], "extfieldsofstudy": [] }
229015572
pes2o/s2orc
v3-fos-license
Change in the Structure of the Polymer Polyacetylene When Irradiated by Low-Energy X-Rays, as Observed by TEM Plastics, or synthetic plastics, are products that are not naturally available but are made by humans. They have a long decomposition time and can last for hundreds or even thousands of years (plastic bottles, straws and bags take from 10 to 1,000 years to decompose). Plastic waste consists of plastic products that are discharged into the environment after use, such as plastic bags, bottles, straws and other synthetic plastics. The polymer studied here is polyacetylene, a plastic film, which is irradiated by low-energy X-rays at different doses in order to analyze the resulting structural change. The changed structure leads to a change in the polymer's decomposition time, which can help reduce environmental pollution. Introduction Plastic is commonly used in society today, but its drawback is its long decomposition time, which causes many negative environmental impacts (a degraded urban landscape, clogged sewers, etc.). Currently, the treatment of plastic waste is a matter of great concern around the world, and the challenge is to find a method that can change the structure of the plastic and thereby change its decomposition time [1][2][3][4]. Plastic waste is rising to an alarming level and the harm it causes to the environment is considerable: plastic waste is difficult to decompose in the natural environment, and each type of plastic decomposes over a very long period, from tens to sometimes thousands of years [5][6][7]. Polyacetylene is a polymer with the formula (C2H2)n. It consists of a long chain of carbon atoms with alternating single and double bonds, each carbon carrying one hydrogen atom. Worldwide, the physical and chemical properties of irradiated polymers began to be studied in the early 1960s, and many scientists have used interferometers and related tools to study the effects of gamma irradiation on the optical properties of polymers [7][8][9][10]. Low-energy X-rays are a type of radiation commonly used in laboratories, and their application to the irradiation of polyacetylene is a new problem. Because polyacetylene absorbs low-energy X-rays, its structure will change, leading to a change in the decomposition time. A reduced decomposition time helps the plastic decompose faster in nature, which benefits environmental protection and partly addresses the problem of plastic pollution. Material and method The samples used in the study are polyacetylene films from the laboratory, cut into rectangular sheets; each sheet is 0.02 mm thick, and 10 sheets are stacked for each irradiation (total thickness ~0.2 mm). The polyacetylene film samples are irradiated by the X-ray generator MBR-1618R-BE (Hitachi). Figure 1. Polyacetylene structure. The X-ray generator MBR-1618R-BE (Hitachi) is used in fields that apply radiation beams, such as materials research, food preservation, killing microorganisms, and inducing gene mutations. The generator operates in the voltage range of 35-160 kV with a current of about 1-30 mA [1][2][3][4][5]. The polyacetylene samples were irradiated with different doses: for the different samples, we changed the distance from the X-ray source to the sample and then changed the irradiation time. The samples are stored away from direct sunlight, at room temperature.
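The two experimental knobs described above, source-to-sample distance and exposure time, combine into a delivered dose roughly as follows. Only the inverse-square distance scaling and the dose = rate × time logic are generic; the reference dose rate and distance in the sketch are placeholders, since the paper does not report the generator's calibrated output.

```python
# Placeholder numbers: the generator's calibrated dose rate is not reported in the
# paper, so doses here are relative, not absolute. Only the inverse-square scaling
# with distance and the dose = rate x time logic are generic.

def relative_dose(distance_cm: float, time_min: float,
                  d_ref_cm: float = 30.0, rate_ref: float = 1.0) -> float:
    """Dose relative to one minute at the reference distance (point-source approximation)."""
    rate = rate_ref * (d_ref_cm / distance_cm) ** 2   # inverse-square fall-off
    return rate * time_min

print(relative_dose(30, 10))   # 10.0  baseline geometry, 10 min
print(relative_dose(15, 10))   # 40.0  halving the distance gives 4x the dose rate
print(relative_dose(30, 20))   # 20.0  doubling the time doubles the dose
```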
After the irradiation is completed, the polymer samples are broken down to the nanoscale and imaged with the TEM machine to reveal any changes in the structure. Results and discussion When X-rays enter the material, the intensity of the X-ray beam is reduced. The graph shows the dependence of the beam intensity on the thickness of the polyacetylene sheet. When the thickness of the polyacetylene sheet is larger, the intensity of the emerging beam decreases, but the level of reduction is very small even when the thickness is increased significantly. This demonstrates that only a small part of the X-ray energy is absorbed by polyacetylene, while most of it is transmitted. In order for the polyacetylene to absorb a large amount of energy, we must increase the irradiation time. The polymer samples, after being broken down to the nanoscale, are imaged with the TEM machine. We have 4 samples imaged by TEM, of which one has not been irradiated and 3 have been irradiated. From the TEM images we can see the structural change of the polymers. After irradiation, the polyacetylene samples are broken down to the nanoscale and imaged by TEM. In Figure 5, we can see that sample 1 after irradiation differs greatly from the original, unirradiated sample. The polymer structures are clustered together, the original structures are changed, and more uneven, rough lines like peaks appear on the sample. In Figure 6, there are differences: the black areas are thick polymer bands, while the brighter areas are thin polymer bands. This also proves that when irradiated, the structure of the polymer changes; there are places where more molecules are concentrated and places where there are fewer molecules. In Figure 7, similarly, the structure of the polymer changed quite a lot after irradiation: in place of thin bands, thick bands and peaks appear. Conclusions In this paper, polyacetylene samples are irradiated by low-energy X-rays with different doses. The images obtained from the TEM show that the structure of the polymer has changed. Comparing with the unirradiated image, we can see that the irradiated polymer samples are completely different from the original sample. In the irradiated samples there are places where more molecules are concentrated, forming black bands in the image, and places where very few molecules are present, forming lighter bands. Due to the change in the structure of the polymer samples after irradiation, the decomposition time of the samples also changes. The decomposition time of polymers in nature is very long, so low-energy X-ray irradiation of polymers can be used to reduce the decomposition time and thereby reduce environmental pollution. The research results also show that polyacetylene absorbs only a small amount of the X-ray energy; for the absorption process to be more effective, the irradiation time must be increased. The thickness of the polyacetylene sheet also has a small effect on the absorption of X-rays, so during the experiments the polyacetylene sheets were stacked to increase the irradiation efficiency. Figures 5, 6, and 7 show that different irradiation doses and irradiation times give different images, but in all cases the structure changes when irradiated. The TEM images also show that irradiating the sample at a distance nearer to the radiation source is more effective. 
Samples with a longer irradiation time also show an effect on the structural change of the polymer. Thus, the irradiation time, the distance from the sample to the radiation source, and the radiation dose are the three basic factors that influence the change in the structure of the polymer.
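The attenuation relation mentioned at the start of the Results and discussion section ("the intensity of the X-ray beam is reduced") is not reproduced in the extracted text. A minimal sketch of the standard exponential attenuation (Beer-Lambert) law that such a statement presumably refers to is given below; the attenuation coefficient and the thicknesses are illustrative assumptions, not measurements from this study.

import numpy as np

def transmitted_intensity(i0, mu, x):
    # Standard exponential attenuation law: I(x) = I0 * exp(-mu * x)
    return i0 * np.exp(-mu * x)

# Illustrative values only (not measured in this study):
i0 = 1.0                                      # incident beam intensity (arbitrary units)
mu = 0.5                                      # assumed linear attenuation coefficient, 1/mm
thicknesses_mm = np.array([0.02, 0.1, 0.2])   # single sheet vs. stacked sheets

for x in thicknesses_mm:
    i = transmitted_intensity(i0, mu, x)
    print(f"thickness {x:.2f} mm -> transmitted fraction {i / i0:.3f}")

For such thin sheets the transmitted fraction stays close to 1, which is consistent with the observation above that only a small part of the X-ray energy is absorbed by the polyacetylene and that stacking sheets or lengthening the irradiation time is needed to increase the absorbed energy.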
2020-11-12T09:08:59.321Z
2020-10-01T00:00:00.000
{ "year": 2020, "sha1": "874d802a9678bab86ea226b268d1487f0c957df4", "oa_license": null, "oa_url": "https://doi.org/10.1088/1742-6596/1655/1/012009", "oa_status": "GOLD", "pdf_src": "IOP", "pdf_hash": "8b88889b33821f27b1b1d98c18012aa020fe9959", "s2fieldsofstudy": [ "Materials Science" ], "extfieldsofstudy": [ "Physics", "Materials Science" ] }
265503466
pes2o/s2orc
v3-fos-license
The Utility of a Novel Electrocardiogram Patch Using Dry Electrodes Technology for Arrhythmia Detection During Exercise and Prolonged Monitoring: Proof-of-Concept Study Background Accurate detection of myocardial ischemia and arrhythmias during free-living exercise could play a pivotal role in screening and monitoring for the prevention of exercise-related cardiovascular events in high-risk populations. Although remote electrocardiogram (ECG) solutions are emerging rapidly, existing technology is neither designed nor validated for continuous use during vigorous exercise. Objective In this proof-of-concept study, we evaluated the usability, signal quality, and accuracy for arrhythmia detection of a single-lead ECG patch platform featuring self-adhesive dry electrode technology in individuals with chronic coronary syndrome. This sensor was evaluated during exercise and for prolonged, continuous monitoring. Methods We recruited a total of 6 consecutive patients with chronic coronary syndrome scheduled for an exercise stress test (EST) as part of routine cardiac follow-up. Traditional 12-lead ECG recording was combined with monitoring with the ECG patch. Following the EST, the participants continuously wore the sensor for 5 days. Intraclass correlation coefficients (ICC) and Wilcoxon signed rank tests were used to assess the utility of detecting arrhythmias with the patch by comparing the evaluations of 2 blinded assessors. Signal quality during EST and prolonged monitoring was evaluated by using a signal quality indicator. Additionally, connection time was calculated for prolonged ECG monitoring. The comfort and usability of the patch were evaluated by a web-based self-assessment questionnaire. Results A total of 6 male patients with chronic coronary syndrome (mean age 69.8, SD 6.2 years) completed the study protocol. The patch was worn for a mean of 118.3 (SD 5.6) hours. The level of agreement between the patch and 12-lead ECG was excellent for the detection of premature atrial contractions and premature ventricular contractions during the whole test (ICC=0.998, ICC=1.000). No significant differences in the total number of premature atrial contractions and premature ventricular contractions were detected neither during the entire exercise test (P=.79 and P=.18, respectively) nor during the exercise and recovery stages separately (P=.41, P=.66, P=.18, and P=.66). A total of 1 episode of atrial fibrillation was detected by both methods. Total connection time during recording was between 88% and 100% for all participants. There were no reports of skin irritation, erythema, or pain while wearing the patch. Conclusions This proof-of-concept study showed that this innovative ECG patch based on self-adhesive dry electrode technology can potentially be used for arrhythmia detection during vigorous exercise. The results suggest that the wearable patch is also usable for prolonged continuous ECG monitoring in free-living conditions and can therefore be of potential use in cardiac rehabilitation and tele-monitoring for the prevention of exercise-related cardiovascular events. Future efforts will focus on optimizing signal quality over time and conducting a larger-scale validation study focusing on both arrhythmia and ischemia detection. 
Introduction Higher levels of physical activity and fitness are associated with a lower burden of cardiovascular disease (CVD) [1][2][3][4].However, it is also well established that vigorous exercise is associated with an increased risk of major adverse cardiovascular events in people with underlying CVD.In patients with coronary artery disease (CAD), intense physical activity could lead to fatal ventricular arrhythmias due to plaque rupture or demand ischemia [5,6].Therefore, the 2019 European Society of Cardiology Guidelines for the diagnosis and management of chronic coronary syndromes state that an exercise electrocardiogram (ECG) provides complementary clinically useful and valuable prognostic information in addition to a resting ECG [7]. In clinical practice, the risk of exercise-induced arrhythmias and ischemia is typically evaluated by a traditional exercise stress test (EST).However, the interpretation of such a test is associated with several limitations.First, an EST usually consists of a single short bout of exercise with a gradually increasing workload (in watts), which is mostly not representative of free-living sports activities in terms of sports type, intensity, and duration.Yet, these factors are particularly important determinants of the occurrence of ischemia and arrhythmias during exercise [5,8,9].Second, environmental factors such as temperature and hydration status can vary considerably during outdoor sports activities and pose additional risks for patients with CVD [10].A sensor suitable for continuous ECG monitoring during vigorous exercise can enhance the monitoring of individuals with subclinical or diagnosed CAD in free-living conditions.Such sensors would allow repeated or periodic measurements during exercise, contributing to better screening and management in this group.For individuals engaged in sports, this technology will support health care professionals in providing appropriate and personalized exercise prescriptions. An extensive review of the literature reveals that there has been a dramatic increase in wearable sensors over the past decade [11][12][13][14][15]. Numerous devices are available for heart rhythm monitoring and arrhythmia detection, most of them based on either ECG or photoplethysmography (PPG) [11].Generally, for diagnostic purposes, ECG-based wearables are preferred over PPG sensors because ECG-based analyses have been shown to be more accurate than derived analyses based on pulse waveforms.In particular, PPG-based wearables generally lack accuracy for monitoring during exercise [16,17].Whereas ECG patches are well-tolerated and have high patient adherence [18,19], existing devices such as the Apple Watch, Alivecor Kardia Mobile, or Fibricheck [20][21][22] often require additional handling to generate an ECG recording, making them unsuitable for continuous use during exercise.These limitations may be overcome by ECG patches, as they maintain direct skin contact, allowing for automatically generated ECG read-outs.Several Conformité Européenne (CE)-marked or Food and Drug Administration-cleared single-use ambulatory ECG patches are currently available [23].However, research on the usability and accuracy of ECG patches during prolonged (vigorous) sports activities is scarce. 
To overcome these barriers, we developed a single-lead wearable vital signs platform featuring self-adhesive dry electrodes with the intended purpose of detecting arrhythmias and myocardial ischemia over prolonged periods of time, including physical activities.The self-adhesive dry electrode technology ensures direct skin contact over many days with minimal skin preparation and no gel application.These features contribute to maintaining good signal quality over time and during exercise, ensuring skin comfort and user compliance.This could make this single-lead wearable patch more suitable for long-term monitoring during physical activity in comparison to gel electrode solutions [24,25].In this initial proof-of-concept study, we aimed to examine the usability, signal quality, and utility of detecting arrhythmias in ECG signals recorded with the innovative patch during vigorous exercise and for prolonged monitoring in free-living conditions. Study Design and Population Participants included in this observational proof-of-concept study were adult cardiac patients diagnosed with chronic coronary syndrome who underwent an EST as part of routine follow-up at the Department of Cardiology at Máxima Medical Centre, the Netherlands.Individuals with cardiac pacemakers or other stimulators were excluded, as were participants with an implantable cardioverter defibrillator.Other exclusion criteria were left bundle branch block, Wolff-Parkinson-White syndrome, or ≥0.1 ST-segment depression on resting ECG.All participants performed an exercise test according to standardized exercise testing protocols, in which a traditional 12-lead ECG was recorded.This was combined with monitoring using the ECG patch during the same test.To collect additional data about prolonged ECG recording, the participants were asked to continuously wear the patch for 5 consecutive days.Also, the patients were asked to complete a short web-based questionnaire about the comfort and usability of the patch, which took approximately five minutes to complete. 
Vital Signs Patch Platform In this study, we assessed the utility to detect arrhythmias, signal quality, and usability of the vital signs patch research platform featuring self-adhesive dry electrode technology (Figure 1).The patch platform consists of a disposable patch and a reusable read-out module.The patch contains a pair of electrodes for acquiring a bipolar single-lead ECG signal for continuous monitoring.The printed patch is a layer build-up of conformable thermoplastic polyurethane with flexible and stretchable conductive silver within a meander design for additional strain relief during wear.Self-adhesive and gel-free electrodes are transfer printed onto the design, and a nonwoven acrylic adhesive (MED45150, Avery Dennison) is used as the top layer to ensure dominant skin contact properties for long-term wear durability of the overall patch.The vital signs research platform was developed in the Dutch Organization for Applied Scientific Research (TNO) Holst Centre, and the screen printing was carried out at the Holst Centre manufacturing facilities.The reusable part contains a wireless communication module and read-out electronics for 7 days of continuous monitoring on a single charge (2M Engineering).The patch was placed on the left side of the chest, just below the V4-V6 lead position, shortly before the planned exercise test.The patch records continuously unless the recording is stopped.Sometimes data can be missing, probably due to poor electrode contact.These moments are referred to as disconnection time. Exercise Stress Test The exercise test was performed on a cycle ergometer (Lode Excalibur Sport, Lode BV Medical Technology) as part of routine patient care.According to the local protocol, an individualized ramp protocol was used, aiming for a total test duration of 8-12 minutes.This individualized protocol was based on the predicted maximum workload.The EST consists of an exercise phase with an incremental load and a recovery phase.The patch was applied to the skin of the participant by the investigator.The 12-lead exercise ECG (GE Cardiosoft V6.73, GE Health Care) was monitored continuously throughout the test.Patient characteristics (length and height), maximum workload (Wmax), power-to-weight ratio, and the percentage of predicted maximum heart rate were reported.In patients using beta-blockers, Brawner's equation was used to predict the maximum heart rate [26]. Wearing the Patch in Free-Living Activities After finishing the EST, the electrodes of the 12-lead ECG were removed, and the participants continued wearing the ECG patch for 5 days.They did not need to make adjustments in their daily activities, but they were not allowed to swim or take a bath.The patch can stay in place during the night and, for instance, while showering.After 5 consecutive days, the patients were instructed to remove the patch themselves.At their scheduled visit to the cardiologist, the participants handed in the device.This visit was scheduled according to standard care, usually within two weeks of the exercise test.In the case of a consultation by phone, the device was picked up at the patient's home by the investigators.All recorded data was reviewed offline and retrospectively. 
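The Exercise Stress Test subsection above mentions that Brawner's equation was used to predict the maximum heart rate in patients on beta-blockers. The following is a minimal sketch assuming the commonly cited form of that equation (peak heart rate = 164 - 0.7 x age); the exact expression used in reference [26] should be confirmed against that source.

def predicted_max_hr_brawner(age_years: float) -> float:
    # Commonly cited Brawner equation for patients on beta-blockade (assumption;
    # verify against reference [26]): peak heart rate = 164 - 0.7 * age.
    return 164.0 - 0.7 * age_years

def percent_of_predicted_max_hr(measured_peak_hr: float, age_years: float) -> float:
    return 100.0 * measured_peak_hr / predicted_max_hr_brawner(age_years)

# Hypothetical example, not a participant of this study:
print(round(percent_of_predicted_max_hr(measured_peak_hr=120.0, age_years=70.0), 1))

For a hypothetical 70-year-old with a measured peak of 120 beats per minute, this gives roughly 104% of the predicted maximum heart rate.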
Arrhythmia Detection A total of 2 cardiologists, blinded for participant characteristics, assessed all 12-lead and patch ECG recordings in a random order.An evaluation on the following items was performed: the amount of premature atrial contractions (PAC) and premature ventricular contractions (PVC) during the exercise and recovery phases, and the occurrence of supraventricular or ventricular tachycardias.Items for which no consensus was obtained were assessed by a third cardiologist. Signal Quality During Traditional Exercise Testing Quality analysis was performed using Python (Python Software Foundation).The quality of the ECG recording from the patch during exercise was analyzed using a Signal Quality Indicator (SQI) which resulted in being the best performing one when comparing different SQI metrics with annotated quality levels [27].This SQI is based on the comparison of successive QRS complexes.To calculate the signal quality, the signal was preprocessed in the same way for all patients with a 0.5 Hz high-pass Butterworth filter.To avoid bias in the analysis, an irregular QRS complexes rejector was included to not affect the estimated quality levels [28].The output is a number between 0 and 1, with 0 corresponding to the lowest quality and 1 to the highest.Furthermore, the same SQI was applied to the 12-lead ECG signals.The average of the SQIs obtained for each lead was then computed and used for comparison with the SQI results of the patch ECG. Signal Quality for Prolonged Electrocardiogram Recording We analyzed the quality of the ECG recording from the patch for prolonged monitoring using the connection time and the SQI already mentioned.The connection time of the ECG signal is represented as a percentage.In addition to total time, the time was split into day and night using a regular schedule, from 7 AM to 11 PM and from 11 PM to 7 AM, respectively.The SQI results of the patch ECG during the prolonged monitoring were further processed on all recorded data in order to obtain the percentage of time per day in which the SQI was above 80%.This was then averaged across all participants.Segments of ECG with a quality above 80% are considered high-quality signals [27]. Questionnaire The comfort and usability of the patch were measured using a self-constructed web-based questionnaire in Castor (Castor EDC), a web-based software application for clinical research.The questionnaire consisted of general questions about the wearing time, physical activities while wearing the patch, and an evaluation of whether the patch adhered for the entire period.The following parameters were scored using a 1-5 Likert scale: noticeability of the patch, skin irritation, erythema, and pain (1=totally disagree, 2=disagree, 3=neutral, 4=agree, and 5=totally agree).The removal of the patch was evaluated with 2 questions about the occurrence of pain and skin irritation.An overall comfort score during both the EST and daily activities was asked using a 1-5 Likert scale (1=totally uncomfortable, 2=uncomfortable, 3=neutral, 4=comfortable, and 5=very comfortable). 
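The Signal Quality sections above describe an SQI that compares successive QRS complexes, computed in Python after 0.5 Hz high-pass Butterworth filtering and mapped to a value between 0 and 1. The sketch below shows one plausible way such a QRS-morphology SQI could be implemented; it is an illustration only, the exact metric of reference [27] and its rejection of irregular QRS complexes may differ, and the sampling rate, peak-detection settings, and beat window are assumptions.

import numpy as np
from scipy.signal import butter, filtfilt, find_peaks

def qrs_morphology_sqi(ecg, fs):
    # Rough QRS-morphology SQI: mean correlation of each detected beat with the
    # average beat template. Returns a value in [0, 1]; details may differ from [27].
    b, a = butter(2, 0.5, btype="highpass", fs=fs)   # 0.5 Hz high-pass, as in the text
    filtered = filtfilt(b, a, ecg)
    # Crude R-peak detection (assumption; a dedicated QRS detector would normally be used)
    peaks, _ = find_peaks(filtered, distance=int(0.3 * fs),
                          height=np.percentile(filtered, 95))
    half = int(0.06 * fs)                            # +/- 60 ms window around each peak (assumed)
    beats = [filtered[p - half:p + half]
             for p in peaks if p - half >= 0 and p + half < len(filtered)]
    if len(beats) < 2:
        return 0.0
    beats = np.array(beats)
    template = beats.mean(axis=0)
    corrs = [np.corrcoef(beat, template)[0, 1] for beat in beats]
    return float(np.clip(np.mean(corrs), 0.0, 1.0))

Applied to consecutive segments of a recording, such per-segment values would yield a quality trace of the kind plotted in Figure 2, and thresholding them at 0.8 corresponds to the "quality above 80%" criterion used for the prolonged monitoring analysis.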
Statistical Analyses Descriptive analyses were conducted for baseline participant characteristics.Continuous data and normal distributed variables are presented as mean (SD).Categorical variables are presented as numbers and percentages.Continuous and nonnormal distributed variables are represented as median (IQR).For the continuous ECG parameters, the degree of agreement between the 2 devices was evaluated using the intraclass correlation coefficient (ICC) with a 95% CI.In this study, an ICC>0.9 was regarded as excellent agreement.To compare the ECG parameters, the Wilcoxon signed rank test was performed.In all statistical analyses, P<.05 was considered statistically significant.All statistical analyses were performed using SPSS statistical software (version 22, IBM Corp). Ethical Considerations This study complied with the principles of the Declaration of Helsinki.Ethical approval for this study was waived by the Medical Ethics Review Committee of Máxima Medical Center, Veldhoven, the Netherlands (N22.002), as the rules laid down in the Medical Research Involving Human Subjects Act (also known by its Dutch abbreviation WMO), do not apply to this research.Written informed consent was obtained from all participants when they were enrolled in this study.All data have been deidentified.No compensation was provided to the participants. Participants and Demographics A total of 6 consecutive patients who fulfilled the inclusion criteria signed the informed consent form and completed the study protocol between May and July 2022.All participants were male, with a mean age of 69.8 (SD 6.2) years.Both patient baseline characteristics and the results of the ESTs are presented in Table 1.All 6 patients completed the EST according to their individualized protocol.All participants were verbally encouraged to exercise until exhaustion, and none of the tests were terminated prematurely.The mean maximum achieved load was 206.1 (SD 96.2) W, and the mean power-to-weight ratio was 2.53 (SD 1.1) W/kg.The mean percentage of the maximum predicted heart rate was 98.8% (SD 14.6%). Arrhythmia Detection A total of 191 isolated PACs and 296 PVCs were detected in all ESTs using the 12-lead ECG system.The median (IQR) of all detected PACs and PVCs per exercise test is presented in Table 2.The total number of premature complexes during the exercise and recovery phases of the stress test showed an excellent degree of agreement between the 2 ECG recording methods (ICC=0.998,95% CI 0.982-1.000;ICC=0.998,95% CI 0.989-1.000;ICC=1.000,95% CI 0.999-1.000;and ICC=0.998,95% CI 0.988-1.000).Looking at the total number of PACs and PVCs during the total test duration, the degree of agreement between the 12-lead ECG and the patch ECG is also excellent (ICC=1.000,95% CI 0.997-1.000,both).There were no significant differences in the total number of PACs and PVCs detected with both methods during the total test duration (P=.79 and P=.18, respectively), and for the testing stages separately (P=.41,P=.66, P=.18, and P=.66).One patient had an episode of atrial fibrillation during the end of the exercise stage (14 seconds) and during the first part (30 seconds) of the recovery stage.This episode was detected at exactly the same time and for the same duration on both recorded ECGs.No ventricular arrhythmias were detected. Signal Quality During Exercise Testing The signal quality of the ECG signal obtained using the patch and 12-lead system during the exercise recordings of all 6 participants is presented in Figure 2. 
In general, this quality metric shows that the signal quality starts high, decreases to some degree during the EST, and increases again in the recovery phase. In participants 1, 2, and 5, the quality of the ECG patch outperformed the average quality of the 12-lead ECG, whereas the opposite was the case in participants 3, 4, and 6. In participants 1, 3, and 5, the relative changes in signal quality followed similar trends for the patch ECG and the 12-lead ECG. Connection Time Table 3 shows the percentages of connection times in the total recording and those during the day and night. A total of 5 out of 6 participants had a total connection time above 88% (range 88%-100%). The disconnection time during the day was less than 15%, and during the night it was below 4%. In participant 6, the patch was completely disconnected after 52 hours, and no recordings were made after this. Within the recorded time frame, the connection time of this patch was 100%. Signal Quality Indicator The percentage of time the recorded data has a quality above 80%, averaged across all participants, is presented in Figure 3, split between days. The average percentage of high-quality signals is similar on days 1 and 6. Wearing Time The average duration of time in which the patch stayed attached to the skin was 118.3 (SD 5.6) hours. There were no issues reported about loss of adhesion of the patch. Participant 3 removed the patch accidentally on the last day of testing; all other participants wore the patch for at least 5 days (120 hours). User Comfort and Usability A total of 2 participants found the patch noticeable. There were no reports of skin irritation, erythema, or pain (Figure S1 in Multimedia Appendix 1). Regarding the removal of the patch, there were also no reports of skin irritation. One participant reported brief pain (less than a minute) when removing the patch. An overall evaluation of the comfort of the patch during the exercise test and in daily life reported no discomfort (Figure S2 in Multimedia Appendix 2). Principal Findings In this proof-of-concept study in 6 cardiac patients, we demonstrated that the vital signs patch platform containing self-adhesive dry electrodes for recording single-lead bipolar ECG can be used for arrhythmia detection during a maximum EST. Additionally, our findings suggest the potential utility of this patch for prolonged, over-5-day ECG monitoring in free-living conditions, although larger-scale validation is necessary. The patch was well tolerated by all participants, and no discomfort or device-associated side effects were reported. Interpretation of Findings Based on our literature review, this is one of the first studies to evaluate the use of an ECG patch with self-adhesive dry electrode technology specifically during exercise, with direct comparison to the traditional 12-lead ECG [15,17]. A patch called the ECG247 Smart Heart Sensor was previously tested in elite endurance athletes during short exercise bouts, but no reference to the gold standard was included [29]. 
Numerous studies on different ECG patches have been conducted over the past few years.Many of them assessed accuracy for the detection of cardiac arrhythmias at rest or during low-intensity activities, focusing mainly on atrial fibrillation [18,30,31].Premature atrial and ventricular complexes are often not analyzed.In this study, we assessed the detection of PACs, PVCs, and atrial and ventricular arrhythmias during exercise testing.One participant had an episode of atrial fibrillation during the EST.This episode was assessed by the cardiologists for the same time period and duration on both ECG methods.Although the patch signal quality, based on QRS morphology SQI, decreased during the (sub)maximum load, the patch was still highly accurate when compared with the traditional 12-lead exercise ECG.The signal quality of the ECG patch during exercise therefore appears to be sufficient for rhythm assessment.When comparing the SQI level of the average 12-lead ECG with the ECG patch, the 12-lead ECG performed better in 3 patients during the EST, while the patch outperformed the 12-lead ECG in the other half. Dry electrode technology offers advantages for prolonged recording compared with gel electrodes, as it causes less skin irritation and erythema [32].A study that analyzed a dry electrode ECG patch for atrial fibrillation detection reported an overall accuracy of 93.57% and 85.94% during stationary and movement states, respectively [33].However, the movement state during free-living conditions consisted only of low-intensity activities, such as walking.In this study, the patch stopped recording prematurely in 1 participant.The exact reason this happened is unknown; the patch remained firmly adhered to the skin until its removal on day 5.The wearing time was therefore not affected.This particular failure, as well as the decline in connection times during prolonged wear, may be attributed to issues related to skin-electrode contact or hardware failures.However, the percentage of time with high signal quality in all collected data remained stable over time and was around 60%.Moreover, further analysis of signal quality showed no significant difference in the quality changes between the days [27].Other research has shown that monitoring for up to 14 days is already feasible with the self-adhesive Zio Patch [18], but the influence of high-intensity activities was not addressed.Prolonged continuous monitoring with a patch device can, however, contribute to higher detection rates in arrhythmia screening compared with conventional 24-hour Holter monitoring [34]. 
Limitations An important limitation of this study was the small number of participants.However, for the purpose of this proof-of-concept study, we considered a sample of 6 patients sufficient to explore the use of the new ECG technology for its feasibility for a larger-scale study and to obtain results to improve the patch and the test procedure.Another limitation is the fact that all patients were male and mainly older adults.The mean BMI of the participants was 25.3 (SD 1.8) kg/m 2 ; however, a BMI of <20 kg/m 2 and a very high BMI of ≥35 kg/m 2 are risk markers for cardiovascular mortality in patients with chronic coronary syndrome [35].Future research should expand the participant demographics to include a more diverse group to ensure broader applicability.However, the high user comfort in this group is promising, as older people are more likely to have dry and sensitive skin conditions [36].Regarding long-term monitoring during physical activities, not all participants in this study were highly active.The adhesion of the patch during extended wear might be different for more active individuals.Furthermore, data for signal quality assessment was missing due to one patch being disconnected prematurely.This, however, did not affect the ability to detect arrhythmias with the patch during exercise.The performance of SQIs based on QRS morphology should be further analyzed for ECG recordings with PACs and PVCs, as the morphology of these extrasystoles is different.Finally, to collect specific information on the comfort and usability of the patch, a self-constructed brief questionnaire was used.For further studies, a more comprehensive questionnaire would provide more information. Future Perspectives Accurate and continuous arrhythmia detection during vigorous exercise could play an important role in primary and secondary cardiovascular prevention in high-risk populations.ECG assessment during physical activities can help monitor individuals with underlying CVD.Arrhythmias during exercise, such as an increasing number of PVCs, can imply exercise-induced myocardial ischemia.Therefore, the patch could be beneficial for monitoring patients with known CAD, but also for screening high-risk athletes and highly active individuals without known CAD, or even patients with underlying structural heart disease who are at risk for fatal arrhythmias. The ability to detect arrhythmias using this dry electrode ECG technology must be confirmed in a larger-scale validation study with multiple measurements during exercise over prolonged periods of time.There is also added value in exploring patch-based screening for the occurrence of myocardial ischemia during exercise (eg, ST-segment deviation).High signal quality and good skin-sensor contact are essential.Patch design and material selection affect this.Additional preclinical tests with improved patches can contribute to optimizing the patch and reduce the risk of premature disconnection.More research is needed in order to improve the signal quality during prolonged ECG monitoring in a larger heterogeneous cohort and to depict how the signal quality is altered by factors such as gender, body composition, and skin type.This will also add more knowledge about possible causes of connection loss and the potential role of body morphology in this.At the same time, the development of automatic assessment methods is required for clinical application. 
As for wearability and comfort, efforts shall include the use of a validated questionnaire on skin irritation and comfort, and research on a cohort with wider demographics and body composition (gender, age, hormonal conditions, and BMI), as well as participants with different skin types and highly prevalent skin conditions (sensitive skin and allergic dermatitis). Conclusions This proof-of-concept study using the vital signs patch showed that an ECG sensor based on self-adhesive dry electrode technology has the potential to be useful for arrhythmia detection during vigorous exercise. Our results suggest that the patch is also usable for prolonged ECG monitoring in free-living conditions and can therefore be of potential use in the prevention of exercise-related cardiovascular events. This technology can support health care professionals in telemonitoring solutions and cardiac rehabilitation programs. Figure 2. Signal quality index (SQI) of the electrocardiogram (ECG) patch during the exercise stress test (EST) based on QRS morphology, compared with the average 12-lead ECG, participants 1-6. Figure 3. Average percentage of time with a high-quality patch electrocardiogram (ECG) signal over the days. Table 1. Patient baseline characteristics and exercise test results (n=6). %Pred maxHR: percentage of predicted maximum heart rate. Table 2. Assessment of the number of premature complexes per exercise stress test, expressed as median (IQR). The degree of agreement was evaluated using intraclass correlation coefficients (ICC) with 95% CI. Table 3. Percentage of connection time of the patch per person for 5 days of wear. (a) In this participant, the patch was disconnected prematurely.
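As a supplement to the Statistical Analyses section above (which used SPSS), the following is a minimal sketch of how the agreement statistics could be computed in Python. The ICC model inspected here (two-way, absolute agreement) and the per-test counts are assumptions for illustration only; they are not the study data.

import pandas as pd
import pingouin as pg
from scipy.stats import wilcoxon

# Hypothetical per-test PVC counts for the two recording methods (NOT the study data)
twelve_lead = [10, 3, 0, 55, 12, 7]
patch = [9, 4, 1, 53, 12, 8]

# Long format expected by pingouin: one row per (exercise test, method)
df = pd.DataFrame({
    "test": list(range(len(twelve_lead))) * 2,
    "method": ["12lead"] * len(twelve_lead) + ["patch"] * len(patch),
    "count": twelve_lead + patch,
})
icc = pg.intraclass_corr(data=df, targets="test", raters="method", ratings="count")
print(icc[["Type", "ICC", "CI95%"]])   # inspect e.g. the two-way, absolute-agreement ICC

stat, p = wilcoxon(twelve_lead, patch)  # paired comparison of the two methods
print(f"Wilcoxon signed rank: statistic={stat}, p={p:.3f}")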
2023-12-01T06:17:54.496Z
2023-11-30T00:00:00.000
{ "year": 2023, "sha1": "f97f11cdc96bb5835f651850770e1aac2128e86a", "oa_license": "CCBY", "oa_url": "https://formative.jmir.org/2023/1/e49346/PDF", "oa_status": "GOLD", "pdf_src": "Anansi", "pdf_hash": "3810ac7fe579a75ec06584b3a40eab455a9d519b", "s2fieldsofstudy": [ "Medicine", "Engineering" ], "extfieldsofstudy": [ "Medicine" ] }
119511284
pes2o/s2orc
v3-fos-license
Quantum memory protocol in QED cavity based on Photon echo A new protocol of the optical quantum memory based on the resonant interactions of the multi atomic system with a cavity light mode is proposed. The quantum memory is realized using a controllable inversion of the inhomogeneous broadening of the resonant atomic transition and the impact interaction (on request) of an additional short 2π-laser pulse resonant to an adjacent atomic transition. We demonstrate that the quantum memory protocol is effective for arbitrary storage time and can be used for new quantum manipulations with transient entangled states in the field-atoms evolution. The effect of the fast absorption and emission of the light field is predicted. Photons are convenient carriers of quantum information [1]; however, the realization of a universal quantum memory (QM) for photons is still a difficult problem that attracts large attention due to its importance for quantum information science [1,2,3,4,5]. There are several proposals based on single atoms in an optical cavity [6] and on optically dense macroscopic media in free space [7,8,9]. First successful experiments with optically dense media have been made recently both with classical fields [10] and with specific quantum states of light [11]. Optical QM involves delicate reversible unitary dynamics of the interacting light and medium. Controlling such dynamics in macroscopic media opens a door for new investigations in quantum optics. In particular, the QM effect based on electromagnetically induced transparency (EIT) was used recently for the controllable generation of spectrally narrow single photon fields [12]. Such a QM technique was also proposed for the control of stationary single- [13], two- and three-color entangled light [14], which looks promising for quantum nondemolition measurements of single photon fields [15]. QM based on the photon echo technique [9] can be applied to short light pulses [16] and to effective multiple manipulations of single photon wave packets [17]. Storage and retrieval of the light states in the QM processes proceed through transient entangled states of the light and medium. Investigation and control of such quantum states are also important for understanding fundamental issues in multi-particle quantum dynamics. In this paper, a new QM protocol based on interaction control in the multi-atomic system (N>1, N is the number of atoms) and a resonant mode of the quantum electrodynamics cavity is proposed. Using the proposed QM, the possibility of new quantum manipulations is shown for the reversible evolution of the field and the multi-atomic system on an arbitrary timescale, unlike the QM technique based on photon echo [9], which was initially suggested for gases with Doppler broadened atomic transitions and then developed for solid state media [18,19,20,21]. 
In the proposed protocol, the quantum field of cavity mode is storied at the interaction with inhomogeneously broadened resonant transition of the multi atomic system. The two operations are used to control the reversibility in the quantum dynamics of field mode. The first operation includes a frequency inversion of the inhomogeneous broadening on the resonant atomic transition and the second procedure gives an additional π -phase kick to atomic states similar to the method of work [22] proposed for controlling the coherent dynamics of atom in the cavity QED. Analysis performed here have shown both the effective control of the reversible field-atom dynamics and demonstrated a new effect of fast absorption and emission at the field -atoms interaction. This method can be used for QM processes and analysis of unitary quantum dynamics in the more complicated multi-particle systems. Let us consider the interaction between the field mode and multi-atomic system assuming that all the quantum evolution takes place within short enough temporal duration so (where is a maximum decay constant in the field and atoms evolution). Model of the proposed QM protocol I discuss here is characterized by the N-atomic Jaynes-Cummings Hamiltonian added by the inhomogeneous broadening and interaction with an external short laser pulse: , the atomic detunings are inhomogeneously broadened within a distribution Let it be that initially all the atoms are in the ground state and the probe field of the cavity mode is excited at time so initial quantum state is 1) First of all I consider the case when the short control 2π-pulse is applied with time delay τ after entrance of the probe field so corresponding to the representation where and also it was assumed . Using Eq.(3) and introducing the operators the solution of Eq. (2) for ( 4 ) Exact solution in Eq. (4) with the following solution for a large enough time of interaction ) has been used and it was supposed for simplicity that As seen from the Eqs. where , for short resonant laser pulse where the influence of the atomic detunings are negligible due to In the second step just after − π 2 pulse we switch the atomic detunings so . Such frequency inversion can be realized by several methods. The first method proposed [18] for solid state medium is based on the excitation by additional radio frequency j j j t t t tuned to a resonance with nuclei spins coupled with the atomic electrons through the strong hyperfine interaction. Inversion of the nuclei spins signs changes the local hyperfine field on the electron spin leading to the reversion of the inhomogeneous broadening on the electron spin transition. Another method uses a switching of the electric field gradient in the resonant medium [19,20,21] and was experimented realized recently for rare-earth ions doping the crystals [21]. Let us introduce an operator including the two operations: frequency inversion and laser ) ( 2 t J π − π 2 pulse kick with negligible relative time delay. After procedure we get a new Hamiltonian that determines the following evolution of the wave function for As seen from Eq. (8) , ( 1 0 ) The reconstructed wave function in Eq. (10) includes additional π phase shift comparing to the initial state ) 0 ( Ψ that coincides with the result of the field reconstruction in free space obtained in [19] whereas the solution in the Eq. (8) includes 2 π phase shift where an additional π shift of the echo signal field is caught from new atomic state evolved after interaction with the π 2 -laser pulse. 
Coming back to the solution of the Eq. . This important distinction of these two variants of QM protocol is determined by different phase relations between two field terms in Eq. (9a): the first term is coupled with the single photon initial field whereas the second term is given by the echo signal field which has an additional π phase shift. The sharpest difference of the simplified QM protocol takes place at time delay where the destructive effect between two interfering fields leads to the unpredictable fast absorption of the initial field. Putting of arbitrary initial state (fast absorption and emission of a single photon field in the QM protocol is shown in Fig.1.). In this paper, the new QM protocol is proposed for perfect retrieval of the arbitrary quantum state of the cavity field mode with short and long time delays. The protocol is based on controlling of interaction between single quantum mode and multi-atomic system using spectral inversion of inhomogeneous broadened resonant transition and applying on demand additional short π 2 laser pulse at the adjacent atomic transition. Detail analysis of the quantum dynamics in the proposed protocol shows an unusual dynamical effect of the fast absorption (or emission) of the light field which is the result of the destructive quantum interference between the initial and irradiated fields evolved in the cavity. The proposed QM protocol opens new interesting opportunities for quantum manipulations with photons and entangled states of light + multi-atomic systems which have a special interests both for single photon fields and for relatively more intensive quantum light. Finally we note that the problems analyzed here have become close to the collapse and revival phenomenon [22] in the particular case of the negligible small inhomogeneous broadening. So the proposed QM protocol can be useful for the investigation of nonunitary irreversible decoherence processes in more complicated multi-particle media. Such phenomena are also interesting for analysis in terms of the general approach to quantum reversibility studied recently in the framework of generalized Loschmidt echo [23] In particularly using it looks interesting to analyze more general schemes of quantum reversibility with the QM protocol including new manipulations by the atomic detuning and coupling constant.
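The Hamiltonian invoked at the beginning of the model description above ("the N-atomic Jaynes-Cummings Hamiltonian added by the inhomogeneous broadening and interaction with an external short laser pulse") is garbled in the extracted text. The following is only a plausible standard form, written here as a hedged reconstruction under the usual Tavis-Cummings conventions and not necessarily the exact expression of the original paper:

H = \hbar\omega_c\, a^{\dagger}a + \sum_{j=1}^{N} \hbar\,(\omega_0 + \Delta_j)\, S_j^{z} + \hbar g \sum_{j=1}^{N} \left( a^{\dagger} S_j^{-} + a\, S_j^{+} \right) + V_L(t),

where a and a^{\dagger} are the annihilation and creation operators of the cavity mode, S_j^{z,\pm} are the effective spin operators of atom j on the resonant transition, \Delta_j are the inhomogeneous detunings whose sign is inverted during the protocol, g is the atom-field coupling constant, and V_L(t) describes the impact interaction with the short 2\pi control pulse on the adjacent transition.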
2019-04-14T03:21:52.128Z
2006-05-03T00:00:00.000
{ "year": 2006, "sha1": "3ff33c50f3dd7c5ecda4c70b2c074bd30e7912b8", "oa_license": null, "oa_url": null, "oa_status": null, "pdf_src": "Arxiv", "pdf_hash": "3ff33c50f3dd7c5ecda4c70b2c074bd30e7912b8", "s2fieldsofstudy": [ "Physics" ], "extfieldsofstudy": [ "Physics" ] }
118679838
pes2o/s2orc
v3-fos-license
Glass former units and transport in ion-conducting network glasses A new theoretical approach is presented for relating structural information to transport properties in ion conducting network glasses. It relies on the consideration of the different types of glass forming units and the charges associated with them. Changes in the compositions of these units lead to a re-distribution of Coulomb traps for the mobile ions and to a subsequent change in long-range ionic mobilities. It is furthermore shown how measured changes of the unit compositions can be explained by thermodynamic modeling. The theories are tested against experiments on borophosphate glasses and yield good agreement with the measured data both for the compositional changes of the units and the variation of the activation energy. The chemical composition of ion conducting glasses can be varied to a large extent and this offers many possibilities to optimize these materials with respect to different demands, in particular to high ionic conductivities [1]. It is therefore important to get an understanding of the connection between the network forming structure and the long-range ionic transport properties. Considerable progress has been made in the past to gain insight into near and medium range order properties of ion conducting glasses by various experimental probes such as X-ray and neutron scattering, infrared and Raman spectroscopy, and solid-state NMR techniques (for a review, see [2]). A challenge is to utilize this information for theoretical models of the ionic transport. One promising route was suggested some time ago by building Reverse Monte Carlo models of the glass structure based on diffraction data [3] and by further analyzing these structural models with the bond valence method [4] to explore the preferred diffusion pathways of the mobile ions. In this Letter we will present a new theoretical approach, which is applicable to network forming glass structures and relies on the different network forming units (NFUs) that build up the host structure for the ionic motion (cf. Fig. 1). We argue that the charges associated with the NFUs and the way how they are localized are of crucial relevance for characterizing the statistical properties of the energy landscape that govern the long-range ionic transport properties. To demonstrate the new approach we apply it to the mixed glass former effect in sodium borophosphate glasses, where detailed information on the NFU concentrations has been gained recently by MAS-NMR [5,6], see Fig. 2. We first show how the observed changes of NFU concentrations with the borate-to-phosphate mixing ratio can be understood from a thermodynamic model. 
Then we will use this structural information on the NFUs to calculate changes of the conductivity activation energy upon the mixing ratio. In borophosphate glasses of composition we distinguish seven NFUs as in [5]: the neutral trigonal B (3) units with three bridging oxygens (bOs) and zero non-bridging (nBOs), the negatively charged tetrahedral B (4) units with four bOs and zero nbOs, the trigonal B (2) units with two bOs and one negatively charged nbO, and the tetrahedral phosphate units P (n) , n = 0, . . . 3 with n bOs and (3 − n) nbOs, see the Fig. 1. MAS-NMR measurements redrawn in Fig. 2 (symbols) show that, when starting the mixing from the phosphate rich side (x = 0), first the B (4) units replace P (2) units. This replacement continues until the B (4) concentration saturates at about x ≃ 0.4. Above this mixing concentration the neutral B (3) units start to appear, replacing now the neutral P (3) units to keep the total amount of negative charge constant. This is needed to compensate the positive charge of the mobile sodium ions. With further increasing x, the behavior becomes more complex until at the boron rich side all NFUs are somehow involved in forming the network structure. To understand this behavior we developed a thermodynamic model, which is based on a hierarchy of formation enthalpies G(X) for the NFUs. The B (4) are the most preferable NFUs for the charge compensation of the alkali ions, since they are most highly connected in the network with their four bOs. However, their concentration is lim- ited, because the delocalized charge hinders them to come close to each other [7]. Defining in general the NFU concentrations [X] as the fraction per network former cation (i.e. total number of units X divided by total number of B and P atoms) we set [B (4) ] = min(x, [B (4) ] sat ), where the saturation limit [B (4) ] sat = 0.43 is chosen in agreement with the pioneering MAS-NMR results by Bray and coworkers [8] and more recent findings [9], as well as early theoretical modeling [10]. For the other NFUs we choose the neutral P (3) unit as reference point, and introduce only one parameter ∆ to describe the relative formation enthalpies This choice expresses that the poorly connected and highly charged P (1) and P (0) units are increasingly less preferable compared to the better connected B (3) , P (3) and P (2) units [11] and that the B (2) units are least likely due to their nbO and trigonal configuration, which makes it difficult to accommodate them within the network. In addition we need to take care of the total amount of negative charge, being fixed by the sodium content due to charge neutrality, and the total amount of borate and phosphate given by x. This yields the constraints [B (4) ] + [B (2) ] + [P (2) ] + 2[P (1) In a grand-canonical treatment we can assign the chemical potentials µ q , µ B and µ P to these constraints (2a)-(2c), respectively. Considering a set of sites to be occupied by the NFUs with mutual site exclusion, we obtain the generalized Fermi distributions where all energies are given in units of the thermal energy k B T and the chemical potentials have to be determined from Eqs. (2a)-(2c). Equations (1) with the single parameter ∆ describe the hierarchy between the formation enthalpies. Specific values for these enthalpies should be irrelevant as long as the system is in the low-temperature regime. To evaluate the behavior in this regime we solve the set of Eqs. (2,3) for ∆ → ∞. The results shown as solid lines in Fig. 
2 are in good agreement with the MAS-NMR data from ref. [5] (diamonds), except for x = 1, where the measured [B (4) ] is much smaller than the presumed saturation value [B (4) ] sat = 0.43, and correspondingly the [B (2) ] value larger than the theoretical prediction. With respect to the deviation at x = 1, we note that MAS-NMR measurements reported by another group [9] yield the data marked by the open symbols in Fig. 2, which are in better agreement with the theoretical predictions. On the basis of the thermodynamic model, one can, of course, reproduce the behavior found in ref. [5] by assuming a lower saturation value [B (4) ] sat for x = 1. Indeed, for the sodium borate glass, the maximal [B (4) ] was found to be slightly smaller than 0.43 [7], which can be explained by requiring that a bO cannot link two B (4) units [10]. However, to describe all details, including different behaviors for different types of alkali ions, one needs to weaken this rule and allow for the formation of diborate groups [12]. Let us note that by including such refinements it is also possible to model the [B (4) ] max in the borophosphate system. To keep things simple we have focused on the essential idea and used the limit [B (4) ] max ≃ 0.43 here. Next we show how one can, based on the information on the NFU concentrations, successfully model longrange ionic transport properties. To this end we developed a model, which we call the Network Unit Trapping (NUT) model. It relies on the following idea: the nbOs create localized Coulomb traps for the mobile ions, while delocalized charges, as those of the B (4) units, give a partial Coulomb contribution to several neighboring ion sites. In this way the structural energy landscape for the ionic pathways is modified with the mixing concentration x and this effect can be conjectured to govern the change of the activation energy E a (x) for the long-range ionic transport. To test this model we randomly distribute the NFUs with their concentrations from Eqs. (3) on the sites of a simple cubic lattice. These sites are called NFU sites. The mobile ions are considered to perform a hopping motion between the centers of the lattice cells, which represent the ion sites. An NFU α with k α > 0 nbOs and charge (−z α e) adds a Coulomb contribution (−z α e/k α ) to k α randomly selected neighboring ion sites, as illustrated in Fig. 3. Note that this implies that the delocalization of electrons belonging to the double bond in the charged P (n) units is taken into account. For example, a P (2) unit on an NFU site i induces a charge −e/2 at two randomly selected neighboring ion sites. The delocalized charge of a B (4) unit is spread equally among the neighboring ion sites, which amounts to set k = 8 for this unit. The neutral B (3) and P (3) units give no Coulomb contribution. Finally, Gaussian fluctuations are added to the site energies in order to take into account the disorder in the glassy network [13]. In summary we can write for the energy of ion site i where the sum over j runs over all neighboring NFU sites of ion site i. The occupation number ξ α i,j is equal to one, if an NFU α on site j contributes a Coulomb contribution −z α e/k α to ion site i; otherwise it is equal to zero. The parameter E 0 > 0 sets the energy scale and the η i are independent Gaussian random variables with zero mean and standard deviation σ. Note that E 0 is irrelevant as long we are interested in relative changes of the activation energy with x. Hence σ is the only tunable parameter in the modeling. 
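The expression for the ion-site energies (Eq. (4) referred to above) is not reproduced in the extracted text. From the definitions given in the surrounding paragraph (occupation numbers \xi^{\alpha}_{i,j}, charge -z_\alpha e of unit \alpha spread over k_\alpha neighboring ion sites, energy scale E_0, and Gaussian disorder \eta_i), a plausible reconstruction is

\epsilon_i = -E_0 \sum_{j \in \mathrm{nn}(i)} \sum_{\alpha} \xi^{\alpha}_{i,j}\, \frac{z_\alpha}{k_\alpha} + \eta_i ,

where the sum over j runs over the NFU sites neighboring ion site i. The overall minus sign expresses that the negative unit charges act as Coulomb traps for the positively charged mobile ions; the sign convention and prefactor are assumptions of this reconstruction, and only relative changes of the activation energy with x enter the subsequent analysis.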
To determine the activation energy E a (x) we have chosen a lattice with 50 3 sites, occupied all NFU sites according to the occupation probabilities given by Eqs. (3), and the ion sites randomly with concentration y/(1−y). Then Kinetic Monte-Carlo simulations with periodic boundary conditions and Metropolis transition rates [14] were performed. After thermalization the time-dependent mean- square displacement R 2 (t) of the mobile ions and the dif- The diffusion coefficient is shown for σ = 0.25 and various mixing concentrations in an Arrhenius plot in Fig. 4a. From the slopes of the straight lines we calculated the activation energy E a (x), and the behavior of the normalized activation energy E a (x)/E a (0) is compared with the experimental results from [5] in Fig. 4b. The overall agreement between the theoretical (open symbols, solid line) and the experimental data (full symbols) is surprisingly good. Note that we needed to fit only one parameter σ to achieve this agreement. A significant difference between the theoretical and experimental curve can be seen for x → 1: while the theoretical E a (x) deceases monotonously with x, the experimental E a (x) fi-nally rises for the sodium-borophosphate glass (x = 1). Interestingly, this rise is reproduced by the NUT model (dashed line), if instead of the NFU concentrations predicted by Eqs. (3), the NFU concentrations measured in [5] are used. In view of the discrepancies at x = 1 between experiments discussed in connection with Fig. 2, this calls for a reevaluation of the activation energy in the sodium borate system. In summary we have presented a new approach to relate structural information to transport properties in ionconducting network glasses. This approach is based on a consideration of the properties of the different NFUs building the network structure with respect to total charge and charge delocalization. In addition we showed how MAS-NMR results for NFU concentrations can be understood from thermodynamic modeling. The potential of our new approach is manifold, since one can apply it quite generally to other network glasses with different compositions. One immediate application, for example, could be the investigation of glass series with varying modifier content. It is known that the activation energy often shows a logarithmic decrease with the concentration of mobile ions [15] and it would be important to see whether this behavior can be captured by the NUT model. We would like to thank H. Eckert and S. W. Martin for very valuable discussions and gratefully acknowledge financial support of this work by the Deutsche Forschungsgemeinschaft in the Materials World Network (DFG Grant number MA 1636/3-1). * Electronic address: pmaass@uos.de; URL: http://www.statphys.uni-osnabrueck.de
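A minimal sketch of the type of lattice Monte Carlo simulation described above is given below: ions hop between sites of a disordered energy landscape of Coulomb-like traps with Metropolis acceptance, and a diffusion estimate is obtained from the mean-square displacement. The lattice size, temperature, ion number, trap-depth distribution, and disorder strength are illustrative assumptions, and the trap construction is simplified relative to the explicit placement of NFU charge contributions and the cell-center ion sites used in the paper.

import numpy as np

rng = np.random.default_rng(0)

# Illustrative parameters (assumptions, not the values used in the paper)
L = 16           # edge length of the simple cubic lattice
kT = 0.25        # thermal energy in units of E0
sigma = 0.25     # standard deviation of the Gaussian site disorder (units of E0)
n_ions = 200     # number of mobile ions
n_steps = 20000  # total hop attempts

# Simplified energy landscape: random Coulomb-like trap depths plus Gaussian disorder,
# standing in for the summed -E0 * z_alpha/k_alpha contributions of neighboring units.
trap_depths = rng.choice([0.0, 0.25, 0.5, 1.0], size=(L, L, L), p=[0.4, 0.2, 0.2, 0.2])
energy = -trap_depths + rng.normal(0.0, sigma, size=(L, L, L))

# Place ions on distinct lattice sites
flat = rng.choice(L**3, size=n_ions, replace=False)
pos = np.column_stack(np.unravel_index(flat, (L, L, L)))
occupied = np.zeros((L, L, L), dtype=bool)
occupied[tuple(pos.T)] = True
unwrapped = pos.astype(float)
start = unwrapped.copy()

moves = np.array([[1, 0, 0], [-1, 0, 0], [0, 1, 0], [0, -1, 0], [0, 0, 1], [0, 0, -1]])

for _ in range(n_steps):
    i = rng.integers(n_ions)
    d = moves[rng.integers(6)]
    new = (pos[i] + d) % L
    if occupied[tuple(new)]:
        continue  # site exclusion: target site already occupied
    dE = energy[tuple(new)] - energy[tuple(pos[i])]
    if dE <= 0 or rng.random() < np.exp(-dE / kT):  # Metropolis acceptance
        occupied[tuple(pos[i])] = False
        occupied[tuple(new)] = True
        pos[i] = new
        unwrapped[i] += d  # unwrapped coordinates for the mean-square displacement

msd = np.mean(np.sum((unwrapped - start) ** 2, axis=1))
sweeps = n_steps / n_ions  # attempted hops per ion as a crude time unit
print(f"mean-square displacement = {msd:.2f}, D (lattice units) ~ {msd / (6 * sweeps):.4f}")

Repeating such runs at several temperatures and extracting the slope of ln D versus 1/kT would give an activation energy in the manner of the Arrhenius analysis described above.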
2010-09-10T09:21:15.000Z
2010-09-10T00:00:00.000
{ "year": 2010, "sha1": "e043c4e5cfd5956d27004391c2ae8938a7689a53", "oa_license": null, "oa_url": null, "oa_status": null, "pdf_src": "Arxiv", "pdf_hash": "e043c4e5cfd5956d27004391c2ae8938a7689a53", "s2fieldsofstudy": [ "Materials Science" ], "extfieldsofstudy": [ "Physics" ] }
216194513
pes2o/s2orc
v3-fos-license
Angiographic Findings and Outcomes of Bronchial Artery Embolization in Patients with Pulmonary Tuberculosis Objective: We aimed to evaluate the angiographic findings and outcomes of bronchial artery embolization in tuberculosis patients and to compare them with those of non-tuberculosis patients. Materials and Methods: Patients who underwent bronchial artery embolization in a single interventional radiology department with hemoptysis were reviewed. A total of 89 patients (66 males and 23 females; mean age 52.71±15.37) were incorporated in the study. The patients were divided into two groups: tuberculosis group (n=36) and non-tuberculosis group (16 malignancy, 22 bronchiectasis, 6 pulmonary infection, 5 chronic obstructive pulmonary disease, 4 idiopathic; n=53). Angiography and embolization procedure were performed by interventional radiologists with 5, 10, and 20 years of experience. Angiographic findings were classified as tortuosity, hypertrophy, hypervascularity, aneurysm, bronchopulmonary shunt, extravasation, and normal bronchial artery. Chi-square test was used to compare angiographic findings between tuberculosis and non-tuberculosis patient groups. Results: Bronchopulmonary shunt was found to be significantly higher in the tuberculosis group as compared to that in the non-tuberculosis group (p=0.002). Neither of the groups showed a statistically significant difference with respect to recurrence (p=0.436). Conclusion: Bronchial artery embolization is a useful and effective treatment method of hemoptysis in tuberculosis. Evaluation of bronchopulmonary shunts in patients with tuberculosis is critical for the reduction of catastrophic complications. Introduction Hemoptysis is a common and sometimes life-threatening symptom that may have many underlying etiologies. It is defined as the expectoration of blood from the respiratory system [1]. If hemoptysis is massive (>300 mL per day) and untreated, the mortality rates may rise to 50% [2]. Management of hemoptysis includes conservative treatment, surgery, and bronchial artery embolization (BAE). Patients admitted with massive hemoptysis are usually in poor condition medically and cannot tolerate surgery. In addition, serious complications such as asphyxia, bronchopleural fistula, and respiratory failure may occur in patients who undergo surgery [1]. Therefore, BAE has become the primary treatment method of massive or intermittent-moderate (>100 ml per day) hemoptysis [3,4]. The causes of hemoptysis vary significantly between developed and non-developed countries. In non-developed countries, tuberculosis is the most frequent cause of massive hemoptysis [5]. Since Remy et al. [6] first described BAE for the management of hemoptysis, several studies have declared the efficacy of BAE in tuberculosis patients [7][8][9][10][11]. These studies have investigated the outcomes of BAE in tuberculosis patients and the risk factors that affect recurrence. However, angiographic findings and their influence on the embolization procedure are not reported in detail. The aims of this study were to evaluate angiographic findings during BAE in tuberculosis patients and to compare the findings with those of non-tuberculosis patients. We also tried to reveal the effect of angiographic pattern on the success and technique of BAE. Materials and Methods Local ethics committee approval was obtained for this retrospective study. 
One hundred and five patients who underwent BAE between August 2015 and July 2018 in a single interventional radiology department with moderate (>100 mL per day) or severe (>300 mL per day) hemoptysis refractory for medical and bronchoscopic treatment were reviewed. In 16 patients, no pathologic artery was found during the angiography, so they were excluded from the study. A total of 89 patients (66 males and 23 females; mean age 52.71±15.37) were incorporated in the study. The patients were divided into two groups according to their diagnosis: tuberculosis group (n=36) and nontuberculosis group (16 malignancy, 22 bronchiectasis, 6 pulmonary infection, 5 chronic obstructive pulmonary disease, 4 idiopathic; n=53). The diagnosis of tuberculosis was made by experienced pulmonologists after reviewing medical history, laboratory findings, acid-fast bacilli (AFB) smear, and a radiologic examination. Tuberculosis patients were subdivided into active and latent groups. Active disease was classified as primary and post primary (reactivation) tuberculosis [12]. Patients, who had not been previously exposed to Mycobacterium tuberculosis, with clinical (cough, hemoptysis, fatigue, malaise, weight loss, fever, night sweats), radiological (lymphadenopathy, consolidation, pleural effusion, miliary nodules), and laboratory (AFB-positivity) findings were considered as primary tuberculosis patients [13]. Patients who met the criteria for an active clinical case and AFB-positivity, with accompanying radiologic findings (consolidations predominant in the apical and upper lung zones, nodules, cavitations) were considered as reactivation tuberculosis patients [14]. Patients with AFB-negativity but having radiologic or clinical evidence of former tuberculosis were classified as latent tuberculosis patients [12]. Multi-drug resistant tuberculosis was defined as the resistance to isoniazid and rifampin therapy in culture studies [13]. Written informed consent was obtained from all patients prior to embolization. Angiography and embolization procedures were performed by interventional radiologists with 5, 10, and 20 years of experience with a classical method that has been previously described [15]. Before the procedure, a computed tomography (CT) of the thorax and a bronchoscopy were performed on all patients to find the pathologic lesion and artery. Common femoral artery was chosen for access under ultrasound guidance. The decision for embolization and selection of embolic agents were made by the operators during the procedure. After inserting a 5-French sheath into the common femoral artery, a thoracic aortogram was taken with a 5-French pigtail catheter to distinguish any abnormal sites and assess the origin of the bronchial and non-bronchial systemic arteries. In all patients, internal thoracic, subclavian, and intercostal arteriograms in addition to bronchial arteriograms were performed to observe any abnormal contrast filling. Simmons 1 and Cobra 2 catheters were used to find the origin of the pathologic arteries. Hand injection was used in selective bronchial or non-bronchial angiograms. After observing an abnormal angiographic finding, a microcatheter (Renegade microcatheter; Boston Scientific, Natick, Massachusetts) was advanced superselectively to the pathologic artery. Embolization was done after obtaining a superselective angiogram and after evaluating the angiographic findings. Microparticles ≥500 µm were used if there was a bronchopulmonary shunt. 
In other cases, embolization started with 350 µm sized microparticles to achieve complete embolization of the distal vascular territory. Microspheres (Embozene; Boston Scientific, Cork, Ireland) sized between 350-700 µm diameter were used as embolic agents. Embolization was ended when there was significant contrast material stasis and no antegrade flow. Coils were not used to avoid any access difficulties in the case of possible recurrence. We classified angiographic findings as tortuosity, hypertrophy, hypervascularity, aneurysm, bronchopulmonary shunt, extravasation, and normal bronchial artery (Figure 1) [16]. Tortuosity referred to more than two turns in opposite directions of the pathologic artery. Hypertrophy meant that the diameter of the anomalous artery is greater than 3 mm. Hypervascularity meant increased contrast filling with parenchymal blush and staining. Aneurysm referred to a localized dilation of the diseased artery. Bronchopulmonary shunt was described as contrast material flowing from systemic circulation into the pulmonary circulation. When no pathologic finding was seen, the artery was considered as "normal". Angiographic findings of the diseased arteries were evaluated by two interventional radiologists with 5 and 10 years of experience. All patient data were hidden during the analysis. In cases of disagreement between the two interventional radiologists, the images were reevaluated. Furthermore, a third interventional radiologist (U.B.) with 20 years of experience reanalyzed the images, and the final decision was reached by consensus. Technical success, clinical success, recurrence rates, and minor and major complication rates were considered during the outcome analysis. Technical success was described as rapid interruption of blood flow from the diseased artery [17]. Clinical success was defined as the total cessation of hemoptysis. Partial recovery that did not need any medication within a minimum of 30 days was also referred to as clinical success [18]. The requirement of medical, surgical, or angiographic treatment for hemoptysis after embolization was regarded as recurrence. Follow-up information was obtained from inpatient and outpatient records retrospectively. Extended hospitalization, irreversible sequelae, or death were regarded as major complications. Minor complications such as hematoma at the access site were conditions that did not result in sequelae and needed only minimal care and observation [19].
Main Points:
• Bronchial artery embolization is a safe and effective treatment method for hemoptysis in patients with tuberculosis.
• Bronchopulmonary shunt was significantly higher in tuberculosis patients compared to the non-tuberculosis group.
• There was no difference in the angiographic findings between reactivation and latent tuberculosis.
The chi-square test or Fisher' s Exact test was used to compare angiographic findings between the tuberculosis and non-tuberculosis groups, between reactivation and latent groups of patients with tuberculosis, and to assess the relationship between recurrence rates and angiographic findings. P-values of less than 0.05 were considered statistically significant. Results Among 89 patients, 36 had tuberculosis (16 latent, 20 active) and 53 did not have tuberculosis. The tuberculosis group consisted of 34 men (94.4%) and 2 women (5.6%) with a mean age of 49.57±17.75. There were 32 males (60.4%) and 21 (39.6%) females in the non-tuberculosis group (mean age 54.57±13.45). There was a significant difference in the gender of patients between the two groups (p<0.001). All active tuberculosis patients consisted of reactivation tuberculosis patients. Among the reactivation group, two of the subjects were found to have multi-drug resistant tuberculosis. A total of 98 embolization procedures were performed in 89 patients. The angiographic findings of all patients are presented in Table 1. Tortuosity and hypervascularity were the most common findings in the tuberculosis (both reactivation and latent) and non-tuberculosis groups. Among all patients, extravasation was the rarest observation, and it was observed in only one case with a reactivation tuberculosis patient ( Figure 2). Aneurysm was the second least common observation in both the tuberculosis and non-tuberculosis groups. No significant correlation was observed between age or gender and any of the angiographic findings. Angiographic findings in tuberculosis and non-tuberculosis groups are summarized and compared in Table 1. No significant relationship was found between age or gender and angiographic findings in tuberculosis patients. Bronchopulmonary shunt was found to be significantly higher in tuberculosis patients compared to the non-tuberculosis group (p=0.002); 19 of 36 (52.8%) patients with tuberculosis had bronchopulmonary shunt; however, in the nontuberculosis group, only 12 of 53 (22.6%) had a shunt. Angiographic findings of tuberculosis patients and comparisons are summarized in Table 2. No significant differences in angiographic findings were found between the tuberculosis reactivation and tuberculosis latent groups (p>0.05). The number of embolized arteries Table 3. In the tuberculosis group, 68.5% of the arterial abnormality was observed in the bronchial system. In the non-tuberculosis group, 76.1% of the pathologic arteries originated from the bronchial arteries. The mean follow-up period in this study was 17.09 months ±9. 16 Recurrence rates and the degree of hemoptysis are outlined in Table 4. Recurrence occurred in 12 of the 89 patients (13.4%). In the tuberculosis group, 6 patients (3 latent, 3 reactivation) required reembolization; and 6 patients in the non-tuberculosis group also underwent reembolization due to recurrent hemoptysis. No relationship was found between the severity of hemoptysis and recurrence (p>0.05). None of the groups showed a statistically significant difference with respect to recurrence (p=0.436). Also, neither reactivation nor latent groups were associated with significantly higher recurrence rates. Among the tuberculosis group, reembolization was performed after 8 months for one patient, after 4 months for another, after 3 months for the third, and within the first month for three patients. Clinical failure was seen in three patients with tuberculosis within one month. 
Of these three patients, two underwent reembolization due to recanalization of the same vessels. No abnormality was found during angiography in the last patient, thus medical treatment was given. Among the three patients who developed recurrence after one month had passed, recanalization with collaterals was observed in one ( Figure 3). Pathologic arteries different from the ones observed in the first procedure were seen and embolized in the remaining two patients. There was no significant relationship between angiographic findings and recurrence in tuberculosis patients (p>0.05). Among non-tuberculosis recurrence group (n=6), reembolization was performed after 16 days (chronic obstructive pulmonary disease), 8 months (malignancy), 13 months (bronchiectasis), 18 months (bronchiectasis), 22 months (malignancy), and 31 months (bronchiectasis). In the clinical failure case within the first month, recanalization of the same vessel was observed during the procedure. Different arteries were the cause of hemoptysis in the remaining five cases. Discussion Our study demonstrated that hypervascularity and tortuosity were the most common angiographic findings in tuberculosis patients. Bronchopulmonary shunt was significantly higher in the tuberculosis group compared to that in the non-tuberculosis group. Continuous airway inflammation, bacterial superinfection, cavities, scar lesions, bronchiectasis, and erosion of the adjacent vessels may be possible reasons of vascular rupture in tuberculosis [13]. Chronic inflammation induces localized hypoxia and subsequent reduction in the pulmonary flow. Various angiogenic growth factors occur to supply adequate perfusion to the lungs. Neovascularization aggravates the fragility of vessels and eventually hemorrhaging occurs [20]. As a consensus, tortuosity, hypervascularity, hypertrophy, bronchopulmonary shunt, bronchial artery aneurysm, and active contrast extravasation are the main pathologic signs that need to be carefully examined during embolization in patients with pulmonary tuberculosis. In patients with tuberculosis, tortuosity and hypervascularity were the most prevalent results similar to those obtained by Dabo et al. and Anuradha et al. [10,21]. Contrast extravasation, which is a direct sign of bleeding, is a rare angiographic finding in BAE [5]. In parallel with this condition, we only observed contrast extravasation in one patient during the procedure. Bronchopulmonary shunt seemed to be a common angiographic finding in tuberculosis similar to what was observed by Shin et al. [9]. There was no statistically significant difference in the angiographic findings between the tuberculosis and non-tuberculosis groups, except for bronchopulmonary shunting. Bronchopulmonary shunts are connections between bronchial and pulmonary arteries or bronchial arteries and pulmonary veins. Vascular proliferation, remodeling, and collaterals can be seen between bronchial and non-bronchial systemic arteries occasionally. However, bronchopulmonary shunts may occur only in some conditions such as chronic inflammatory processes, malignancies, or decreases in the pulmonary flow (chronic thromboembolic pulmonary hypertension and congenital pulmonary stenosis) [22,23]. As a consequence of shunting, pressure and circulation in the affected area augments and leads to hemoptysis [24]. The detection of bronchopulmonary shunts is vital to prevent the passage of the embolic agent into the pulmonary system in the presence of a bronchopulmonary shunt [25]. 
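For readers who wish to reproduce the headline group comparison reported above (19 of 36 tuberculosis patients versus 12 of 53 non-tuberculosis patients with a bronchopulmonary shunt), a minimal Python sketch of the same 2×2 chi-square comparison is given below. The study itself used SPSS, so this is only an illustration, and the exact p-value depends on whether a continuity correction is applied.

```python
from scipy.stats import chi2_contingency

# 2x2 table: rows = tuberculosis / non-tuberculosis, columns = shunt present / absent
table = [[19, 36 - 19],
         [12, 53 - 12]]

chi2, p, dof, expected = chi2_contingency(table)               # Yates-corrected by default
chi2_u, p_u, _, _ = chi2_contingency(table, correction=False)  # uncorrected Pearson chi-square
print(f"with correction:    chi2={chi2:.2f}, p={p:.4f}")
print(f"without correction: chi2={chi2_u:.2f}, p={p_u:.4f}")   # same order as the reported p=0.002
```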
To avoid either pulmonary or systemic infarction, >325 µm sized particles are suggested for use [5]. During BAE in patients with tuberculosis, the presence of a bronchopulmonary shunt should be scrutinized to avoid severe complications. Furthermore, if a bronchopulmonary shunt is seen in an urgent BAE procedure with massive hemoptysis and unknown diagnosis, tuberculosis may be considered as the underlying cause. The operator may suggest the clinician search for tuberculosis in such conditions. Anuradha et al. [10] reported that bronchopulmonary shunts were common in patients admitted with re-bleeding. No statistical significance was observed between bronchopulmonary shunt and recurrence in our study. However, manifested bronchopulmonary shunts were detected in two clinical failure cases in the tuberculosis group. In the tuberculosis group, 32% of the bleeding source was the non-bronchial system. Intercostal (19.3%) and subclavian (7%) arteries were the leading origins in the non-bronchial system. Ramakantan et al. [7] reported similar rates in intercostal arteries. We found higher rates of pathologic subclavian and internal thoracic arteries with rates of 7.0% and 5.2%, respectively. According to our results, searching for potential non-bronchial systemic vessels is critical for appropriate treatment as there is a high rate of non-bronchial origin. Additionally, it was reported that embolization of nonbronchial systemic collaterals decreased the recurrence rates [10]. We showed that BAE was a secure, efficient, and functional method for controlling hemoptysis in both tuberculosis or other underlying diseases with a clinical success rate of 91.6% and 98.1%, respectively. These rates were similar to those of Lee et al. and Swanson et al. [26,27]. The clinical success rates ranged from 70-99% in the literature [28]. Technical success rates of BAE had a wide range between 81-100% [21,29,30]. Clinical success rates were higher in the non-tuberculosis group than the tuberculosis group in the present study. This may be due to inadequate embolization of collaterals, rapid recanalization of the embolized artery, or conservatively unstoppable progression of the disease. Although angiographic findings were not associated with clinical success or recurrence, they manipulated the technique of embolization. Greater than 350 µm sized particles were used when bronchopulmonary shunts were observed. The main reasons of recurrence are speculated to be incomplete embolization, recanalization of the embolized arteries, development of new collaterals, or progression of the underlying disease [9,26,29]. The studies investigating recurrence associated factors in tuberculosis after BAE had conflicting results. Hwang et al. [31] showed a statistical difference in bronchopulmonary shunt between the recurrence and non-recurrence groups. Contrary to this, Shin et al. [9] assessed the relationship between angiographic findings and recurrence in tuberculosis patients but found no statistical significance. Furthermore, Anuradha et al. [10] did not notice any significant difference between recurrence and any angiographic or clinical feature in tuberculosis. Similarly, no significant relationship was found between angiographic findings and recurrence in the present study. Some studies showed that the recurrence rate was significantly higher in reactivation tuberculosis [9,26]. However, we did not observe any difference between the reactivation and latent groups. 
Major complications usually occur because of unintentional embolization of spinal arteries, manipulation of subclavian arteries, or transition of embolic agents through bronchopulmonary shunts. Agmy et al. reported monoparesis in 2 out of 348 patients [32]. Fruchter et al. [33] declared transient neurological complication rates as 4.4%. No neurologic symptoms occurred after embolization in our cases. The incidence of minor complications such as transient chest pain, arterial vasospasm, and dissection were compatible with the findings in the literature. Although groin hematoma and femoral artery pseudoaneurysms were also reported after BAE procedures [34,35], we did not observe these problems. It may be due to the ultrasound-guided puncture of the common femoral artery. The study had some limitations. The retrospective study design was a major limitation of the present study. Additionally, because of the relatively small sample size of the tuberculosis group, the number of patients with tuberculosis who had re-bleeding after embolization was too small. Further prospective studies with a larger number of patients are advisable. In conclusion, BAE is a useful and effective treatment method for moderate and massive hemoptysis in tuberculosis. The occurence of bronchopulmonary shunts seemed to be significantly higher in patients with tuberculosis. A detailed investigation of bronchopulmonary shunts in patients with tuberculosis during BAE is critical for both effective treatment and reduction of catastrophic complications. Informed Consent: Written Informed consent was obtained from patients who participated in this study.
2020-04-02T09:31:30.930Z
2020-06-01T00:00:00.000
{ "year": 2020, "sha1": "96590f65c66edcfacb108f85823dde41e7050098", "oa_license": "CCBY", "oa_url": "https://doi.org/10.5152/eurasianjmed.2020.19221", "oa_status": "GOLD", "pdf_src": "MergedPDFExtraction", "pdf_hash": "70226fbe00594194a994690e2ef284a29324c0ea", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
86696430
pes2o/s2orc
v3-fos-license
Evaluation of clinical results and complications of internal fixation of intertrochanteric femur fracture with proximal femoral nail antirotation Trochanteric fractures are one of the most common injuries sustained predominantly in patients over sixty years of age. They are 3 to 4 times more common in women who are osteoporotic; trivial fall being the most common mechanism of injury. 1 The greatest problems for the orthopaedic surgeon treating this fracture are instability and the complications of fixation that result from instability. The type of implant used has an important influence on complications of fixation. Sliding devices like the dynamic hip screw have been extensively used for fixation. However, if the patient bears weight early, especially in comminuted fractures, these devices can penetrate the head or neck, bend, or separate from the shaft. 2 INTRODUCTION Trochanteric fractures are one of the most common injuries sustained predominantly in patients over sixty years of age. They are 3 to 4 times more common in women who are osteoporotic; trivial fall being the most common mechanism of injury. 1 The greatest problems for the orthopaedic surgeon treating this fracture are instability and the complications of fixation that result from instability. The type of implant used has an important influence on complications of fixation. Sliding devices like the dynamic hip screw have been extensively used for fixation. However, if the patient bears weight early, especially in comminuted fractures, these devices can penetrate the head or neck, bend, or separate from the shaft. 2 Intramedullary devices like the proximal femoral nail have been reported to have an advantage in such fractures as their placement allowed the implant to lie closer to the mechanical axis of the extremity, thereby decreases the lever arm and bending moment on the implant. Intramedullary nail carry an advantage over other loadsharing devices by not having to depend on plate fixation with bone screws purchasing a compromised lateral cortex. 3,4 More recently, a new generation of proximal femoral nails with helical blades had been developed, featuring a large contact area and compression between the blade and the cancellous bone promoting better stability against varus collapse, especially in patients with osteoporotic bones. [5][6][7][8] The aim of the present study was to evaluate the clinical results and complications of internal fixation of intertrochanteric fractures with the proximal femoral nail antirotation. METHODS This study was conducted on 30 patients presented with intertrochanteric femur fracture between December 2014 to November 2016 in department of orthopaedics, Uttar Pradesh university of medical sciences (UPUMS), Saifai. The present study was conducted after obtaining the permission from ethical committee of the institute. All patients with inter-trochanteric femur fractures and who were able to walk prior to the fracture were included in the study. However, patients with pathological fracture, active infection, unstable medical illness and nontraumatic disorder were excluded from the study. Patients with comorbid conditions like diabetes, peripheral vascular disease or chronic osteomyelitis were excluded in this study. The patients were evaluated as per the history; mode of injury, necessary radiological investigations and haematology profile was done on admission. The 30 patients with intertrochanteric fractures were fixed with proximal femoral nail antirotation. 
The length of the incision, duration of surgery, blood loss and fluoroscopy time was recorded intraoperatively. The immediate postoperative X-rays were evaluated. Patients were mobilized non weight bearing as soon as the pain or general condition permitted. Weight bearing was commenced depending upon the stability of the fracture and adequacy of fixation, delaying it for patients with unstable or inadequate fixation. All the cases were again evaluated through clinical and radiological methods at 6 weeks, 12 weeks, 6 months and 1 year for any morbidity and mortality. Radiographs of affected hip were obtained in A.P and lateral planes at each follow up visit and any changes in position of implant and extent of fracture united were noted. Fractures were judged to be radio graphically if bridging callus was evident on 3-9 cortices as noted on 2 views. Functional outcome was assessed using the Harris hip score. RESULTS The present study consists of 30 cases of intertrochanteric femur fractures. All the cases were fixed using proximal femoral nail antirotaion. The study period was from December 2014 to November 2016. The age of the patients ranged from 54 to 86 with fracture most common in the 5 th and 6 th decade and on average age of 69 years. Out of 30 patients, 17 (57%) patients were females and 13 (43%) patients were males showing female preponderance because of osteoporosis being a common problem among postmenopausal women. In our study, 22 (73%) patients sustained injury following trivial fall on ground, 5 (17%) sustained injury due to fall from height and 3 (10%) met road traffic accident. The mean time from injury to surgery time was 5.3 days, ranging from 2 to 10 days. Out of 30 cases treated with PFNA 3 (10%) took <49 minutes, 5(17%) took 50-59 minutes, 10 (33%) took 60-69 minutes, 8(27%) took 70-79 minutes and 4 (13%) took >80 minutes. The average time duration was 63 minutes with ranging from 45-85 minutes. The average blood loss was 96 ml, ranged from 60 to 180 ml. Out of 30 cases, average post-operative hospital stay was of 5.6 days, ranged from 4 to 14 days. Out of 30 cases operated, 10 cases suffered shortening in affected side averaged about 0.22 cm ranged from 0 to 1 cm. None of the patient had deep infection or failure or breakage due to implant fatigue. Secondary surgery was not required in any patient. DISCUSSION The incidence of unstable intertrochanteric fracture is increasing and this trend is likely to continue. These fractures are challenging for an average orthopaedic surgeon. Treatment modalities include osteosynthesis with dynamic hip screws and cephalomedullary nails and in selected cases, arthroplasty. However, the choice of implant for unstable intertrochanteric fractures is still debatable. Fixation of unstable intertrochanteric fractures with dynamic hip screw is associated with excessive displacement of the fracture, leading to meadilization of the femoral shaft and lateralization of the greater trochanter resulting in shortening of the limb and thus the lever arm of the abductor mechanism of the hip, leading to abnormal hip biomechanics. 
2 The first generation intramedullary nails had a shorter lever arm, to decrease tensile strain on the implant, the lack of requirement of an intact lateral cortex, the improved load transfer (as a result of medial location), the potential for closed fracture reductions, percutaneous insertion, shorter operative time, and reduced blood loss are theoretical advantage of intramedullary devices compared with compression hip screw devices. [3][4][5] The first generation nail for treatment of intertrochanteric fracture, the gamma nail, was associated with a relatively high incidence of peri-implant fracture of 2.2% to 1.7% approximately 4 times greater than seen with compression hip screws. 5,9 Nail geometry and size were contributing factors. A large (10 degree) valgus bend, long (200 mm) length without an anterior bow, and relative stiffness caused by large proximal (17 mm) and distal (12-16 mm) diameters all provided for increased stress concentration at the tip of the nail. 10 The rate of cut out of these first generation nails, 2% to 4.3% was no better than that seen with compression hip screw devices, 2.5%. 6,9 Changes to implant geometry, a reduced valgus bend to 4 degree, a decrease in the distal diameter to 11 mm and shortening of the length to 180 mm decreased the stress concentration at the tip of second generation gamma nails. 10 The rate of peri implant fracture reduced with these second-generation devices to between 0% and 4.5%. 9 The third generation nails such as the proximal femoral nail (PFN), which incorporates multiple screws into the femoral head, have been introduced. Multiple points of fixation theoretically provide better rotational control of unstable fractures compared with a single lag screw. The theoretical concern about smaller diameter screws was, screw cut-out directly related to their decreased diameter that could be exacerbated by screw bending. Fracture of the smaller superior screws has been seen, especially when it is placed near the subchondral bone of the femoral head. In this position, it encounters large varus stress that are not shared by the large inferior screws. 10 Unstable proximal femoral fractures were treated successfully with proximal femoral nail antirotation (PFNA). Insertion of the blade compacts the cancellous bone. These characteristics provide optimal anchoring and stability when the implant is inserted into osteoporotic bone and had been biomechanically proven to retard rotation and varus collapse. The inserted PFNA blades achieve an excellent fit through bone compaction and require less bone removal compared to a screw. 5 PFNA are now favoured in west and there are multiple studies coming from that region to support this. [5][6][7][8] Very few studies exist on this subject from Indian population. Sahin et al in their study conducted on 45 patients who underwent osteosynthesis using the PFNA for unstable intertrochanteric femoral fractures found high union rate, early post-operative mobilization, and shorter operation time. 7 Zeng et al in their study of thirteen RCT involving 958 cases found that PFNA for intertrochanteric fractures is superior to dynamic hip screw in regards to the mean duration of surgery, mean intra-operative blood loss, the rate of post-operative complication, and the rate of postoperative fixation failure. 14 Kumar et al in their study conducted on 42 patients of unstable intertrochanteric fractures fixed with PFNA presents with less operative time and low complication rate. 
However, proper operative technique is important for achieving stability and to avoid major complications. 15 Sadic et al retrospectively analysed 113 consecutive patients with intertrochanteric fractures treated with proximal femoral nail antirotation, suggested that PFNA offers advantages, as it can be easily inserted and provides stable fixation, which allows early mobilization of the patient. 16 Therefore, early operation, good reposition, strict respect of technical steps and stable fixation will result in good functional recovery. In our study patients treated with PFNA had a significantly lower pain score at the sixth month of follow up. No cases of implant breakage and fatigue were seen during the follow up period. The helical blade effectively decreased the incidence of cut out. Due to advantages of high union rate, early post-operative mobilization and short operation time, PFNA osteosynthesis is the method of choice for surgical treatment of intertrochanteric femoral fractures. The main limitation of our series was small number of patients, lack of control group, and absence of data relating the functional outcome with the bone quality of the patient. CONCLUSION In our study of unstable intertrochanteric fractures treated with PFNA, we found good outcome with very few complication rate and high union rate with short operative time and early post-operative mobilization.
2019-03-28T13:33:31.234Z
2019-02-23T00:00:00.000
{ "year": 2019, "sha1": "618daee139b626a05be3ea5d66ff8c1b2b579cb4", "oa_license": null, "oa_url": "https://www.ijoro.org/index.php/ijoro/article/download/1014/562", "oa_status": "GOLD", "pdf_src": "MergedPDFExtraction", "pdf_hash": "22ef774a15b57cd718674c1413226da59dc69131", "s2fieldsofstudy": [ "Medicine", "Engineering" ], "extfieldsofstudy": [ "Medicine" ] }
224804635
pes2o/s2orc
v3-fos-license
Series solution of unsteady MHD oblique stagnation point flow of copper-water nanofluid flow towards Riga plate This article concentrates on the non-Newtonian fluid flow over the oscillating surface. The rate of heat conduction of the fluid is enhanced by taking nanofluids in it. The two-phase nanofluid flow model is revealed. The flow is explored in the existence of oblique stagnation point flow. The analysis is incorporated for the Riga plate in the existence of an oblique stagnation point. Riga plate is well-known as an electromagnetic actuator contains permanent magnets and a spanwise aligned array of alternating electrodes attached on a plane surface. The dimensional equations satisfying the stated assumptions of the fluid flow are presented utilizing the Navier-Stokes equation. Fourier law is incorporated in the evaluation of heat flux. The analysis is examined in the fixed frame of reference. The obtained partial differential equation will be critically examined suitable similarity transformation will be chosen to convert these flow developed equations into higher non-linear ordinary differential equations (ODE) and these equations of motion are tackled by mathematical techniques like bvp4c method in Maple. From this study, it is determined that due to the effect of the Riga parameter the velocity field enhances, and also due to the effects of Casson parameter the velocity field increases. The effect of immerging of parameters is mentioned by tables and graphs. Moreover, the flow behavior is also confirmed by streamlines. The Casson fluid parameter makes to get faster the fluid velocity. The system heats up by the impact of Joule heating and dissipation. Introduction The analysis of nanofluids is getting to obtain the attention of investigators due to superior thermal conductivity and wide applications in the engineering and industrial purposes like microscale and macro heat transfer, transportation and biomedicine, nuclear reactors, etc., given by Choi et al. [1]. Nanofluids have filled the usual convection of a two-dimensional cavity. Hoghoughi, G et al. [2] explored various models that are compared by the physical properties of nanofluids. The nano liquid discoveries beneficial applications in manufacturing, cooling of electronic devices, heat exchangers, transportation, paints, biomedicine. Yu, Q., et al. [3] explored the nanofluids are predictable to be used in current engineering problems involving into polymerase chain reaction efficiency, solar collectors, radiators, and electronic cooling system. A vital source of thermal characteristics and natural convection fluid flow is associated with buoyancy driven flows contained by an enclosure explored by Purusothaman et al. [4]. A similar investigation has been conducted by many researchers [5,6,7]. In current years, non-Newtonian fluids are often encountered in numerous industrial and physical progressions because the analyses of non-Newtonian fluids have been inspired expressively. Frequent fluids treated as non-Newtonian fluids including drilling mud, clay coating, paints, fruit squash, shampoos, polymer fluxes, blood, and certain oils, etc. Shen [8] demonstrated that as compared to Newtonian fluid flow, the constitutive performance of non-Newtonian fluid flow is normally more complex. According to their physical behavior, the non-Newtonian fluids are divided into many branches one of the important branches is Casson fluid or shear-thinning fluids. 
For this type of fluid, the apparent viscosity shows a declining behavior against the applied shear stress. A common and vital illustration of such a fluid is human blood, as given by Khan et al. [9]. In the recent past, many investigators have contributed to the research of non-Newtonian fluids [7,10]. The fluid motion that occurs on all compact bodies moving in a fluid is known as stagnation point flow. The stagnation region encounters the maximum rates of mass deposition, the maximum heat transfer, and the highest pressure, as explored by Heimans et al. [11]. Stagnation point flows with heat transfer aspects are fairly apparent in crystal growing, paper manufacture, revolving fibers, melt spinning processes, and continuous molding, as given by Mehmood et al. [12]. In the prediction of skin friction, heat transfer, or mass transfer close to stagnation areas of bodies in high-speed motion, radial diffusers, drag reduction, the design of thrust chambers, transpiration cooling, and thermal oil recovery, the stagnation point flow with several physical properties has great physical significance, as given by S. Nadeem et al. [13]. Such topics have been studied by researchers [14,15,16]. The magnetic field holds a significant place in fluid mechanics due to the sophisticated improvement in the thermophysical properties of a fluid. Astrology and fields like earth science use these fluids, which proved to be poor electric conductors. Therefore, an external agent is suggested to improve conductivity via the heat transfer process and related thermophysical characteristics. A magnetic bar can be that external agent, or permanently fixed magnets with alternating electrodes. Such an experiment was first carried out by Gailitis and Lielausis et al. [17]. Ahmad et al. [18] reported laminar fluid flow over the Riga plate, supported theoretically by the finding that this efficient agent reduces skin friction. Some fruitful research has been conducted on this topic [19,20,21,22,23,24,25,26,27]. The current study uses the fixed frame of reference and a non-Newtonian fluid with the two-phase flow model on the oscillating plate. Heat transfer is shown for Cu nanoparticles. It is perceived that the occurrence of Cu nanoparticles reduces the skin friction coefficient whereas heat transfer develops. The Casson fluid parameter makes the fluid velocity faster. The partial differential equations for oscillating 2D flows are simplified in a fixed frame by considering the supposed form of solutions. The coupled differential equations are tackled by a mathematical technique like the bvp4c method in Maple. The influence of several parameters such as the Casson parameter, Riga parameter, and Hartmann number on skin friction, temperature, and velocity profile is examined. Moreover, flow behavior is also demonstrated by streamlines. Implementing Eq. (6) on Eqs. (3), (4), and (5) and eliminating the pressure using p_xy = p_yx yields Eqs. (7) and (8).

3. Fixed frame of reference for the phase flow model

Conferring to [16], we consider that k is a parameter. We examine that the plate is oscillating at y = 0, and the fluid occupies the upper half plane y > 0. The flow function is given by [16], ψ = (1/2)γy^2 + xy. The boundary conditions are given in (10). Since γ is dimensionless, implementing (9) on (7) we obtain Eqs. (11) and (12). Integrating Eqs. (11) and (12) with respect to y and using the boundary conditions (10), we get Eqs. (13) and (14). We define the dimensionless variables, and in order to obtain the dimensionless form of Eqs. (5), (10), (13), and (14), we get the dimensionless equations (involving the conductivity ratio k_nf/k_f and θ), where the Riga-plate parameter M (defined in terms of πJ0M0) is as shown in [16]. The skin friction number can be expressed from the shear stress, and the dimensionless form of Eq. (27) is obtained, where Re = sx^2/υ_f is the local Reynolds number. The stream function is written in dimensionless form. As shown in Figure 1, an angle α with the plate is generated by the streamline, and the slope of the straight line can be found by setting ψ* = 0: from ψ = (1/2)γy^2 + xy, the non-trivial streamline is η = −(2/γ)x, which gives slope = −2/γ. Hence the relationship between γ (shearing parameter) and α (impinging angle) follows. The Nusselt number Nu is accessible once the heat flux is calculated; the dimensionless form of Eq. (32) is expressed in terms of (iΩ)^n ϑ_{1n}(y). For small values of Ω, considering the real part only, we have perceived that ϑ_{10}(y) = θ_0(y), and solving θ_0(y) by direct integration we obtain the solution. The thermophysical properties of the base fluid and nanoparticle are taken from [16].

Results and discussion

The above equations are solved by the bvp4c method in Maple, and the current portion studies the behavior of several flow parameters. Figure 9 shows that by adding nanoparticles the thermal boundary layer and the thermal conductivity escalate, therefore the temperature profile enhances by increasing ϕ and β. It is clearly observed that the temperature profile is an increasing function of the nanoparticle volume fraction. Figure 10 shows the behavior of the velocity and velocity gradients (f(y), f′(y), f″(y)) for M = 1.5, β = 0.5, γ = 0.5, and it achieves the boundary conditions. Figure 11 shows the effect of the non-dimensional time t on u(η, t) in a fixed frame of reference. It is perceived that u(η, t) is oscillatory. Figure 12 explores the variation in skin friction by changing γ. Figure 13 shows the effect of ϕ and Pr on the Nusselt number; the increment arose due to increases of Pr. Figures 14, 15, 16, 17, 18, 19, 20, and 21 validate the impact of β and M on streamlines. Table 1 is made to examine the impact of γ and M on skin friction and shows that the value of skin friction increases with increasing values of the parameters. Table 2 is made to demonstrate the effect of γ and Pr on the Nusselt number. This table illustrates that when all parameters increase, the value of the Nusselt number also increases. Table 3 is made to validate the results.

Conclusion

The two-phase model for non-Newtonian nanofluid with Riga plate covering thermophoresis and Brownian motion effects is selected for investigation. The resulting ideas are worth mentioning. The velocity field F′(η) rises, and the boundary layer decreases. The temperature profile θ(y) drops with the non-Newtonian parameter β and ϕ. By increasing M and the thermal stratification, the thermal boundary layer is suppressed.

Author contribution statement

Rizwana Rizwana: Conceived and designed the experiments; Wrote the paper. Azad Hussain: Performed the experiments; Analyzed and interpreted the data. S. Nadeem: Contributed reagents, materials, analysis tools or data.

Funding statement

This research did not receive any specific grant from funding agencies in the public, commercial, or not-for-profit sectors.
2020-10-21T05:05:33.197Z
2020-10-01T00:00:00.000
{ "year": 2020, "sha1": "b8109be949cb8d4c713bc7c799a444f4ac1fafe6", "oa_license": "CCBYNCND", "oa_url": "http://www.cell.com/article/S2405844020315322/pdf", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "b8109be949cb8d4c713bc7c799a444f4ac1fafe6", "s2fieldsofstudy": [ "Engineering", "Physics", "Materials Science" ], "extfieldsofstudy": [ "Physics", "Medicine" ] }
59842963
pes2o/s2orc
v3-fos-license
Mode Collapse and Regularity of Optimal Transportation Maps This work builds the connection between the regularity theory of optimal transportation map, Monge-Amp\`{e}re equation and GANs, which gives a theoretic understanding of the major drawbacks of GANs: convergence difficulty and mode collapse. According to the regularity theory of Monge-Amp\`{e}re equation, if the support of the target measure is disconnected or just non-convex, the optimal transportation mapping is discontinuous. General DNNs can only approximate continuous mappings. This intrinsic conflict leads to the convergence difficulty and mode collapse in GANs. We test our hypothesis that the supports of real data distribution are in general non-convex, therefore the discontinuity is unavoidable using an Autoencoder combined with discrete optimal transportation map (AE-OT framework) on the CelebA data set. The testing result is positive. Furthermore, we propose to approximate the continuous Brenier potential directly based on discrete Brenier theory to tackle mode collapse. Comparing with existing method, this method is more accurate and effective. Introduction Generative Adversarial Networks (GANs, [13]) emerge as one of the dominant approaches for unconditional image generating. GANs have successfully shown their amazing capability of generating realistic looking and visual pleasing images. Typically, a GAN model consists of an unconditional generator that regresses real images from random noises and a discriminator that measures the difference between generated samples and real images. Despite GANs' advantages, they have critical drawbacks. 1) Training of GANs are tricky and sensitive to hyperparameters. 2) GANs suffer from mode collapsing. Recently Meschede et. al ( [23]) studied 9 different GAN models and variants showing that gradient descent based GAN optimization is not always convergent. The goal of this work is to improve the theoretic understanding of these difficulties and propose methods to tackle them fundamentally. Optimal Transportation View of GANs Recent promising successes are making GANs more and more attractive (e.g. [27,32,36]). Among various improvements of GANs, one breakthrough has been made by incorporating GANs with optimal transportation (OT) theory ( [35]), such as in works of WGAN ( [2]), WGAN-GP ( [15]) and RWGAN ( [16]). In WGAN framework, the generator computes the optimal transportation map from the white noise to the data distribution, the discriminator computes the Wasserstein distance between the real and the generated data distributions. Fig. 1 illustrates the framework of WGAN. The image space is X , the real data distribution ν is concentrated on a manifold Σ embedded in X . Z is the latent space, ζ is the white noise (Gaussian distribution). The generator computes a transformation map g θ , which maps (Z, ζ) to (X , µ θ ); the discriminator computes the Wasserstein distance between µ θ and the real distribution ν by finding the Kontarovich potential ϕ ξ (refer to Eqn. 22). In principle, the GAN model accomplishes two major tasks: 1) manifold learning, discovering the manifold structure of the data; 2) probability transformation, transforming a white noise to the data distribution. Accordingly, the generator Σ X Z ζ G : g θ µ θ D : W c (µ θ , ν), ϕ ξ ν Figure 1: Wasserstein GAN framwork. 
map g θ : (Z, ζ) → (Σ, µ θ ) can be further decomposed into two steps, where T is a transportation map, maps the white noise ζ to µ in the latent space Z, g is the manifold parameterization, maps local coordinates in the latent space to the manifold Σ. Namely, g gives a local chart of the data manifold Σ, T realizes the probability measure transformation. The goal of the GAN model is to find g θ , such that the generated distribution µ θ fits the real data distribution ν, namely Regularity Analysis for mode collapse By manifold structure assumption, the local chart representation g : Z → Σ is continuous. Unfortunately, the continuity of the transportation map T : ζ → µ can not be guaranteed. Even worse, according to the regularity theory of optimal transportation map, except very rare situations, the transportation map T is always discontinuous. In more details, unless the support of µ is convex, there are non-empty singularity sets in the domain of T , where T is discontinuous. By Eqn. 1 and 2, µ = (g −1 ) # ν is determined by the real data distribution ν and the encoding map g −1 , it is highly unlikely that the support of µ is convex. On the other hand, the deep neural networks (DNNs) can only model continuous mappings. For example, the commonly used ReLU DNNs can only represent piece-wise linear mappings. But the desired mapping itself is discontinuous. This intrinsic conflict explains the fundamental difficulties of GANs: Current GANs search a discontinuous mapping in the space of continuous mappings, the searching will not converge or converge to one continuous branch of the target mapping, leading to a mode collapse. Solution to mode collapse We propose a solution to the mode collapse problem based on the Brenier theory of optimal transportation ( [35]). According to Brenier theorem 3.4, under the quadratic distance cost function, the optimal transportation map is the gradient of a convex function, the so-called Brenier potential. Under mild conditions, the Brenier potential is always continuous and can be represented by DNNs. We propose to find the continuous Brenier potential instead of the discontinuous transportation map. Contributions This work improves the theoretic understanding of the convergence difficulty and mode collapse of GANs from the perspective of optimal transportation; builds connections between the regularity theory of optimal transportation map, Monge-Ampère equation, and GANs; proposes solutions to conquer mode collapse based on Brenier theory. This paper is organized as follows: in Section 3, we briefly introduce the theory of optimal transportation; in Section 4, we give a computational algorithm based on the discrete version of Brenier theory; in Section 5, we explain the mode collapse issue using Monge-Ampère regularity theory, and propose a novel method. Furthermore, we test our hypothesis that general transportation maps in GANs are discontinuous with proposed method. The testing results are reported in Section 5 as well. Finally, we draw the conclusion in Section 6. Previous Work Generative adversarial networks. Generative adversarial networks (GANs) are technique for training generative models to produce realistic examples from an unknown distribution ( [13]). In particular, the GAN model consists of a generator network that maps latent vectors, typically drawn from a standard Gaussian distribution, into real data distribution and a discriminator network that aims to distinguish generated data distribution with the real one. 
Training of GANs is unfortunately found to be tricky and one major challenge is called mode collapse, which refers to a lack of diversity of generated samples. This commonly happens when trained on multimodal distributions. For example on a dataset that consists of images of ten handwritten digits, generators might fail to generator some digits ( [15]). Prior works have observed two types of mode collapse, i.e fail to generate some modes entirely, or only generating a subset of a particular mode ( [12,34,4,10,24,28]). Several explanatory hypothesis to mode collapse have been made, including proper objective functions ( [2,3]) and weak discriminators ( [24,29,3,21]). Three main approaches to mitigate mode collapse include employing inference networks in addition to generators (e.g. [10,11,33]), discriminator augmentation (e.g. [8,29,19,22]) and improving optimization procedure during GAN training (e.g. [24]). However these methods measure the difference between implicit distributions by a neural network (i.e discriminator), whose training relies on solving a non-convex optimization problem that might lead to non-convergence ( [21,23]). In contrast, the proposed method metrics the distance between two distributions by L 2 Wasserstein distance, which can be computed under a convex optimization framework. Furthermore the optimal solution provides a transport map transforming the source distribution to the target distribution, and essentially serves as a generator in the feature space. Empirically, the distribution generated by this generator is theoretically guaranteed to be identical to the real one, without mode collapse or artificial mode invention. Optimal Transport. Optimal transport problem attracted the researchers attentions since it was proposed in 1940s, and there were vast amounts of literature in various kinds of fields like computer vision and natural language processing. We recommend the readers to refer to [26] and [31] for detailed information. Under discrete optimal transport, in which both the input and output domains are Dirac masses, we can use standard linear programming (LP) to model the problem. To facilitate the computation complexity of LP, [9] added an entropic regularizer into the original problem and it can be quickly computed with the Sinkhorn algorithm. By the introduction of fast convolution, Solomon & Guibas ( [30]) improved the computational efficiency of the above algorithm. The smaller the coefficient of the regularizer, the solution of the regularized problem is closer to the original problem. However, when the coefficient is too small, the sinkhorn algorithm cannot find a good solution. Thus, it can only approximate the original problem coarsely. When computing the transport map between continuous and point-wise measures, i.e. the semi-discrete optimal transport, ( [14]) proposed to minimize a convex energy through the connection between the OT problem and convex geometry. Then by the link between c-transform and Legendre dual theory, the authors of [20] found the equivalence between solution of the Kantorovich duality problem and that of the convex energy minimization. If both the input and output are continuous densities, the OT problem can be treated to solve the Monge-Ampére equation. ( [17,18,25]) solved this PDE by computational fluid dynamics with an additional virtual time dimension. But this kind of problem is both time consuming and hard to extend to high dimensions. 
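Because the entropic regularization and Sinkhorn iterations mentioned above are central to much of the discrete OT literature, a minimal Python sketch is included here for orientation; the cost matrix, marginals, and regularization strength are arbitrary illustrative choices, and this is the generic algorithm rather than the implementation used in any of the cited works.

```python
import numpy as np

def sinkhorn(mu, nu, C, eps=0.05, n_iter=500):
    """Entropy-regularized OT: returns a coupling P with row sums ~mu and column sums ~nu."""
    K = np.exp(-C / eps)              # Gibbs kernel
    u = np.ones_like(mu)
    for _ in range(n_iter):
        v = nu / (K.T @ u)            # scale columns to match nu
        u = mu / (K @ v)              # scale rows to match mu
    return u[:, None] * K * v[None, :]

# Tiny example: two discrete measures on the line with quadratic cost.
rng = np.random.default_rng(0)
x, y = np.sort(rng.random(5)), np.sort(rng.random(6))
mu = np.full(5, 1 / 5)
nu = np.full(6, 1 / 6)
C = (x[:, None] - y[None, :]) ** 2

P = sinkhorn(mu, nu, C)
print("row-sum error:", np.abs(P.sum(1) - mu).max())
print("regularized transport cost:", (P * C).sum())
```

As the text notes, the smaller the regularization eps, the closer the coupling is to the unregularized optimum, but the iterations become numerically unstable.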
Optimal Mass Transport theory In this subsection, we will introduce basic concepts and theorems in classic optimal transport theory, focusing on Brenier's approach, and their generalization to the discrete setting. Details can be found in [35]. We only consider maps which preserve the measure. Given a cost function c(x, y) : X × Y → R ≥0 , which indicates the cost of moving each unit mass from the source to the target, the total transport cost of the map T : X → Y is defined to be X c(x, T (x))dµ(x). The Monge's problem of optimal transport arises from finding the measure-preserving map that minimizes the total transport cost. Definition 3.3 (Optimal Transportation Map) The solutions to the Monge's problem is called the optimal transportation map, whose total transportation cost is called the Wasserstein distance between µ and ν, denoted as W c (µ, ν). For the cost function being the L 1 norm, Kontarovich relaxed transportation maps to transportation plans, and proposed linear programming method to solve this problem. We introduce the details of Kontarovich's approach in Appendix A. Brenier's Approach For quadratic Euclidean distance cost, the existence, uniqueness and the intrinsic structure of the optimal transportation map were proven by Brenier ([6]). Theorem 3.4 ([6] ) Suppose X and Y are the Euclidean space R d and the transportation cost is the quadratic Euclidean distance c(x, y) = 1/2 x − y 2 . Furthermore µ is absolutely continuous and µ and ν have finite second order moments, X |x| 2 dµ(x) + Y |y| 2 dν(y) < ∞,then there exists a convex function u : X → R, the so-called Briener potential, its gradient map ∇u gives the solution to the Monge's problem, The Brenier potential is unique upto a constant, hence the optimal transportation map is unique. Assume the Briener potential is C 2 smooth, the it is the solution to the following Monge-Amère equation: In general cases, the Brenier potential u satisfing the transport condition (∇u) # µ = ν can be seen as a weak form of Monge-Ampère equation, coupled with the boundary condition ∇u(X) = Y . Hence u is called a Brenier solution. Definition 3.5 (Legendre Transform) Given a function ϕ : R n → R, its Legendre transform is defined as In practice, the Brenier solution is approximated by the so-called Alexandrov solution. Definition 3.6 (Sub-gradient) Let u : R d → R be a convex function. Its sub-gradient or sub-differential at a point x is defined as x 0 Figure 3: Singularity structure of an optimal transportation map. A convex function is Lipschitz, hence it is differentiable almost everywhere. then we say v is an Alexandrov solution to the Monge-Ampère equation. Regularity of Optimal Transportation Maps Let Ω and Λ be two bounded smooth open sets in R d , let µ = f dx and ν = gdy be two probability measures on R d such that f | R d \Ω = 0 and g| R d \Λ = 0. Assume that f and g are bounded away from zero and infinity on Ω and Λ, respectively, According to Caffarelli ( [7]), if Λ is convex, then the Alexandrov solution u is strictly convex, furthermore 1. If λ ≤ f , g ≤ 1/λ for some λ > 0, then u ∈ C 1,α loc (Ω). 2. If f ∈ C k,α loc (Ω) and g ∈ C k,α loc (Λ), with f, g > 0, then u ∈ C k+2,α loc (Ω), (k ≥ 0, α ∈ (0, 1)) Here C k,α loc represents the k-th order Hölder continuous with exponant α function space. If Λ is not convex, there exist f and g smooth such that u ∈ C 1 (Ω), the optimal transportation map ∇u is discontinuous at singularities. u is differentiable if its subgradient ∂u is a singleton. 
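The discontinuity phenomenon discussed above is easy to visualize in one dimension, where the optimal map for quadratic cost is the monotone rearrangement T = F_ν^{-1} ∘ F_µ and is the derivative of a convex Brenier potential. The following sketch, with an arbitrarily chosen two-component target (not an example from the paper), illustrates how T jumps across the gap in the target support while the potential itself stays continuous.

```python
import numpy as np

# Source: uniform on [0, 1].  Target: uniform on the two disjoint intervals
# [0, 0.4] and [0.6, 1.0] with equal mass -- a disconnected support chosen
# purely for illustration.
def T(x):
    """Monotone (Brenier) map: the target quantile function composed with F_mu(x) = x."""
    x = np.asarray(x, dtype=float)
    return np.where(x < 0.5, 0.8 * x, 0.6 + 0.8 * (x - 0.5))

x = np.linspace(0.0, 1.0, 2001)
y = T(x)

# The map jumps from 0.4 to 0.6 at x = 0.5 ...
i = np.searchsorted(x, 0.5)
print("jump of T at x = 0.5:", y[i] - y[i - 1])

# ... but the Brenier potential u(x) = int_0^x T(s) ds stays continuous (and convex).
u = np.concatenate(([0.0], np.cumsum(0.5 * (y[1:] + y[:-1]) * np.diff(x))))
print("largest increment of u between grid points:", np.abs(np.diff(u)).max())
```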
We classify the points according to the dimensions of their subgradients, and define the sets Σ k (u) := x ∈ R d |dim(∂u(x)) = k , k = 0, 1, 2 . . . , d. It is obvious that Σ 0 (u) is the set of regular points, Σ k (u), k > 0 are the set of singular points. We also define the reachable subgradients at x as It is well known that the subgradient equals to the convex hull of the reachable subgradient, ∂u(x) = Convex Hull(∇ * u(x)). Theorem 3.8 (Regularity) Let Ω, Λ ⊂ R d be two bounded open sets, let f, g : R d → R + be two probability densities, that are zero outside Ω, Λ and are bounded away from zero and infinity on Ω, Λ, respectively. Denote by T = ∇u : Ω → Λ the optimal transport map provided by theorem 3.4. Then there exist two relatively closed sets Σ Ω ⊂ Ω and Σ Λ ⊂ Λ with |Σ Ω | = |Σ Λ | = 0 such that T : Ω \ Σ Ω → Λ \ Σ Λ is a homeomorphism of class C 0,α loc for some α > 0. Fig. 3 illustrates the singularity set structure of an optimal transportation map ∇u : Ω → Λ, computed using the algorithm based on theorem 4.2. We obtain The subgradient of x 0 , ∂u(x 0 ) is the entire inner hole of Λ, ∂u(x 1 ) is the shaded triangle. For each point on γ k (t), ∂u(γ k (t)) is a line segment outside Λ. x 1 is the bifurcation point of γ 1 , γ 2 and γ 3 . The Brenier potential on Σ 1 and Σ 2 is not differentiable, the optimal transportation map ∇u on them are discontinuous. Discrete Brenier Theory Brenier's theorem can be directly generalized to the discrete situation. In GAN models, the source measure µ is given as a uniform (or Gaussian) distribution defined on a convex compact domain Ω; the target measure ν is represented as the empirical measures, which is the sum of Dirac measures ν = n i=1 ν i δ(y − y i ),where Y = {y 1 , y 2 , · · · , y n } are training samples, weights n i=1 ν i = µ(Ω). Each training sample y i corresponds to a supporting plane of the Brenier potential, denoted as where the height h i is a variable. We represent all the height variables as h = (h 1 , h 2 , · · · , h n ). An envelope of a family of hyper-planes in the Euclidean space is a hyper-surface that is tangent to each member of the family at some point, and these points of tangency together form the whole envelope. As shown in Fig. 2, the Brenier potential u h : Ω → R is a piecewise linear convex function determined by h, which is the upper envelope of all its supporting planes, The graph of Brenier potential is a convex polytope. Each supporting plane π h,i corresponds to a facet of the polytope. The projection of the polytope induces a cell decomposition of Ω, each supporting plane π i (x) projects onto a cell the cell decomposition is a power diagram. Given the target measure ν in Eqn. 4, there exists a discrete Brenier potential in Eqn. 11, whose projected µ-volume of each facet w i (h) equals to the given target measure ν i . This was proved by Alexandrov in convex geometry. Theorem 4.1 ([1]) Suppose Ω is a compact convex polytope with non-empty interior in R n , n 1 , ..., n k ⊂ R n+1 are distinct k unit vectors, the (n + 1)-th coordinates are negative, and ν 1 , ..., ν k > 0 so that k i=1 ν i = vol(Ω). Then there exists convex polytope P ⊂ R n+1 with exact k codimension-1 facesF 1 , . . . , F k so that n i is the normal vector to F i and the intersection between Ω and the projection of F i is with volume ν i . Furthermore, such P is unique up to vertical translation. Alexandrov's proof for the existence is based on algebraic topology, which is not constructive. Recently, Gu et al. 
gave a contructive proof based on the variational approach. Furthermore, ∇u h minimizes the quadratic cost among all transport maps T # µ = ν, where the Dirac measure ν = n i=1 ν i δ(y − y i ). The gradient of the above convex energy in Eqn. 13 is given by: The Hessian of the energy is given by As shown in Fig. 2, the Hessian matrix has explicit geometric interpretation. The left frame shows the discrete Brenier potential u h , the right frame shows its Legendre transformation u * h using definition 7. The Legendre transformation can be constructed geometrically: for each supporting plane π h,i , we construct the dual point π * h,i = (y i , −h i ), the convex hull of the dual points {π * h,1 , π * h,2 , . . . , π * h,n } is the graph of the Legendre transformation u * h . The projection of u * h induces a triangulation of Y = {y 1 , y 2 , . . . , y n }, which is the weighted Delaunay triangulation. As shown in the left fram of 4, the power diagram in Eqn.12 and weighted Delaunay triangulation are Poincarè dual to each other: if in the power diagram, W i (h) and W j (h) intersect at a (d − 1)-dimensional cell , then in the weighted Delaunay triangulation y i connects with y j . The element of the Hessian matrix Eqn. 17 is the ratio between the µ-volume of the (d − 1) cell in the power diagram and the length of dual edge in the weighted Delaunay triangulation. Fig. 4 shows one computational example based on the theorem 4.2. Suppose the support of the target measure ν has two connected components, restricted on each component, ν has a smooth density function. We sample ν and use a Dirac measureν to approximate it. By increasing the sampling density, we can construct a sequence {ν k } weakly converges to ν,ν k → ν, the Alexandrov solution toν k also converges to Alexandrov solution to the Monge-Ampère equation with ν. At each stage, the targetν k is a Dirac measure with two clusters, the source µ is the uniform distribution on the unit disk. Each cell on the disk is mapped to a point with the same color. The Brenier potential u k has a ridge in the middle. Let k → ∞, u k → u, the ridge on u k will be preserved on the limit u, whose projection is the singularity set Σ 1 for the limit optimal transportation map ∇u. Along Σ 1 , ∇u is discontinuous, but the Brenier potential u is always continuous. Mode Collapse and Regularity Although GANs are powerful for many applications, they have critical drawbacks: first, training of GANs are tricky and sensitive to hyper-parameters, difficult to converge; second, GANs suffer from mode collapsing; third, GANs may generate unrealistic samples. This section focuses on explaining these difficulties using the regularity theorem 3.8 of transportation maps. Intrinsic Conflict The difficulty of convergence, mode collapse, and generating unrealistic samples can be explained by the regularity theorem of the optimal transportation map. Suppose the support Λ of the target measure ν has multiple connected components, namely ν has multiple modes, or Λ is non-convex, then the optimal transportation map T : Ω → Λ is discontinuous, the singular set Σ Ω is nonempty. Fig. 4 shows the multi-cluster case, Λ has multiple connected components, where the optimal transportation map T is discontinuous along Σ 1 . Fig. 5 shows even Λ is connected , but non-convex. 
Ω is a rectangle, Λ is a dumbbell shape, the density functions are constants, the optimal transportation map is discontinuous, the singularity set In general situation, due to the complexity of the real data distributions, and the embedding manifold Σ, the encoding/decoding maps, the supports of the target measure are rarely convex, therefore the transportation mapping can not be continuous globally. On the other hand, general deep neural networks, e.g. ReLU DNNs, can only approximate continuous mappings. The functional space represented by ReLU DNNs doesn't contain the desired discontinuous transportation mapping. The training process or equivalently the searching process will leads to three situations: 1. The training process is unstable, and doesn't converge; 2. The searching converges to one of the multiple connected components of Λ, the mapping converges to one continuous branch of the desired transformation mapping. This means we encounter a mode collapse; 3. The training process leads to a transportation map, which covers all the modes successfully, but also cover the regions outside Λ. In practice, this will induce the phenomena of generating unrealistic samples. As shown in the middle frame of Fig. 6. Therefore, in theory, it is impossible to approximate optimal transportation maps directly using DNNs. Proposed Solution The fundamental reason for mode collapse is the conflict between the regularity of the transportation map and the continuous functional space of DNNs. In order to tackle this problem, we propose to compute the Brenier potential itself, instead of its gradient (the transportation map). This is based on the fact that the Brenier potential is always continuous under mild conditions, and representable by DNNs, but its gradient is rarely continuous and always outside the functional space of DNNs. Figure 6: Comparison between PacGAN and our method to tackle mode collapsing. Multi-mode Experiment We use GPU implementation of the algorithm in Section 4 to compute the Brenier potential. As shown in Fig. 6, we compare our method with a recent GAN training method (PacGAN [22]) that aims to reduce mode collapse. Orange markers are real samples and green markers represent generated ones. Left frame shows a typical case of mode collapse where the generated samples cannot cover all modes. Middle frame shows the result of PacGAN. Although all 25 modes are captured, the model also generates data that deviates from real samples. Right frame shows the result of our method that precisely captured all modes. It is obvious that our method accurately approximates the target measures and covers all the modes, whereas the method PacGAN generates many fake samples between the modes. Figure 7: AE-OT framework. (a) generated facial images (b) a path through a singularity. Figure 8: Facial images generated by an AE-OT model. Hypothesis Test on CelebA In this experiment, we want to test our hypothesis: In most real applications, the support of the target measure is non-convex, therefore the singularity set is non-empty. As shown in Fig. 7, we use an auto-encoder (AE) to compute the encoding/decoding maps from CelebA data set (Σ, ν) to the latent space Z, the encoding map f θ : Σ → Z pushes forward ν to (f θ ) # ν on the latent space. In the latent space, we compute the optimal transportation map (OT) based on the algorithm described in Section 4, T : Z → Z, T maps the uniform distribution in a unit cube ζ to (f θ ) # ν. 
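The AE-OT generation pipeline just outlined, and completed in the next paragraph by sampling from ζ and decoding, can be summarised in a short sketch. Everything below is schematic: `encoder`, `decoder` and `ot_map` stand for the trained auto-encoder networks f_θ, g_ξ and the semi-discrete OT map T of Section 4, and are placeholders rather than concrete implementations.

```python
import numpy as np

def ae_ot_generate(decoder, ot_map, n_samples=16, latent_dim=128):
    """Sketch of the AE-OT generation step: sample z from the uniform
    distribution on the latent unit cube, transport it with the
    (piecewise-linear) OT map T, then decode the transported code."""
    z = np.random.rand(n_samples, latent_dim)   # z ~ zeta = U([0,1]^d)
    latent_codes = ot_map(z)                    # T(z) follows the encoded data measure
    return decoder(latent_codes)                # g_xi(T(z)): generated images
```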
Then we randomly draw a sample z from the distribution ζ and use the decoding map g_ξ : Z → Σ to map T(z) to a generated human facial image g_ξ ∘ T(z). The left frame of Fig. 8 shows the realistic facial images generated by this AE-OT framework. If the support of the push-forward measure (f_θ)_# ν in the latent space is non-convex, there will be a non-empty singularity set Σ_k, k > 0. We would like to detect the existence of Σ_k. We randomly draw line segments in the unit cube of the latent space and densely interpolate along each line segment to generate facial images. As shown in the right frame of Fig. 8, we find a line segment γ and generate a morphing sequence between a boy with brown eyes and a girl with blue eyes. In the middle of the sequence, we obtain a face with one blue eye and one brown eye, which is clearly unrealistic and lies outside Σ. This means the line segment γ passes through a singularity set Σ_k, where the transportation map T is discontinuous. It also confirms our hypothesis: the support of the encoded facial-image measure in the latent space is non-convex. As a by-product, we find that the AE-OT framework improves the training speed by a factor of 5 and increases convergence stability, since the OT step is a convex optimization. This offers a promising way to improve existing GANs.

Conclusion This work builds the connection between the regularity theory of optimal transportation maps, the Monge-Ampère equation and GANs, which gives a theoretical understanding of the major drawbacks of GANs: convergence difficulty and mode collapse. According to the regularity theory of the Monge-Ampère equation, if the support of the target measure is disconnected or merely non-convex, the optimal transportation map is discontinuous. General DNNs can only approximate continuous mappings; this intrinsic conflict leads to the convergence difficulty and mode collapse of GANs.

Kantorovich's problem can be solved by linear programming. By the duality of linear programming, the (KP) problem in Eqn. 19 can be reformulated as the dual problem (DP) as follows: Problem A.2 (Duality) Given a transport cost function c : X × Y → R, find functions ϕ : X → R and ψ : Y → R that solve (DP) max_{ϕ,ψ} { ∫_X ϕ(x) dµ(x) + ∫_Y ψ(y) dν(y) : ϕ(x) + ψ(y) ≤ c(x, y) }. The maximum value of Eqn. 20 gives the Wasserstein distance. Most existing Wasserstein GAN models are based on this duality formulation under the L1 cost function, in which case the dual problem can be rewritten as W_1(µ, ν) = max_{‖ϕ‖_Lip ≤ 1} ∫_X ϕ(x) dµ(x) − ∫_Y ϕ(y) dν(y), where ϕ is called the Kantorovich potential.
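As a closing illustration of the discrete Brenier machinery used throughout (theorem 4.2 and Fig. 2), the following sketch assembles the piecewise-linear Brenier potential u_h(x) = max_i(⟨x, y_i⟩ + h_i) and descends the convex energy using the standard gradient form w_i(h) − ν_i. It is only a Monte Carlo stand-in for the algorithm of Section 4: the exact method evaluates the µ-volumes w_i(h) of the power cells exactly and can take Newton steps with the Hessian of Eqn. 17, whereas here the cell measures are estimated by sampling and the source µ is assumed uniform on the unit cube; all names and parameter values are illustrative.

```python
import numpy as np

def power_cells(X, Y, h):
    """Assign each source sample to the power cell W_i(h) on which the
    supporting plane <x, y_i> + h_i attains the upper envelope u_h(x)."""
    return (X @ Y.T + h).argmax(axis=1)

def fit_brenier_heights(Y, nu, d, n_samples=200_000, lr=0.5, n_steps=500):
    """Gradient descent on the convex energy, dE/dh_i = w_i(h) - nu_i,
    with w_i(h) estimated by Monte Carlo samples of mu = U([0,1]^d)."""
    n = len(Y)
    h = np.zeros(n)
    for _ in range(n_steps):
        X = np.random.rand(n_samples, d)
        w = np.bincount(power_cells(X, Y, h), minlength=n) / n_samples
        h -= lr * (w - nu)      # move each facet until its cell carries measure nu_i
        h -= h.mean()           # the Brenier potential is unique only up to a constant
    return h

# toy target: n Dirac masses with equal weights in the unit square
rng = np.random.default_rng(0)
Y = rng.random((20, 2))
nu = np.full(20, 1 / 20)
h = fit_brenier_heights(Y, nu, d=2)
# the resulting piecewise-linear optimal map sends every x in cell W_i(h) to y_i
```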
Liver Transplantation for Liver Metastasis of a Pseudopapillary Pancreatic Neoplasm in a Male Patient Patient: Male, 60-year-old Final Diagnosis: Pseudopapillary pancreatic neoplasm with liver metastasis Symptoms: Jaundice Clinical Procedure: Biopsy • CT scan • liver resection • MRI • transplantation • ultrasonography • Whipple procedure Specialty: Transplantology Objective: Unusual clinical course Background: Solid pseudopapillary neoplasm (SPN) of the pancreas, which predominantly affects young women, is an uncommon condition with low malignant potential. It is often asymptomatic. This tumor has a low metastatic rate and a good prognosis in contrast to other pancreatic tumors. Approximately 14% of SPNs develop liver metastasis, but for SPNs with malignant features liver metastasis has been reported to occur in over 55% of cases. Complete surgical resection is the treatment of choice for increasing the survival rate in metastatic recurrent disease. When surgical resection is impossible, liver transplantation has shown promising results in a few cases. The purpose of this article is to present the first case of a male patient who underwent liver transplantation for this indication. Case Report: We present the case of a 60-year-old male patient who previously had pancreas surgery, numerous liver re-sections, and chemotherapy for SPN, but nevertheless developed recurrence of multiple liver metastases. His metastatic liver disease was regarded as unresectable. The lymphatic structure was also affected. The patient underwent orthotopic liver transplantation with a deceased donor graft after multidisciplinary evaluation. At 2-year follow-up, the patient was alive and recurrence free. Conclusions: This is the first published report of a male patient who underwent liver transplantation due to SPN metastasis. Our case demonstrates that liver transplantation should be further investigated for selected cases of SPN of the pancreas with liver metastatic disease when surgical resection is deemed unattainable. Background Solid pseudopapillary tumor of the pancreas, also known as the Franz tumor, first described by V. K. Franz in 1959, is a potentially malignant tumor that is observed primarily in women in their 20s to 40s. It is an uncommon neoplasm, accounting for approximately 2% of all pancreatic tumors. Nevertheless, due to extensive usage and improving precision of radiological imaging, SPN is being detected more frequently [1]. Clinically, it is often asymptomatic, but patients can present with abdominal pain, nausea, dyspepsia, vomiting, and jaundice. Blood biochemistry, including pancreatic amylase, liver function tests, and tumor markers, is usually unaffected [2]. Malignancy potential is suggested to be correlated with tumor size and Ki-67 index [1]. Metastasis from SPNs is rare, with a median prevalence of 14% (range, 2-28%) [3]. The liver is the most common site for metastasis. The optimal treatment for improving survival of metastatic disease is surgical resection [4,5]. Liver transplantation has been reported in a few cases of irresectable liver metastasis. Disease-free survival varied from 9 months to 2 years [6][7][8][9][10]. Here, we present the first case of a male patient with recurrent unresectable and non-ablatable liver metastasis from solid pseudopapillary pancreatic neoplasm who underwent liver transplantation (LT). Case Report A 60-year-old male patient presented with jaundice. 
Computed tomography showed an 11-cm tumor with central necrosis in the pancreatic head and a 2.5-cm solitary metastasis in segment 6 of the liver. The tumor compressed the vascular structures of the portal vein, the superior mesenteric vein, and the common hepatic artery ( Figure 1A). A Whipple procedure was performed, including resection of the superior mesenteric vein and local liver resections of segment 6 and the gallbladder bed for the suspected liver metastasis. Radical resection was obtained. Histopathological analysis of the pancreas and liver samples showed monomorphic epithelial cells with round-to-oval nuclei and vacuolated cytoplasm, which exhibited a predominantly solid pattern (Figure 2A, 2F). Degenerative changes, including necrosis, hemorrhage, and areas of pseudopapillary formations, were observed. Extrapancreatic and vascular invasion were noted as aggressive features. The larger liver metastasis showed a predominantly encapsulated pattern, but smaller ones showed direct contact between the tumor cells and the hepatocytes, the so-called replacement growth pattern. The tumor cells in both the pancreas and liver showed identical immunohistochemical profiles (Figure 2B-2E, 2G-2J) with diffuse, distinct, nuclear, and cytoplasmic immunoreactivity for beta-catenin. They were also positive for vimentin and negative for cytokeratins and synaptophysin. Ki-67 index was 15%. Overall, histomorphology and immunohistochemical analyses were highly consistent with a solid pseudopapillary neoplasm of the pancreas and synchronous liver metastasis. Due to local intrahepatic recurrence with 4 metastases, a local liver resection was performed 18 months later. This was followed by adjuvant chemotherapy with 3 cycles of Paclitaxel+Gemcitabin and then 4 cycles of the Folfirinox regimen. A new recurrence occurred 12 months later with 1 metastasis and the patient underwent another local liver resection. Radiological imaging 48 months after the first operation showed 21 liver metastases engaging all liver segments (Figure 1B), and an enlarged lymph node located next to the SMV was observed. After multidisciplinary discussion by the hospital Liver Transplant Board and Ethics Committee, the patient was accepted to the liver transplant waiting list. The patient underwent liver transplantation 3 months later (Figure 3). The enlarged lymph node was extirpated intraoperatively and showed no signs of malignancy. The postoperative course was complicated by a ruptured mycotic pseudoaneurysm of the hepatic artery, which required surgical resection and endovascular stenting. No adjuvant radio-or chemotherapy was given. Follow-up included alternating MRI and ultrasound every 3-6 months. Histopathological analysis of the explanted tissue confirmed SPN (Figure 2A-2E). The enlarged lymph node showed no signs of malignancy. Two years following the liver transplantation, the patient is doing well with no sign of disease recurrence and with good graft function (Figure 1C). Immunosuppression consists of lowdose tacrolimus (1.5 mg twice daily) and 5 mg of prednisolone. Discussion Here, we present the first case of liver transplantation for metastasis of SPN in a male patient. SPN affects women in an almost 10: 1 female-to-male ratio and it has a low malignant potential and a good survival rate [1]. However, 10-15% of cases of SPN have metastatic disease at the time of diagnosis and the most common site is the liver. 
Tanue et al reported that in patients with malignant feature SPN, liver metastasis occurs in 57.1% of cases [4]. Sperti et al reported occurrence of hepatic metastasis 2-168 months after diagnosis of the primary tumor [11]. Survival rates are significantly decreased in untreated metastatic disease. In metastatic disease, radical surgical resection improves patient survival compared to chemotherapy [7]; patients with distant metastasis undergoing surgical resection have similar median survival rates as patients without metastatic disease. In contrast, patients with distant metastases who do not undergo surgical resection have a median survival of 2.1 years with a higher mortality risk compared to patients with metastatic disease who underwent resection (HR 18, P<0.0001) [5]. Liver metastasis has generally been considered a contraindication for LT due to poor results. However, recent studies suggest that careful patient selection and advances in multimodality approaches have substantially improved these results [12]. Only a few cases for SPN liver metastasis treated with LT are described in the literature [6][7][8][9][10]. One of these case reports reported disease recurrence after transplantation. However, the follow-up was short except for 1 case with 5 years of follow-up [9]. These studies suggest that LT is a possible treatment when other surgical and/or oncological treatments are not recommended or suitable. It is important to consider the suspected effect of immunosuppressive medication accelerating tumor growth in possible residual disease and how the perioperative treatment can be optimized to avoid this occurrence. We applied the centerstandard immunosuppression protocol with a combination of tacrolimus and prednisolone, although an mTOR inhibitors regimen may be considered. In LT for metastatic disease, post-transplant adjuvant treatments may also be considered. In the presented case reports no treatments were offered to the patients. Wojciak et al described a case of lymph node recurrence 1 year after transplantation that was treated with excision, radiotherapy, and switching to mTOR inhibition immunosuppression [4]. Lymph node status was carefully evaluated in our case. The preoperatively enlarged lymph node did not contraindicate LT. It was a singular local lymph node that was assessed as resectable during LT surgery.
Parameter Calculation and Working Characteristic Analysis of a New Type of Magnetic Integrated CRT The working characteristic of an electromagnetic equipment is an important reference standard for measuring its working performance. In this paper, we take a new type of magnetic integrated controllable reactor of transformer type (CRT) as the research object, establish its equivalent mathematical model and deduce its calculation short-circuit impedance and winding current formulas. On this basis, the relationship between the working winding instantaneous current, harmonic current coefficient and equivalent impedance of the new magnetic integrated CRT and the thyristor trigger angle is quantitatively analysed, and the working characteristic curves, such as the instantaneous current waveform, harmonic characteristic curve, control characteristic curve and volt-ampere characteristic curve of the working winding, are obtained. On the basis of the MATLAB/Simulink simulation platform, a new magnetic integrated CRT simulation model is established to simulate the working winding current under different trigger angles to verify the established formulas. Mean-while, the response speed of the CRT is simulated and analysed. Results show that the analytical calculation results of the winding cur-rent of the magnetic integrated CRT are consistent with the simulation results. The new type of magnetic integrated CRT has the characteristics of hierarchical smooth regulation and low harmonic current and the ad-vantages of fast response speed and high sensitivity. Furthermore, the CRT’s volt-ampere characteristic is approximately linear. This study is expected to provide some theoretical guidance for the in-depth understanding of the working characteristics of the magnetic integrated CRT and the structural design and optimisation of the CRT. I. INTRODUCTION In recent years, with the centralised access of large-scale new energy power generation, the power system presents new characteristics of high new energy penetration proportion at the source side and AC/DC hybrid connection at the grid side [1][2]. The operation characteristics of high proportion renewable energy are different from those of the traditional power supply. Its power supply fluctuation exceeds the load fluctuation and becomes the main source of power system uncertainty [3]. To cope with the increasingly complex power grid structure and the drastic power flow changes, meet the power demand of different regions, realise the optimal allocation of energy re-sources nationwide and ensure the reactive power balance and voltage control of the power system, extra high voltage (EHV) , long-distance and high-capacity transmission put forward high requirements for controllable reactance [4][5]. As a new type of dynamic reactive power compensation device for realising a smooth inductive reactance output on the basis of the magnetic flux control principle, a controllable reactor of transformer type (CRT) has the advantages of small current harmonic, fast response and continuous adjustable capacity. With the development goal of lightweight and miniaturised electromagnetic equipment, the introduction of magnetic integration technology into the structural design of controllable reactors has become a new direction of the controllable reactor research. A study [6] first introduced magnetic integration technology into the structural design of CRT and proposed an array magnetic integration CRT on the basis of the principle of magnetic flux offset. 
However, the proposed CRT's structure is complex, the number of windings is difficult to expand, and the universality is poor. To solve this problem, a previous study [7] proposed a magnetic integrated CRT with multiple magnetic conductive materials on the basis of the combination of multiple magnetic conductive materials. Although the structure is simplified, it cannot realise complete decoupling between windings, and the no-load current is large. Therefore, another study [8] proposed a split magnetic integrated CRT to address the above problems. However, to achieve high short-circuit impedance, discus is set between the iron core of the working winding and the control winding, increasing the difficulty of process manufacturing. A work [9] proposed a multi basic independent unit magnetic integrated CRT by setting a leakage magnet core column be-tween the working winding and the control winding to replace the discus in the CRT structure in the previous study [8]. However, the setting of multi basic independent units increases the volume, weight and material cost of the equipment. Therefore, on the basis of the multi basic independent unit magnetic integrated CRT, several works [10][11] integrated two basic independent units and proposed a dual control-winding basic unit magnetic integrated CRT to reduce the volume, weight and cost of the CRT. Meanwhile, aiming at minimising the cost and loss, a study [12] established a structural parameter optimisation model for the dual control winding basic unit magnetic integrated CRT and optimised its structural parameters. Obviously, the dual control winding basic unit magnetic integrated CRT is the most superior magnetic integrated structure of CRT [10]. However, although the above research shows the effectiveness of the structure by establishing its equivalent magnetic circuit model and circuit model, the harmonic, magnetisation, control and other working characteristics were not analysed. In fact, the working characteristics of an electromagnetic equipment are an important reference standard for measuring its working performance. To summarise, this paper takes the new magnetic integrated CRT, that is, the dual control-winding basic unit magnetic integrated CRT, as the research object, establishes its equivalent mathematical model and deduces the calculation formulas of short-circuit impedance and winding current. On this basis, the relationship between the instantaneous current, harmonic current coefficient and equivalent impedance of the working winding of the new magnetic integrated CRT and the thyristor trigger angler is quantitatively analysed, and the working characteristic curves, such as the instantaneous current waveform, harmonic characteristic curve, control characteristic curve and volt ampere characteristic curve of the working winding, are obtained. On the basis of the MATLAB/Simulink simulation platform, a new magnetic integrated CRT simulation model is established, and the working winding current under different trigger angles is simulated to verify the established formulas. Meanwhile, the response speed of the CRT is simulated and analysed. This study is expected to provide some theoretical guidance for the in-depth understanding of the working characteristics of the magnetic integrated CRT and the structural design and optimisation of the CRT. Figure 1 shows the fundamental diagram of the new magnetic integrated CRT. 
In Figure 1, BW is the working winding, and its terminal AX is connected to the high-voltage bus; i0 is the working winding current of the CRT; 1 CW , 2 CW , …, CW S are the control windings; 1 T , 2 T , …, T S are the anti-parallel thyristors valve group that are connected in series in the control-winding circuits. The design principle of 'high impedance and weak coupling' shall be met in the structural design of the CRT [13]. Therefore, the working winding and control winding of the CRT shall meet the high impedance with short-circuit impedance of approximately 100%. To realise stepless continuous smooth regulation, the CRT has single branch regulation modes of sequential, fixed and transfer single branches and multi branch regulation mode [14]. Taking the sequential single branch regulation mode as an example, this paper introduces the basic working principle of the CRT. Figure 2 shows the VOLUME XX, 2017 1 structural diagram of the dual control-winding basic unit magnetic integrated CRT that is composed of multiple independent dual control-winding basic units, and each basic unit contains one working and two control winding units. Under the regulation mode of the sequential single branch, by regulating the turn-on of anti-parallel thyristors 1 T , 2 T , …, T s that are connected in series in the control winding circuits at all levels, the control windings at all levels ( 1 CW , …, CW s ) are short circuited in turn to gradually increase the current of working winding BW and meet the transition of the CRT from no load to full load. The single branch regulation mode of the CRT is based on the harmonic dilution principle, that is, the harmonic current generated by only one regulation winding is diluted through the short circuit of multiple control windings to suppress the harmonic current of the CRT [15]. Therefore, the number of stages of CRT control winding should be greater than 1; otherwise, a large number of harmonics will be injected into the power grid under light load [13]. II. WORKING PRINCIPLE OF A NEW MAGNETIC INTEGRATED CRT Dual control-winding basic unit 1 Dual control-winding basic unit k Dual control-winding basic unit (s/2) III. MATHEMATICAL MODEL OF NEW MAGNETIC INTEGRATED CRT The dual control-winding basic unit magnetic integrated CRT is composed of multiple independent dual control winding basic units. Any basic unit (k) of the dual control winding is selected as the analysis object, and its structural diagram is shown in Figure 3. If the winding resistance is ignored, then its equivalent circuit model can be established, as shown in Figure 4. L are the excitation inductors corresponding to magnetic circuits ab, ahgf, af, fe, be and bcde, respectively. The calculation method and formula of each parameter are discussed in detail in reference [11] and thus not presented in this paper. According to the equivalent circuit model of dual control winding basic unit k shown in Figure 4, the calculation formula of impedance between the windings reduced to the working winding side under different working conditions can be deduced, as shown in Equation (1). = is the operating angular frequency of the CRT. 
When the dual control winding basic unit is in no-load, that is, ( + + ) When the dual control winding basic unit is in half load, that is, [ For a magnetic integrated CRT with k independent dual control winding basic units and s = 2k control winding stages, given that control windings are short circuited in turn, the magnetic integrated CRT impedance of the dual control winding basic unit magnetic integrated CRT is as shown in Equation (5). No control winding is in operation In Equation (5), 0 j Z , 1 j Z and 2 j Z represent the shortcircuit impedance of the j-th dual control winding basic unit under the no-load, half load and full load states, respectively, and their calculation formulas are expressed as Equations (1)-(4). In According to Equation (5) (6) Where, N U is the rated voltage of the working winding. A. CALCULATION OF INSTANTANEOUS CURRERNT AND HARMONIC CURRENT OF WORKING WINDING The single branch regulation mode of the CRT means that except for the control branch where the thyristor is in the regulation state, the thyristors of the other control branches of the CRT are in full operation or off state [16]. Obviously, except for the control branch in the regulation state, other control branches will not introduce harmonics to the power system [14]. For the CRT with S-level control winding, the compensation capacity of the CRT increases gradually with the sequential input of control windings If the capacity increasing coefficient The relative value of the amplitude of CRT working winding can be obtained by combining Equation (7) By Fourier decomposition of Equation (10), the relative values of fundamental amplitude and nth harmonic amplitude of CRT working winding current can be obtained, as shown in Equation (11) and Equation (12) respectively. B. ANALYSIS OF CONTROL CHARACTERISTICS The working form and operation sequence of the thyristor valve group in each control winding circuit of the CRT are called the regulation mode of the CRT [13]. Taking the sequential single branch regulation mode of the CRT as an example, this paper analyses its control characteristics. For the sequential single branch regulation mode of the CRT in the regulation process of CRT from no-load to rated load (as shown in Figure 1 (14) To summarise, Equations (11)- (14) shows the interrelations among the working winding current, harmonic coefficient, thyristor trigger angle, capacity increasing coefficient, control winding technology, capacity of each control winding and regulation mode of the CRT. Therefore, the reasonable calculation and optimisation of the above parameters can control the harmonic current of the working winding effectively and improve the control characteristics. C. ANALSIS OF VOLT-AMPERE CHARACTERISTIC The CRT is equivalent to a multi-winding transformer in graded short-circuit state, so the essence of its compensation inductance is the short-circuit impedance of the transformer. Therefore, having a good volt-ampere characteristic during operation is of great significance for the CRT [17]. The unit value of the CRT voltage is m U U U   , and the relative value of the fundamental current amplitude of the CRT working winding is known, as shown in Equation (11). Then, the unit value of the equivalent impedance of the CRT is as shown in Equation (15). Given that the working winding of the CRT is directly connected in parallel to the high-voltage bus and the power system voltage is constant, that is, 1(p.u.) 
= U  , according to Equation (15), the volt-ampere characteristic curve of the CRT in the regulation process can be obtained by changing the trigger angle of the thyristor. V. EXAMPLE ANALYSIS Assuming that the dual control winding basic unit magnetic integrated CRT has control winding stage s=4, it has two independent dual control winding basic units. For the convenience of analysis and calculation, the following are assumed: the rated voltage of the CRT is 220 V, the working frequency is 50 Hz, the rated current of the control windings at all levels is 10 A, and the turns of the working and control windings at all levels are 266. The inductance parameters in the equivalent circuit diagram of the dual control winding basic unit in Figure 4 are shown in Table 1. Given that the rated parameters of the two dual control windings' basic unit structure are the same, their inductance parameters are also the same. According to the Figure 4 and Table 1, the simulation model of the dual control winding basic unit magnetic integrated CRT is established based on the MATLAB/Simulink simulation platform, as shown in Figure 5. A. WINDING CURRENT CAICULATION OF CRT By introducing the inductance parameters of the dual control winding basic unit magnetic integrated CRT shown in Table 1 into Equations (1)-(6), the analytical calculation results of each winding current effective value of the dual control winding basic unit magnetic integrated CRT when control windings 1 CW -CW m (1 ≤ m ≤ s) are short circuited in turn can be obtained, as shown in Table 2. Bringing the working winding current shown in Table 2 into Equation (7), the instantaneous current waveform of the working winding under thyristor regulation can be obtained. Assuming that the 4th stage control winding (CW4) of the CRT is in the regulation state, CW1-CW3 are completely short circuited. At this time, the analytical calculation results of the instantaneous current waveform of the CRT working winding under different trigger angles are shown in Figure 6(a). To verify the analytical calculation results of the winding current on the basis of the CRT simulation model shown in Figure 5, the simulation calculation results of each winding current of the CRT are shown in Table 3. By changing the thyristor trigger angle of the 4th stage control winding, the simulation result of the instantaneous current waveform of the working winding under different trigger angles can be obtained, as shown in Figure 6(b). VOLUME XX, 2017 1 Tables 2 and 3 indicate that the working winding current of the dual control winding basic unit magnetic integrated CRT increases step by step with the sequential input of control windings, and the current of winding that has been put into operation hardly changes with the input of subsequent windings, that is, decoupling is realised among the control windings of the CRT. Combined with Figure 6, for any control winding, the current can be changed by regulating the trigger angle of the thyristor, and the current distortion is not serious. FIGURE.5 Simulation model of dual control-winding basic unit magnetic integrated CRT In addition, the comparisons of the current of each winding shown in Tables 2 and 3 the winding current of the dual control winding basic unit magnetic integrated CRT are consistent with the simulation results, verifying the validity of the CRT equivalent mathematical model, the analytical calculation formula of winding current and the CRT simulation model. B. 
HARMONIC CHARACTERISTIC ANALYSIS OF CRT By combining Equations (8) and (13) with the working winding current shown in Table 2, the variation law of each harmonic coefficient of the CRT under different trigger angles can be obtained, as shown in Figure 7. Figure 7 shows that, over the entire regulation range of the thyristor trigger angle, the coefficients of the 3rd, 5th, 7th, 9th and 11th harmonics are 3.39%, 1.18%, 0.59%, 0.35% and 0.23%, respectively. The harmonic characteristics are therefore good and meet the harmonic requirements of a reactor. Overall, the dual control winding basic unit magnetic integrated CRT has the characteristic of low harmonic current.

C. CONTROL CHARACTERISTIC ANALYSIS OF CRT By introducing the working winding current shown in Table 2 into Equation (15), the control characteristic curve of the CRT can be obtained, as shown in Figure 8. For the CRT, to prevent the injection of a large number of harmonics under light load, the first-stage control winding must be put into operation directly [13]. In addition, with the continuous change of the thyristor trigger angle of the control winding at each level, the working winding current of the CRT achieves a continuous and smooth transition. With the operation of each control winding in turn, the dual control winding basic unit magnetic integrated CRT exhibits the characteristic of hierarchical and smooth adjustment.

D. VOLT-AMPERE CHARACTERISTIC ANALYSIS OF CRT By combining Equations (11) and (15), the volt-ampere characteristic curve of the CRT in the regulated state can be obtained, as shown in Figure 9. Figure 9 shows that the volt-ampere curve of the dual control winding magnetic integrated CRT is approximately linear, indicating good volt-ampere characteristics.

E. RESPONSE SPEED ANALYSIS OF CRT On the basis of the CRT simulation model shown in Figure 5, the transition process of the CRT from 25% load to 100% load is shown in Figure 10. When t ∈ (0, 0.1) s, only the first-stage control winding CW1 is in full operation, and the other control windings are not in operation. When t = 0.1 s, all other control windings (CW2-CW4) are put into operation, and the CRT reaches its rated capacity. Figure 10 shows that the dual control winding basic unit magnetic integrated CRT quickly enters the steady state at the moment a subsequent control winding is switched in, so it not only realises the transition from one steady state to another but also has a very fast response.

VI. CONCLUSION In this paper, a new type of magnetic integrated CRT, that is, the dual control winding basic unit magnetic integrated CRT, is taken as the research object, and its winding current, harmonic current, control characteristic and volt-ampere characteristic expressions are derived. According to the CRT's mathematical model, a MATLAB/Simulink simulation model is established, and the winding current and the response speed are simulated. Finally, its working characteristics are summarised and analysed. The conclusions are as follows. (1) The analytical calculation results of the winding current of the dual control winding basic unit magnetic integrated CRT are in good agreement with the simulation results, which verifies the correctness and effectiveness of the theoretical derivation. (2) The working winding current of the dual control winding basic unit magnetic integrated CRT increases step by step with the sequential input of the control windings.
By regulating the trigger angle of the thyristor in each control winding circuit, smooth adjustment of the current can be realised with little current distortion. That is, the dual control winding basic unit magnetic integrated CRT has the characteristics of hierarchical smooth regulation and low harmonic current. (3) The dual control winding basic unit magnetic integrated CRT can realise the transition from one steady state to another. It has the characteristics of fast response and high sensitivity, and its volt-ampere characteristic is good and approximately linear.
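As a supplementary illustration of the kind of waveform and harmonic behaviour analysed above, the sketch below uses the classical single-branch thyristor-controlled reactor conduction model to compute a regulated current over one cycle and its Fourier components for a given trigger angle. This generic model and the per-unit values are assumptions introduced only for illustration; it is not Equation (7) of the paper, whose exact expressions depend on the CRT winding impedances and regulation mode.

```python
import numpy as np

def tcr_current(alpha_deg, V=1.0, X_L=1.0, n=20_000):
    """Current of a generic thyristor-controlled reactor branch over one cycle,
    for a trigger angle alpha in [90, 180] degrees measured from the voltage
    zero crossing (textbook TCR model, illustrative only)."""
    wt = np.linspace(0.0, 2 * np.pi, n, endpoint=False)
    a = np.deg2rad(alpha_deg)
    i = np.zeros(n)
    pos = (wt >= a) & (wt <= 2 * np.pi - a)          # forward thyristor conducting
    neg = (wt >= np.pi + a) | (wt <= np.pi - a)      # reverse thyristor conducting
    i[pos] = (V / X_L) * (np.cos(a) - np.cos(wt[pos]))
    i[neg] = (V / X_L) * (-np.cos(a) - np.cos(wt[neg]))
    return wt, i

def fourier_amplitude(i, wt, order):
    """Amplitude of the given harmonic order by numerical Fourier projection."""
    c = 2.0 * np.mean(i * np.cos(order * wt))
    s = 2.0 * np.mean(i * np.sin(order * wt))
    return np.hypot(c, s)

wt, i = tcr_current(alpha_deg=120.0)
i1 = fourier_amplitude(i, wt, 1)
for k in (3, 5, 7, 9, 11):
    # harmonic coefficient I_n / I_1 in percent, analogous in spirit to Fig. 7
    print(k, round(100.0 * fourier_amplitude(i, wt, k) / i1, 2), "%")
```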
Building a Machine Learning Powered Chatbot for KSU Blackboard Users —Chatbots have attracted the interest of many entities within the public and private sectors locally within Saudi Arabia and also globally. Chatbots have many implementations in the education field and can range from enhancing the e-learning experience to answer students' inquiries about course schedules and grades, tracking prerequisites information and elective courses. This work aim is to develop a chatbot engine that helps with frequently asked questions about the Blackboard system, which could be embedded into the Blackboard website. It contains a machine-learning model trained on Arabic datasets. The engine accepts both Arabic textual content as well as English textual content if needed; for commonly used English terminologies. Rasa framework was chosen as the main tool for developing the Blackboard chatbot. The dataset to serve the current need (i.e. Blackboard system) was requested from Blackboard support staff to build the initial dataset and get a sense of the frequently asked questions by KSU Blackboard student users. The dataset is designed to account for as many as possible of KSU Blackboard related inquires to provide the appropriate answers and reduce the workload of Blackboard system support staff. Testing and evaluating the model was a continuous process before and after the model deployment. The model post-tuning metrics were 93.4%, 92.5%, 92.49% for test accuracy, f1-score and precision, respectively. The average reported accuracy in similar studies were near 90% on average as opposed to results reported here. INTRODUCTION A chatbot is an artificial intelligence (AI) software that can simulate a conversation (or a chat) with a user in natural language through messaging applications, websites, mobile apps or through the telephone [5]. It"s an environment that receives questions from users in natural language, relates these questions with a knowledge base, and then answer based on pre-defined answers. Chatbots are more formally referred to in the literature as conversational agents or conversational assistants. The core principle of every conversational agent is to interact with humans using text messages and act as it were able to understand the user and replay with the appropriate message. The origin of computers talking to humans goes back to the start of the computer science field itself. Alan Turing defined a simple test referred to now as the Turing test back in 1950 where a human judge would have to predict if the entity they are communicating with via text is a computer program or not [6]. However, this test's scope is way greater than the case of chatbots, the main difference being that the domain knowledge of a chatbot is narrow compared to the Turing test. Turing test assumes one can talk about any topic in mind with the agent. Conversational agent environment consists of five different main parts [7]. Starting with user messages: they are a dynamic input received by the agent to process and replay. They contain a string representation of the actual text sent by the user, and a metadata that contains additional information like a reference to the session the conversation belongs to, and possibly the date and time the message was sent to the agent, on which platform the message was sent from if the agent is linked to more than one message, etc. The agent receives the message along with the information it contains in a read-only mode only with no possible means of making changes to it. 
The backend is one of the significant parts of the environment that the agent has access to. It contains additional information about the agent users and the database's states to store the user messages, their metadata, and keep track of the conversation events. The agent can view and update certain aspects of the backend. The chatbot can also obtain new information from the user if necessary, by asking the user to provide it. RASA is a modular design framework proposed by [1] that consists of two main components, Rasa Core for dialog management and Rasa NLU for natural language understanding. Those are open-sourced python libraries for building machine-learning based conversational agents. They provide dialog management and NLU capabilities in an easy manner. By nature, a modular by design architecture allows for easier integration of modules with other systems and services. For instance, Rasa NLU can be used as a service in a different system other than rasa by exposing HTTP APIs for external requests and vice versa for Rasa Core. The code can be found by visiting: https://github.com/RasaHQ. Although chatbots have been present for a long time, 2016, before the spring, was the true start of this technology. There are two main reasons for the renewed interest in chatbots (1) massive advances in artificial intelligence (AI) and (2) a major usage shift from online social networks to mobile applications such as WhatsApp, Telegram, Slack, and many more advances in AI holds a promise that intelligent chatbots are in fact, can be within reach. The increased usage of mobile applications attracts service providers to reach users through them. However, in spite of these advances, chatbot applications entail many challenges that need to be overcome in order to reach desired goals. Chatbots not only imply changes in the interface between users and technology; they imply changing user dynamics and usage patterns. A recent study indicated that 56% of chatbot users were interested in ordering meals from www.ijacsa.thesai.org restaurants using chatbots, while 34% had already ordered at least one meal [2]. Chatbots are considered to be beneficial for retailers in terms of customer service (about 95%), sales/marketing (about 55%), and order processing (about 48%) [3]. Generation Z and Millennials are more interested in using chatbots: 25% of a global sample aged 18 to 34 opted for a personal shopping chatbot [4], and the students using Blackboard system fall under this age range. King Saud University (KSU) is the largest public university in Saudi Arabia and at any time encompass thousands of students whom some of them use and struggle with the online course and learning management system (blackboard), as such the customer support staff (which is not more than three employees) at KSU is overwhelmed with enquiries which can cause great user dissatisfaction and affect the adaption of Blackboard system by KSU students. The goal of this work is to build a Minimum Viable Product (MVP) for an Arabic Chatbot which is intended to serve users of Blackboard system from King Saud University students by answering their frequently asked questions about the Blackboard system to reduce the load of answering repeated, one answered questions and allow customer service staff to focus on more dynamic issues that require human intervention. II. 
RELATED WORK A revision for several chatbot related papers that highlights the usage of chatbots in the education field was conducted, along with other chatbot usages in different areas such as retail and government entities. A review and summarization of those implementations are discussed in the next paragraphs. A. Chatbots in Education A chatbot named EASElective that was built to advise students on what to choose as an elective course was proposed in [22]. EASElective is a conversational agent that was built to supplement existing academic advising systems. It has an interactive, online interface that supports basic official course information to informal students' opinions about that course. Its major components included intent detection, conversational management routines, dialogue design, course information management, and a collection of analyzed students' peers' opinions. In this study, a survey was conducted to capture students' perceptions of the chatbot. The subjects were briefed about the chatbot's purpose and instructed on how to use it and were given up to a half-hour to interact with it and then fill the surveys. The survey results showed that many students preferred to either ask their friends for course information. Around 22% preferred to ask the program leader or use the official university website instead of the chatbot. There were a number of limitations, including the chatbot not having enough interactions to learn from before going live. And also, the chatbot patterns usages are neither recorded nor pre-defined in advance to prepare the appropriate responses. Another chatbot implementation to enhance the LMS experience was proposed by [23]. This model classifies the main keywords that could be asked by students using R programming language, and this classification is then used in an Artificial Intelligent Markup language (AIML) script as a query. If this query was unsuccessful, it would run against SQL lite. If neither AIML nor SQL lite worked, then the student query will be transferred to a human agent to take over and answer the query. Although the implementation of AIML scripting language is easy and also free to use as a scripting language, this model is a rule-based model and is less tolerant to changes in users' input and, thus, harder to capture the user intent. Another study for developing a chatbot for university inquiries was put forward by [24]. This study discussed the development of a deep-learning based chatbot using RASA framework. RASA has many connectors to be used in integrating it with communication platforms. One of them is for Facebook. This chatbot is integrated with FB as the majority population is using FB as their main social media channel. This chatbot uses Long Short-Term Memory (LSTM), which is a recurrent neural network architecture that is used in deep learning. This architecture is included in RASA framework. Although the chatbot performed well in terms of intent classification and provided the appropriate replays, there was a platform limitation as they had to perform platformspecific steps to run the chatbot on Facebook, which can result in some limitations to the interaction with the chatbot. A chatbot for instantly answering students' questions to reduce teacher's workload was proposed in [25]. It supports multiple common social platforms, including Telegram, Facebook Messenger, and Line. The chatbot can reply to commands and natural language questions. 
Once the instructors transfer the course-related data to an internet database, the chatbot can reply to questions about the course materials and logistics (e.g., course plan). It also supports student login to provide profile-based answers such as the schedule of student registered courses. B. Chatbots in other Fields Chatbots also have many usages in other fields besides education. Some of those applications are in healthcare, such as self-diagnoses based on symptoms, using chatbots as a communication means in e-commerce websites, providing account data and paying bills in banking, etc. Below are some of the related works of chatbots. A text-to-text chatbot engages patient's medical issues were proposed by [26]. It's a medical chatbot that diagnoses diseases using AI. This chatbot was built to reduce medical costs and improve patient's accessibility to medical knowledge. In this chatbot, a series of questions about the patient's symptoms are asked to give suggestions that help in clarifying the disease. The accurate disease is fount based on the user reply to those series of questions, and in case of major diseases, a doctor is suggested to be consulted. The patient's past responses are recorded, and in order to reach an accurate diagnosis, the patient is asked more specific questions. There are three main components of the system, which are (1) user validation and symptoms extraction from the conversation, (2) mapping of extracted, potentially ambiguous symptoms to their corresponding database codes, and (3) personalized diagnosis and referring the patient to a specialized doctor if required. The sole focus of this system is extracting symptoms by analyzing natural language using NLG components, which in term makes it easier and less technical for the end-user. www.ijacsa.thesai.org Another example of chatbot usage in e-commerce to support customers in their website journey is called "SuperAgent" [27]. This chatbot scrapes public e-commerce websites' content of products description, user questions and answers, and product reviews and feeds them to its knowledge base. It uses NLP techniques to understand users' text and machine learning techniques to predict responses to it, including opinion mining for product reviews, fact QA for product information, and FAQ search for customer reviews and chit-chat for greetings and goodbyes. ChatPy is one of the chatbot implementations in the wholesale business [28]. "Mundirepuestos" is a wholesaler automotive spare company. This company is an SME company that started operating in 1992 that specialized in the distribution and sales of Volkswagen, Skoda, and Audi automotive parts. ChatPy is a conversational agent built mainly using a tool called Dialogflow. This tool makes use of intents, actions with parameters, entities, voice-to-text, and text-tospeech with automatic learning. A major reason for choosing this tool was its compatibility with the most known messaging platforms. A summary of chatbot related works in different fields is shown in Table I. Opinion mining for product reviews, fact QA for product information, and FAQ search for customer reviews and chit-chat conversations. No intent detection. ChatPy: Conversational agent for SMEs: A case study [28] Business DialogFlow's ML engine and knowledge base. Facebook-specific implementation to run the chatbot which limits customization. 
To avoid the issues in [22], the system needs to be deployed internally and used by students from diverse backgrounds while their usage patterns and interactions are recorded, which the Rasa framework does by default. Rule-based chatbots like the one presented in study [23] cannot learn, which is not the case for Rasa: its interactive learning capabilities allow it to learn and to refrain from making the same mistake in the future. Unlike the cases presented in studies [24] and [28], this work uses the Rasa API to communicate with the chatbot, which removes the platform-specific limitations and allows for more customization. A sufficient number of participants is needed to evaluate the chatbot fairly and overcome the limitation in study [25]. As opposed to the case presented in study [26], Rasa provides a fallback policy that can be triggered when the confidence of the predicted action falls below a specified threshold; this fallback can be used to ask the user to rephrase or to show some buttons for the user to choose from. Knowing the user intent helps greatly in providing the right answer to the user's question and in performing actions based on that intent; Rasa uses deep-learning embeddings to detect user intention, which is not the case in study [27].

Fig. 1 below gives an overview of the Rasa open-source architecture, which consists of two main components: Rasa NLU and dialogue management (Core). Rasa NLU is responsible for predicting intents, extracting entities, and retrieving responses, and it uses the model saved in the filesystem. The Core component is responsible for choosing the appropriate next action with regard to the conversation context, and it uses the Tracker store to store the conversation states, messages, and metadata. Rasa ensures that messages are processed in the right order using the lock store. Actions run on the Action Server and are executed when called by the Core component. Fig. 2 shows the message flow and how the Rasa architecture works.

III. SYSTEM DESIGN

The user first types in a message; this message is passed to the interpreter, in which NLU is used to extract the user intention (intent for short) and any entities contained in the text. The conversation state is then saved in a Tracker object, and an event is created, i.e., the arrival of a new message. The state is received by the policy, which chooses the next action to be taken. The action is also logged into the Tracker and then executed; it could be a response based on an external API call or a simple text response that is sent back to the user.

A. Data Collection

To gather the data required to train the chatbot to answer the questions most frequently asked by students, the LMS Blackboard team admin was contacted to provide this data. The data is in Word document format and is used to manually generate the training data and build the chatbot stories used to train the chatbot. It contains 17 of the questions frequently asked by Blackboard users; examples are shown in Appendix 1. Two datasets need to be built. The first is for the NLU model and contains examples for each user intent along with labeled entities; the data for this project are in MSA (Modern Standard Arabic) with some common English words. The second dataset is for the Core (dialogue management) model and contains all the possible conversation flows (intents with their corresponding actions).
The latter may not be needed when using the mapping policy, which maps each intent directly to an action or a response template. The dataset serving the current need (i.e., the Blackboard system) was requested from the Blackboard support staff in order to build the initial dataset and get a sense of the questions frequently asked by KSU Blackboard student users. This data is then expanded by synthesizing text that chatbot users might ask, to increase the chances of understanding students' questions about the Blackboard system. For chatbot systems, the datasets should be continuously updated after deployment for continuous enhancement.

The data for both the NLU and Core models is written in a user-friendly format to make it easy to build, revise, and edit. For the NLU model, examples for each intent along with labeled entities are created. There are two available formats for building the dataset, JSON or markdown; markdown is the most commonly used, as it can be rendered by most text editors.

B. Building NLU Corpus

The main goal of building this corpus is to expose the chatbot to many examples of what a user might say to express a specific intention. An illustrative markdown NLU record is sketched at the end of this section; the actual NLU corpus can be found in Appendix 2. The other format is JSON, which is not sensitive to whitespace and is better suited to exchanging data among applications.

C. Building Stories

Stories are the type of data used to teach the chatbot the possible messaging flows with the user. Markdown is used to specify the conversation paths, i.e., the stories. The naming convention for stories is to start with two hashes followed by the story name, and actions are events that start with a dash; an illustrative story is included in the sketch at the end of this section. The actual Core model corpus can be found in Appendix 3.

D. Implementation

The Rasa environment has a list of hardware and software requirements for running Rasa on Docker. Although minimum hardware requirements are listed on the official Rasa website, the actual requirements depend on the size of the model and the training data, as training time is positively correlated with the size of the NLU data. These requirements need to be met to develop and train the chatbot in a productive manner. A markup language will be used for building the dataset and defining the stories and the domain; the command-line interface will be used for training and testing the model; Python 3.6 or higher will be used for developing the chatbot actions and replies; and, finally, Docker will be used to host the chatbot system.

The domain is the context the chatbot operates in. It is where the user intentions (intents), entities, actions, responses, and slots that the chatbot should know about are defined. The domain is specified in the domain.yml file, which can be found in Appendix 4. For the initial model configuration, the configuration suggested on the official Rasa website will be used, and the data will be trained with that configuration. In the testing and evaluation phase, the model will be fine-tuned and evaluated to select the best parameters for the model configuration.
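For concreteness, the two markdown formats described above might look roughly like the following sketch. It is illustrative only: the intent names, response names, and English example sentences are invented placeholders, and the project's real corpus is written in Arabic and listed in Appendices 2 and 3. An NLU record lists user utterances under an intent:

```
## intent:faq_reset_password
- I forgot my Blackboard password
- how can I reset my Blackboard password?

## intent:faq_submit_assignment
- the submit button is not working
- how do I upload my assignment?
```

A story then chains user intents to bot actions; the story name follows two hashes, user intents are prefixed with an asterisk, and actions start with a dash:

```
## assignment submission help
* greet
  - utter_greet
* faq_submit_assignment
  - utter_submit_assignment_steps
```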
IV. TESTING AND EVALUATION

As opposed to traditional software testing techniques such as unit tests and functional tests, Rasa has its own types of tests: the data validation test, the NLU model test, and the dialogue management model test. The purpose of data validation is to make sure that there are no typos or major inconsistencies in the data or the domain. Fig. 3 indicates that there are no errors or inconsistencies in the chatbot data. If there were errors in the training data, they would have to be fixed and the model retrained, as errors cause the model to stop working or to produce unwanted behavior.

By synthesizing test stories, we can simulate users' interactions and test the chatbot on data it has not seen before. This allows us to see whether the model behaves as expected when provided with such data. Test stories are similar to training stories, with a single difference, which is the user message. To test the chatbot, three to four test stories were written for each intent, for a total of 61 test stories placed in "tests/test_stories.yml"; these test stories can be found in Appendix 5. They are written by the chatbot developer in a way that simulates actual interaction with the chatbot. The purpose of these tests is to check whether the dialogue model predicts the next action in a conversation correctly. For example, when the user sent "أهلا" and the intent classifier predicted the "greet" intent, did the dialogue model predict the next action to be "utter_greet", as the developer wrote in the test story, or not?

To test the natural language understanding (NLU) model, we need to split the training data into train/test sets to simulate external user input that the chatbot has not seen before; cross-validation tests were then performed. To test the dialogue management model, we use the test stories created earlier. A predicted story is considered failed if at least one of its actions was falsely predicted.

Table II shows the results of running 5-fold cross-validation on the NLU model. The training accuracy, F1-score, and precision are all 1, while the test accuracy, F1-score, and precision are 0.924, 0.911, and 0.922, respectively, which is a good starting point; the model can be further optimized, as we will see later. The confusion matrix in Fig. 4 allows us to see which intents were mistakenly predicted as another intent. For example, the intent "greet" was twice falsely predicted as "goodbye" and once as "affirm", and the intent "FAQ_submit_button_is_not_working" was twice falsely predicted as "FAQ_in_lms_sound_issue", and so on. This graph is particularly helpful for optimizing the NLU model by adding more examples and removing examples that might mislead the model into falsely predicting intents. The intent prediction confidence distribution histogram in Fig. 5 shows how many samples were correctly and wrongly predicted, along with the confidence of each prediction. For the model to perform well, we need to minimize the number of wrongly classified samples, which automatically increases the number of correctly classified ones.

From Table III, we can see that all actions were predicted correctly, with values near 1 for F1-score, precision, and accuracy. The reason for such high results is that the dialogue management model classifies actions based on the results of the intent classifier: if there are no errors in predicting the intention of the user, the prediction of the next action becomes easier and hence results in a high hit rate.
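For reference, the checks described in this section correspond to standard Rasa command-line calls along the following lines. This is a sketch that assumes a typical Rasa 2.x project layout; exact flags can differ between versions.

```
rasa data validate                           # check NLU data, stories and domain for inconsistencies
rasa train                                   # train the NLU and dialogue (Core) models
rasa test nlu --cross-validation --folds 5   # 5-fold cross-validation of the NLU model
rasa test core --stories tests/              # run the hand-written test stories against the dialogue model
```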
As mentioned earlier, we will try to optimize the NLU model by adding more examples and removing examples that might mislead the model into falsely predicting intents. We will also change some of the NLU model configurations to see whether those changes yield better results (Table IV). Although these are minor changes, they do have an effect, and they show that it is possible to optimize the model further by adding more data and tuning the model parameters to find the ones that best fit the data. The average accuracy reported in the similar case studies mentioned in [22], [24], and [27] is near 90%, whereas our results are slightly higher.

V. CONCLUSION

This work intended to develop a chatbot engine that helps with frequently asked questions about the Blackboard system and that could be embedded into the Blackboard website. It contains a machine-learning model trained on Arabic datasets. The engine accepts Arabic textual content as well as English textual content where needed, for commonly used English terminology. The interactions with the chatbot, as well as the users' evaluations, are stored and used to optimize the chatbot model and improve future interactions. Developing a chatbot system entails many challenges: preparing a training dataset that covers as many of the users' inquiries as possible without confusion, preprocessing the data before feeding it to the NLU model in order to normalize it and remove unnecessary words and symbols that could confuse the model, and deploying and maintaining the model in production.

The Rasa framework was chosen as the main tool for developing the Blackboard chatbot. The actual chatbot implementation started with preparing the datasets required for the Rasa NLU and Core models. The dataset is designed to cover as many KSU Blackboard-related inquiries as possible in order to provide appropriate answers and reduce the workload of the Blackboard system support staff. Once the data was ready, model training and tuning began, along with a number of experiments to find the model pipeline that best fits the data. The chatbot is built using a combination of tools, with Python for programming and YAML as the markup language. For future work, the chatbot should be deployed using Docker and Docker Compose to run the chatbot service. The chatbot can also be deployed in a distributed cluster, either in the cloud or on premises, to handle the workload and make the chatbot system scalable.
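As a rough illustration of how a front end such as the Blackboard website could talk to the deployed engine, the snippet below posts a user message to Rasa's built-in REST channel and prints the replies. It is a sketch only: it assumes the rest channel is listed in credentials.yml, that the server started with `rasa run` is reachable at the default local address and port, and the sender ID and Arabic question are invented examples.

```python
import requests

# Default address of Rasa's REST channel when the server runs locally;
# adjust the host and port for a real deployment behind the Blackboard site.
RASA_REST_URL = "http://localhost:5005/webhooks/rest/webhook"

def ask_blackboard_bot(sender_id, message):
    """Send one user message and return the bot's reply texts."""
    response = requests.post(
        RASA_REST_URL,
        json={"sender": sender_id, "message": message},
        timeout=10,
    )
    response.raise_for_status()
    # The REST channel answers with a list of messages addressed to the sender,
    # each carrying a "text", "image" or "buttons" payload.
    return [m.get("text", "") for m in response.json()]

if __name__ == "__main__":
    # Hypothetical student asking, in Arabic, how to upload an assignment.
    for reply in ask_blackboard_bot("student-001", "كيف أرفع الواجب في بلاك بورد؟"):
        print(reply)
```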
2023-03-08T16:21:13.370Z
2023-01-01T00:00:00.000
{ "year": 2023, "sha1": "89dffdf58bf384a8a9dcf8a764938ff4edc3451f", "oa_license": "CCBY", "oa_url": "http://thesai.org/Downloads/Volume14No2/Paper_90-Building_a_Machine_Learning_Powered_Chatbot.pdf", "oa_status": "GOLD", "pdf_src": "Anansi", "pdf_hash": "5b3405dac2e7c2bc015a143b0d5c28025e3b7bd9", "s2fieldsofstudy": [ "Computer Science" ], "extfieldsofstudy": [] }
15477563
pes2o/s2orc
v3-fos-license
Qualitative Approach to Attempted Suicide by Adolescents and Young Adults: The (Neglected) Role of Revenge

Background Suicide by adolescents and young adults is a major public health concern, and repetition of self-harm is an important risk factor for future suicide attempts. Objective Our purpose is to explore the perspective of adolescents directly involved in suicidal acts. Methods Qualitative study involving 16 purposively selected adolescents (sex ratio 1:1) from 3 different centers. Half had been involved in repeated suicidal acts, and the other half only one. Data were gathered through semi-structured interviews and analyzed according to Interpretative Phenomenological Analysis. Results We found five main themes, organized in two superordinate themes. The first theme (individual dimensions of the suicide attempt) describes the issues and explanations that the adolescents saw as related to themselves; it includes the subthemes: (1) negative emotions toward the self and individual impasse, and (2) the need for some control over their lives. The second main theme (relational dimensions of attempted suicide) describes issues that adolescents mentioned that were related to others and includes three subthemes: (3) perceived impasse in interpersonal relationships, (4) communication, and (5) revenge. Conclusions Adolescents involved in suicidal behavior are stuck in both an individual and a relational impasse from which there is no exit and no apparent way to reach the other. Revenge can bridge this gap and thus transforms personal distress into a relational matter. This powerful emotion has been neglected by both clinicians and researchers.

Introduction

Adolescent suicide is a major public health concern in all western countries. Epidemiological data show that it is one of the three leading causes of death worldwide among those younger than 25 years [1,2]. A more statistically widespread phenomenon is attempted suicide: its prevalence is about 7.8% in the United States [2] and 10.5% in Europe [3]. The highest attempted suicide rate is recorded among those aged 15-24 years, and their attempted/completed suicide ratio is estimated to be between 50:1 and 100:1 [4]. The prevention of suicidal behavior is therefore a primary social and medical concern throughout the world [1,5]. Nonetheless, despite a large number of research and prevention programs, the attempted suicide rate among youth is increasing [6], and secondary prevention interventions have thus far achieved limited results [7,8]. The numerous studies, conducted from multiple perspectives (including psychological, psychiatric, and sociological), show that one of the most important risk factors for attempted suicide is a previous attempt [9][10][11]. According to a recent English study, repetition of self-harm occurs in about 27% of adolescents, and the four major risk factors for repetition are age, prior psychiatric treatment, self-cutting, and previous self-harm. This study also found that youths who sought care at a hospital for self-harm are 10 times more likely to die by suicide than would be expected in this age group [12]. Although an understanding of the adolescent perspective is essential in preventing the relapse of suicidal behaviors, the subjective experience of those directly involved in suicidal acts has not been sufficiently explored [13]. Qualitative methods are particularly suited to investigating participants' viewpoints, their lived experiences, and their interior worlds [14,15].
Nevertheless, qualitative research in adolescent suicidology is rare [16]. To our knowledge, only two qualitative studies [17,18] have directly addressed the problem of relapse of suicidal or self-harming behavior among youth. In particular, one of them showed that current services respond inadequately to self-harming behaviors among young people and struggle to deal with the needs this population experiences [18]. The aim of this qualitative study is to explore the perspective of adolescents (for clarity's sake, we refer to our participants as adolescents) who have directly engaged in suicidal acts (in either single or repeated suicide attempts). Exploring the factors related to their success or failure in overcoming and moving beyond the suicidal period might provide clinicians with important insights useful in caring for young people involved in suicidal behavior, especially with a view to preventing repetition.

Participants and Setting

Participants received complete written information about the scope of the research, the identity and affiliation of the researchers, the possibility of withdrawing from the study at any point, confidentiality, and all other information required in accordance with Italian policies for psychological research and with the Helsinki Declaration, as revised in 1989. Participants (and their parents, for minors) provided written consent. This research received approval from the institutional review boards of the three hospitals involved: Santa Giuliana Hospital, Verona; Este Hospital, Padua; Monselice Hospital, Padua. These were two local general hospitals (with inpatient and outpatient adolescent psychiatric departments) and one psychiatric hospital in northeastern Italy. Physicians or psychologists at these hospitals were contacted and asked if they had patients who might be appropriate subjects for a study of adolescent suicide attempts. Subjects were eligible only if they had attempted suicide during adolescence or in the postadolescent period and were aged 15 to 25 years old at the time of the interview. Eligible subjects were then contacted. Purposive sampling [19] was undertaken, and inclusion of subjects continued until saturation was reached [20]. As recommended for Interpretative Phenomenological Analysis (IPA) [21,22], we chose to focus on only a few cases and to analyze their accounts in depth. Moreover, to include a heterogeneous sample with maximum variation [19], we included both adolescents with only a single suicidal act and those with multiple acts. We were therefore able to consider a wide range of situations and experiences. Sixteen Italian adolescents (sex ratio 1:1) freely agreed to participate in the study (two refused, one male and one female). Their median age was 20 years at the interview, and 16 at the suicide attempt. Half had a history of previous attempts (≥1, see Table 1).

Data Collection

Data were collected through 16 individual semi-structured face-to-face interviews. The interviews were audio-recorded and subsequently transcribed verbatim, with all nuances of the participants' expression recorded. An interview topic guide (Table 2) was developed in advance and included 8 open-ended questions and several prompts. The logic underpinning the construction of the interview guide was to elicit in-depth and detailed accounts of the subjects' feelings before the suicide attempt and afterwards, as well as the expectations and meanings that they connected to this action.
Our overall objective in using this qualitative method was to put ourselves in the lived world of each participant and explore the meaning of the experience to each of them. Fourteen interviews took place at the adolescents' treatment facility, one at the adolescent's home, and one at the residential facility where the adolescent was living. Given the sensitive topic of our interviews, considerable attention was paid to evaluating participants' opinions about the interview after it ended. All adolescents felt comfortable discussing their experience and explaining their perspective without receiving any judgment from the researcher. Referent psychologists or physicians never reported any concern. In addition, the researchers themselves discussed their own feelings about the interviews during study group meetings, in order to take into account potential influences on data collection and analysis (reflexivity).

Data Analysis

Qualitative analysis was performed according to IPA methodology. The aim of this method is to understand how people make sense of their major life experiences by adopting an "insider perspective" [23]. Three epistemological points underpin IPA: first, it is a phenomenological method that seeks to explore the informants' views of the world. As Husserl pointed out [24], the objective of phenomenology is to understand how a phenomenon appears in the individual's conscious experience. Hence, experience is conceived as uniquely perspectival, embodied, and situated [21]. Second, IPA is based on hermeneutics: interpretative activity, as defined by Smith & Osborn [22], is a dual process in which the "researcher is trying to make sense of the participant trying to make sense of what is happening to them". In practice, during the analysis, the researcher might move dialectically between the whole and the parts, as well as between understanding and interpretation. Third, the idiographic approach emphasizes a deep understanding of the individual cases. IPA is committed to understanding the way in which participants understand particular phenomena from their perspective and in their context [21]. The analytic process proceeded through several stages: we began by reading and rereading the entirety of each interview, to familiarize ourselves with the participant's expressive style and to obtain an overall impression. We took initial notes that corresponded to the fundamental units of meaning. At this stage, the notes were descriptive and used the participants' own words; particular attention was paid to linguistic details, including the use of expressions (especially youth slang) and metaphors. Then conceptual/psychological notes were drafted, through processes of condensation, comparison, and abstraction of the initial notes. Connections among notes were mapped and synthesized, and emergent themes developed. Each interview was separately analyzed in this way and then compared with the others to enable us to cluster themes into superordinate categories. Through this process, the analysis moved through different interpretative levels, from more descriptive stages to more interpretative ones; every concept not supported by data was eliminated. The primary concern for researchers is to maintain the link between their conceptual organization and the participants' words [25]. For this reason, the categories of analysis are not worked out in advance, but are derived inductively from the empirical data.
To ensure validity, two researchers (MO and MP, both expert psychologists trained in qualitative research) conducted separate analyses of these interviews and compared them afterwards. A third researcher (ARL, a psychiatrist specialized in qualitative research) triangulated the analysis. Every discrepancy was negotiated during study group meetings, and the final organization emerged from the concerted work of all the researchers. We considered data saturation to have been reached because no new aspects emerged from the interviews in any of our themes (i.e., no more codes were added to our codebook), and the last interviews did not provide additional understanding of our participants' experience. We report the study according to the COREQ statement (Table S1).

Results

We identified five themes describing the experience of attempted suicide as narrated by participants and organized them into two superordinate themes, according to the meaning the adolescents attributed to their suicidal act (Figure 1): the first superordinate theme (individual dimensions of the suicidal act) comprises the issues and explanations that the adolescents saw as related to themselves; it includes the themes: (1) negative emotions toward the self: the experience of an impasse with no exit, and (2) the need to have some control over their lives. The second superordinate theme (relational dimensions of the suicidal act) involves issues with others in the three subthemes: (3) perceived impasse in family and peer relationships, (4) communication, and (5) revenge.

Theme I: Individual dimensions of the suicide attempt

Two subthemes comprised this first theme: (i) negative emotions toward the self: the experience of an impasse with no exit, and (ii) the need to have some control over their lives.

Negative emotions toward the self: individual impasse. During the interviews all participants gave detailed descriptions of themselves, their state of mind, and the thoughts that led to the decision to attempt suicide. The words they used to talk about themselves described a devalued self, in which their dominant feeling was that they were not accepted. That day, I took the pills looking myself in the mirror…I kept repeating that I was disgusting, that no one really cared about me…[I was thinking] that everything about me was wrong! That nothing I did came out right…I don't know, I continued this thing of not feeling accepted, not feeling that anybody cared about me… (F4). Shame and guilt were the feelings that adolescents evoked most frequently during the interviews, and their narratives were dominated by a sense of estrangement, loneliness, and loss of any meaning to their lives. One participant described her feelings of loneliness with a meaningful metaphor: I was alone, stretched out on the ground, I didn't know what to hang on to…I was looking in vain for something to hang on to, but I failed…essentially I was alone… (F3).

Need to have some control over their lives. These adolescents broached issues of control and mastery during their interviews in several ways. During the period before their act, they lived a situation that they perceived was out of their control. They described their struggles to move beyond this lived situation which, as we have just reported, appeared impossible to overcome or resolve, and which they experienced passively and were subjected to.
What emerged from the interviews was that acting on their body offered them control of/over their life, in contrast to all the other uncontrollable situations they were living. Half of the adolescents interviewed had cut themselves as a positive action, to make themselves the actor of something in their life. I had no control over the others, but I had control over myself…so I could do what I wanted to myself…and the cuts were a way to comfort my pain… I still have the scars - blood everywhere, I was crying, but…but the problem was still there…however, during these moments […] it was as if I had control of my life… (F7). These adolescents lived their suicide attempt as an escape from an overwhelming life situation that was beyond their ability to manage: I said 'that's OK, stop, let's finish it off, that way, I'll put everything straight…I won't have to think about anything anymore, there won't be anything to deal with, and…everything will be better. Interviewer: What do you mean by "everything will be better"? That is, more than anything, that there will be nothing else so it will necessarily be better! […] I was glad to have made that decision… I was glad and sure about my decision… (M7). Narratives related to the post-suicidal period shed light on the failure of the adolescents' attempts to achieve control of their own lives. They talked about feelings of anger, described as a physical and violent rage closely linked to the failure of their act, and about finding themselves in a situation they perceived as still more difficult. They lived the failure of their act as yet another demonstration of their ineptitude, just one more in their long string of personal failures. Interviewer: What about the changes in your life [after the suicide attempt]? Nothing…maybe, I began to see things darker […], I thought I wasn't able to do anything, that I was afraid…now I'm tired, I can't take it anymore, before it wasn't like this […]. I began to see everything as darker…I began to think that I was wrong, that I was the problem…because when there is a problem now, I give up…and before it wasn't so. From that, I feel my life has changed (F6).

Theme II: Relational dimensions of the suicide attempt

The second superordinate theme is the relational dimension of the suicidal act. The three subthemes belonging to this domain are described below.

Perceived impasse in interpersonal relationships. Our participants' narratives of their family relationships focused on the description of an impasse, a sort of gridlock dominated by the absence of acceptance or trust and the perception of being written down or even written off. It seems to parallel the negative emotions toward the self and the perceived impasse described above (theme 1). Because I was changing and they didn't realize that, they only realized it when I ran away from home […] at the beginning, I did it because…that is, I didn't even think about it much, but then, as the hours were going by I kept on thinking about it and…I don't know, but it was like running away to make myself visible… (M1).
They also directly linked their need to escape and their choice to attempt suicide: "When I began to make her understand that I wasn't going to accept this situation anymore, all hell broke loose…and then, from that, my act…since I began to tell her 'look, Mama, I can't take that anymore.' …she didn't accept that…maybe she understood I'm no longer the baby who's happy with a new pair of shoes so she'll be good, keep quiet and make believe she's happy…I don't know…" Interviewer: can you tell me more about the relationship between that and your act? I think that it is… the fundamental relationship…I think that is the main reason that I did it, fundamentally… (F6). The peer group was also described as a source of intense emotions. Although the narratives revealed that the teens hoped their peer group might supply what their families failed to give them, these texts also demonstrated fragility. Sometimes, they felt that being part of their peer group produced emotions very similar to those about their family life; this increased the feelings of loneliness and of not being understood: I felt they were superficial, and I didn't want to keep on pretending to be like that…I didn't feel at ease with them, and slowly I lost the people I went out with (M5). A frequent topic was the emotional investment in one core relationship, an investment the adolescents perceived as a way to cope with the instability and difficulties of their lives. It was described in terms of dependency: the relationship became the repository of their hopes, and the person they were involved with, the reference point of their life: My ex-boyfriend F. was my first one…I was sixteen…my first sexual relationship, my first love story, it lasted 3 and a half years. He was my reference, because my parents are separated, my father is far away, and I have an awful relationship with my mother…and he was like… like an older brother… a father…his mother was like a mother to me, and she was almost my mother for three and a half year […]. With F. I had finally found that kind of stability…but, I guess it was only a stopgap, a stopgap that covered up all my problems…and in fact, when he was gone, they all reappeared on the surface (F3).

Communication. All the participants explicitly described the communicative issues related to their suicide attempt. It is clear that each suicidal act was primarily an interpersonal act, concerning not only the self but also the environment of significant others. The suicide attempt was closely linked to a situation with which the adolescent could not deal - all efforts were in vain. Suicide thus became the only possible way to get the person to listen to the adolescent's difficulties and to send a message that was impossible to deliver otherwise. The suicidal act was described as the only choice, once every other communicative possibility had failed. I was sick and tired of my mother's behavior…and to keep on talking was useless. I went on for several months and kept talking and talking and…that was hurting me…and I was tired. And so I finally did something like that [attempted suicide], but it was mainly to make her understand that she was killing me!…either she would kill me, or…or I had to find another way […]. If I tried to do that there, it's because I had already talked about it in every other way… (F4). Our analysis of the narratives about the period after the suicidal act found these youth travelled two different paths.
Those who successfully emerged from the suicidal crisis described the first as a progressive opening of the line of communication with others, a process that established a basis for a change in the family relationship: […] So, I realized that I had made her suffer so much, and that she had done so much for me…to help me, but I didn't realize that… I wasn't going to listen to her, or even give a damn,… because I believed that she couldn't possibly succeed in understanding me… (F1). This excerpt shows that the communication that developed after the suicide attempt led to the explicit recognition of feelings, emotions, and thoughts that had been present before the attempt, but never successfully communicated. It is important to note that it was not a dialogue about the suicidal act, but an attempt at mutual understanding. The second path was that of the adolescents who described a situation in which dialogue and communication remained as impossible after the suicide attempt as they had been before. The communication so unambiguously embedded in the attempt remained unanswered. The indifference described by the participants - including, for some, their family's refusal to admit they had attempted suicide - had the effect of reinforcing the feelings that led to the attempt. They didn't create a good situation…they act like they did when I crashed the car when I was drunk… They rub it in that they can't even fall asleep at night, they rub everything in, they were really full of hatred…and every time I did it [attempted suicide], it was always worse, because they were increasingly irritated, and I increasingly hated them…and so…the situation just kept getting worse (F7).

Revenge. A strong relational theme that the participants described explicitly was revenge. Several adolescents explained the aggressiveness of their act as a way to make other people feel guilty for their deaths and made the vindictive intent of the attempted suicide very plain, as the following excerpt shows: Revenge carries a message, one intended to make the others aware of their mistakes, their carelessness. One adolescent described it as a communication that was impossible to misunderstand: finding her body will cause her parents "suffering, crying, and regret" (F5). It almost appears that she expects to be present to witness the scene. It is a way to put the blame on others and make them feel guilty through remorse:

Discussion

Our phenomenological analysis of young adults' accounts of their suicide attempts elicited five themes that described the experiences they lived. These themes were organized into two superordinate themes, according to whether they concerned the individual or the relational dimensions that emerged from the narratives. We showed that the attempts to link the two dimensions - to communicate their anguish - were a key aspect of our participants' experience. The vengeful meaning of suicide that we found exemplifies this attempt to reach a relational dimension, to hurt someone else by hurting oneself. According to Knoll [26], revenge is an intense and pervasive emotion that has nevertheless received little attention, especially in the domain of youthful suicidal behavior. Our findings showed that revenge is a strong other-directed emotion, which aims to communicate an individual's own internal state by inflicting permanent suffering on others - by suicide.
This revenge, moreover, is not only directed at others but is also a means of relieving one's own intense experience of internal struggle and helplessness. Clinicians caring for suicidal adolescents need to acknowledge the violence (aggressiveness and revenge) inherent in the suicidal act. It is not obvious for them to think about violence, aggression, and revenge when they are confronted with these teens. This study provides an opportunity to illuminate this aspect of suicide and make clinicians aware of the role of this powerful emotion. We argue that openly addressing this issue with adolescents themselves and their families may play an essential role in helping them recognize the multiple factors (both individual and relational, as we showed) that led to a particular suicide attempt, put things in perspective (clarifying the individual/relational confusion), and begin the process of moving beyond the crisis and avoiding a repetition.

Comparison with the literature

Our findings are consistent with previous work. The subthemes of the first theme (individual dimension of attempted suicide) show the subjective experience of loneliness, isolation, and negative emotions toward the self. The experience of suicidal acts described by adolescents is primarily a solitary experience involving the loss of any meaning in life and the impossibility of finding another way to exit a perceived impasse. Studies focusing on the internal world of the suicidal adolescent have consistently demonstrated negative emotional experiences [17,27,28]. We show that the need to recover control over one's own life plays an important role in the decision to kill oneself, as others have found [9,18,28] for people involved in non-suicidal self-harming behaviors [29]. The subthemes of the second theme deal with the relational dimensions of the act. Adolescents described the meaning of the situation that led to their decision to attempt suicide with interpersonal explanations, such as a lack of communication with their family and peers, a sense of not belonging to either group, and the impossibility they felt of overcoming an interpersonal stalemate. Moreover, they recounted changes that the primary suicidal act produced (or failed to produce) in their interpersonal world that eventually enabled important relationships to be restructured in ways that, for example, increased mutual understanding. Several authors have investigated the relational aspects of suicide attempts in various populations, including LGBT people [30], ethnic minorities [31], and depressed adolescents [32]. Consistent with our findings, these studies pointed out the importance of interpersonal relations in understanding both the reasons for suicide attempts and the patterns of recovery in adolescent suicidal behavior. We go further, however. Although previous studies have mentioned the relation between the individual and interpersonal dimensions of suicidal acts, they have not discussed it clearly, and several gaps remain. The hypothesis we propose, which emerges from our findings, is that confusion exists between these two dimensions. Adolescents continually try to link their individual state of personal distress, helplessness, and loneliness to the presence of others, seeking to connect. They described situations in which their unhappiness is not recognized or acknowledged by others.
Our findings suggest that for adolescents suicidal behavior represents a means of establishing a connection between their personal distress and the others, through the act itself. Revenge, as discussed above, is one way to do that. Moreover, failure to establish that link appears to be a major factor responsible for keeping the adolescent in the same state of mind that led to the initial act and thus keeps him or her at risk for repeating it.

Limitations

This study has two main limitations. The first concerns its generalizability. Our purposive sampling procedure allowed us to include a wide sample of experiences among young men and women, with both single and multiple suicidal acts, of different durations of time since the act, and initially treated at 3 different hospitals. Nonetheless, our findings can be generalized only to young Italian adults, and attitudes may differ in other countries or even in other regions of Italy. However, our methodological precautions assure the trustworthiness of our findings. Because the socio-cultural environment has a strong influence on suicidal behaviors [31], further research needs to be conducted to compare and integrate perspectives from several countries. The second limitation is that all the participants were contacted through a healthcare facility where they underwent a period of psychiatric or psychological treatment. This might have affected the way that they retrospectively understood their act.

Conclusion and perspectives for future research

Adolescent suicidal behavior appears to be a relational act that aims to bridge a gap between the adolescents and their significant others in order to resolve a perceived impasse. Failure - by the others and by the therapist - to recognize this intent and take it into account appears to be a key factor for repetition of this behavior. Revenge assumes a particular role that appears to have been neglected by both clinicians and researchers until now, and further research should address this issue. Additionally, qualitative studies should be conducted to understand both caregivers' and health-care professionals' perspectives about the issue of revenge in adolescent suicide attempts.
2017-06-14T16:36:35.347Z
2014-05-06T00:00:00.000
{ "year": 2014, "sha1": "22ee7f72c499f3ab0f386173b2c38817d81685bb", "oa_license": "CCBY", "oa_url": "https://journals.plos.org/plosone/article/file?id=10.1371/journal.pone.0096716&type=printable", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "22ee7f72c499f3ab0f386173b2c38817d81685bb", "s2fieldsofstudy": [ "Psychology" ], "extfieldsofstudy": [ "Medicine" ] }
229324251
pes2o/s2orc
v3-fos-license
Genetic analysis reveals an east-west divide within North American Vitis species that mirrors their resistance to Pierce's disease

Pierce's disease (PD), caused by the bacterium Xylella fastidiosa, is a deadly disease of grapevines. This study used 20 SSR markers to genotype 326 accessions of grape species collected from the southeastern and southwestern United States, Mexico and Costa Rica. Two hundred sixty-six of these accessions, and an additional 12 PD resistant hybrid cultivars developed from southeastern US grape species, were evaluated for PD resistance. Disease resistance was evaluated by quantifying the level of bacteria in stems and measuring PD symptoms on the canes and leaves. Both Bayesian clustering and principal coordinate analyses identified two groups with an east-west divide: group 1 consisted of grape species from the southeastern US and Mexico, and group 2 consisted of accessions collected from the southwestern US and Mexico. The Sierra Madre Oriental mountain range appeared to be a phylogeographic barrier. The state of Texas was identified as a potential hybridization zone. The hierarchical STRUCTURE analysis on each group showed clustering of unique grape species. An east-west divide was also observed for PD resistance. With the exception of Vitis candicans and V. cinerea accessions collected from Mexico, all other grape species as well as the resistant southeastern hybrid cultivars were susceptible to the disease. Southwestern US grape accessions from drier desert regions showed stronger resistance to the disease. Strong PD resistance was observed within three distinct genetic clusters of V. arizonica, which is adapted to drier environments and hybridizes freely with other species across its wide range.

Introduction

[…] speculated that the failure of European grapes in Florida in 1850 was also due to PD [11]. Early breeding programs in the southeastern US utilized wild Vitis species that did not show typical PD symptoms and could survive longer in the field [12,13]. The grape breeding programs in central Florida and Mississippi used local grape species, mostly accessions of V. aestivalis and V. shuttleworthii, to develop many PD resistant hybrids capable of surviving field trials [4,[14][15][16]. Resistance to PD was also evaluated in Vitis rotundifolia [17,18], a grape species that has shown remarkable resistance to fungal diseases, a broad range of nematodes and grape phylloxera [19]. This taxon also possesses a broad array of genetic and morphological differences that distinguish it from all other Vitis species and creates sterile hybrids with them, leading to its inclusion in a separate subgenus or genus, Muscadinia. Symptoms of PD infection are superficially similar to acute water stress. They include marginal leaf necrosis, leaf scorch, leaf blade drop leaving attached petioles, uneven cane lignification, shriveled fruit, dieback and eventual death within one to five years [20]. Early PD-resistance breeding work lacks information about the pattern of bacterial infection, and there are no reports that quantify bacterial levels in infected plants [14][15][16]. Pierce's disease resistance was primarily evaluated by the presence of visible PD symptoms in the field under natural infection. Wild accessions and breeding lines with reduced symptoms and that survived longer in field trials were assumed to be resistant.
Based on field evaluations of breeding populations derived from three southeastern US grape species, it was postulated that resistance to PD in southeastern grape species is trigenic, with resistance dominant to susceptibility [15]. The presence of the pathogen in southern states and early breeding work led to the hypothesis that the causal agent of PD in grapes, X. fastidiosa subsp. fastidiosa, originated in the southeastern US and that the native grape species from that region coevolved with the pathogen and developed resistance [3,[14][15][16]. A challenge to this hypothesis emerged from the genomic analysis of X. fastidiosa isolates from different regions. Sequence analysis revealed that only X. fastidiosa subsp. multiplex, which does not cause disease in grapes, is native to the US [21]. Furthermore, subsp. fastidiosa, which causes disease in cultivated grapes, has limited genetic diversity within isolates collected from California, Texas and Florida, indicating a possible recent introduction [22,23]. Comparisons of genome sequences suggested tropical Central America as the origin of subsp. fastidiosa, which separated from the subsp. multiplex (native to the eastern US) a minimum of 15,000 years ago and possibly more than 30,000 years ago [21,22]. Greater genetic diversity was observed in the isolates of subsp. fastidiosa from Costa Rica, and the isolates from the US were nested within that group. These results raised important questions about the origin of PD resistance in the southeastern US grape species given the absence of subsp. fastidiosa. From an evolutionary perspective, 150-200 years of exposure to the pathogen may not be enough time for grape species native to the eastern US to evolve resistance to this pathogen. The 2006 discovery of strong PD resistance in grape species from northeastern Mexico [24] also cast doubt on the hypothesized southeastern US origin for subsp. fastidiosa. The resistant accession, b43-17, collected in Monterrey, Mexico, appears to be a hybrid of V. arizonica and V. candicans (syn. V. mustangensis) [25]. All plants in the F1 population from a cross of susceptible V. rupestris cv. A. de Serres and b43-17 were resistant to X. fastidiosa. A major resistance locus that segregated 1:1 in a pseudo-backcross population was identified on chromosome (chr) 14, and it was named PdR1 [24,26]. A recent study by Riaz et al. [27] identified nine genetically distinct PD resistant accessions collected from Mexico and the bordering states of Texas and Arizona. Three additional accessions (b41-13, b40-14 and T03-16) that are the subject of another study [28] were also found to have strong resistance to PD. The PD resistance of all 12 accessions genetically mapped to chr 14 at a similar genomic position to where the PdR1 locus from b43-17 mapped. The widespread geographic distribution of a similar resistance locus across different grape species in the southwestern US and Mexico raises questions about the origins of PD resistance in the southwest US and Mexico, how it differs from PD resistance in the southeastern US grape species, and what evolutionary forces shaped PD resistance in Vitis species from North, Central and South America. The North American Vitis taxonomy and nomenclature is complex due to widespread hybridization among sympatric species and morphological variation within species [29,30]. In the recent 'Flora of North America' (http://www.efloras.org/florataxon.aspx?flora_id=1&taxon_id=134649), 19 grape species were recognized.
However, it did not include Mexican grape species that have been described by Comeaux [31,32] and Comeaux and Lu [33], and that deserve special attention because Mexico exhibits great genetic diversity of flora and fauna within a complex transition zone where Nearctic and Neotropical biotas overlap [34,35]. The grape germplasm from this region reflects historical speciation events promoted by environmental and geological changes that occurred over thousands of years. The mountain system of the Sierra Madre in Mexico, the Rockies in the US and the deserts of Sonora and Chihuahua that cover both countries contain a rich biodiversity of grape species. Vitis species, with the exception of the muscadine species, are all interfertile, but remain distinct due to differences in their habitat preference, physical geographical barriers, and phenological differences in flowering dates. The desert regions of the southwestern US and Mexico create significant geographical barriers between isolated mountain ranges that provide unique niches for grape species where water is available. A better understanding of the phylogenetic relationships of grape germplasm from different geographical regions in the southwest and southeastern US and Mexico would help shed light on the origin and dispersal of PD resistance in the grape species collected from this important transition zone. In this study, we used 20 Simple Sequence Repeat (SSR) markers to genotype 346 accessions representing 19 grape species collected from Costa Rica, Mexico, the southwestern states bordering Mexico (California, Arizona, New Mexico, and Texas), and the southeastern coastal states where PD greatly limits the cultivation of V. vinifera grapes. Two hundred and sixty-six accessions were phenotyped using an optimized greenhouse-based screening method (high temperature and water stress) to evaluate their resistance to X. fastidiosa. Bacterial amounts in the stem were quantified and PD symptoms (uneven cane lignification, leaf scorch and leaf loss) were recorded. Germplasm previously identified as PD resistant was also included in the study. Finally, allelic data from 20 SSR markers were used for genetic analysis. The first objective of the study was to evaluate the historical relationships among different grape species and to determine if there is a genetic continuum between eastern and western US Vitis germplasm. The second objective was to determine whether geographic and taxonomic associations exist for PD resistance. The final goal was to compare previously reported PD resistant breeding lines from the southeastern US to the southwestern US germplasm under similar screening conditions.

Tables 1 and S1 Table provide summary and detailed information, respectively, on the 346 accessions within 19 grape species. Species designations were made based on morphological characteristics of the field grown plants. All accessions reported in this study are maintained either by the Department of Viticulture and Enology, University of California, Davis, CA (UCD), or the National Clonal Germplasm Repository, USDA-ARS, Davis (NCGR-Davis), or both. Germplasm from Mexico was acquired as seeds or cuttings by H.P. Olmo in 1961, and later by B.L. Comeaux in 1991. US grape germplasm was collected as cuttings across the southern states during collection trips from 1997 to 2016. Designated Davis Vitis Identification Tag (DVIT) names were used for accessions maintained by NCGR-Davis.
Global positioning system (GPS) coordinates of the collection site are provided for recently collected material housed at UCD (S1 Table). For historic collections maintained at the NCGR-Davis, location coordinates are not available. In these cases, location was determined from the collection notes when possible.

Disease evaluations

A total of 266 accessions were evaluated for PD resistance using the greenhouse-based screen described in earlier studies (Table 1) [27,36]. Data for an additional 80 accessions were not reported for various reasons: either they failed to propagate, had too few replicates or died during greenhouse screening. In cases where accessions were from a seed lot, three to five seedlings were selected for testing. A minimum of four biological replicates of each accession were tested. A total of 19 screening experiments were carried out from 2011 to 2018. Grape accessions with known strong and intermediate resistance to PD and the susceptible V. vinifera cultivar Chardonnay (un-inoculated and inoculated) were used as reference plant controls (hereafter called reference plants) in every screen. The use of these reference plants allowed us to compare screen results across different experiments and years. Plants were propagated from hardwood or herbaceous cuttings taken from plants growing in the field at UCD or at the NCGR-Davis. Hardwood cuttings were soaked in water overnight, placed in a callusing medium (60:40 ratio of perlite:vermiculite) and kept in a dark room at 100% relative humidity and 29˚C for two weeks. After two weeks, cuttings were placed in 5 x 5 x 15 cm paper sleeves with a 1:1 ratio of callus medium:peat moss after trimming excess roots and dipping the exposed portion in wax to prevent water loss. The sleeved cuttings were kept for an additional 3-4 days in the dark under the conditions described above and were then transferred to beds in a fog room with 27˚C bottom heat for two weeks before transplanting to pots. Actively growing plants were propagated using two- to three-node herbaceous cuttings that were rooted in 2 x 2 x 6 cm cellulose plugs in a fog room with 27˚C bottom heat. Both herbaceous and hardwood rooted cuttings were transplanted to 1 L pots with a 1:1:1 Yolo sandy loam soil/perlite/peat mix. To ensure uniform growth at the time of inoculation, after about 4 weeks of growth all plants were cut back to two buds and regrown. As the main shoots grew to 1 m, all lateral shoots were removed routinely to promote better air circulation and light penetration. Plants were fertigated with a 25% Hoagland's solution (Sigma-Aldrich, St. Louis) via a drip system (130 ml daily per plant). Plants received both supplemental and ambient light for an average of 18 h per day. A X. fastidiosa isolate collected from Yountville, Napa County, California was used for all screens. The bacteria were maintained in greenhouse-grown susceptible Chardonnay plants and isolated using the procedures described by Krivanek et al. [37]. For inoculations, actively growing bacteria were washed from Petri plates with ddH2O, and the cell suspension was standardized to an absorbance of 0.25 at 600 nm (approximately 6 × 10⁸ CFU/ml as determined by culture plating). Plants were needle inoculated [38] twice about 10-15 cm above the base of each shoot with a total of 20 μl of bacterial suspension. The plants were sampled to quantify the bacterial amount 10 to 14 weeks post inoculation, when the susceptible reference plants started showing leaf scorch and uneven cane lignification.
For each test plant, a 0.5 g section of stem tissue was taken 30 cm above the point of inoculation and placed into a grinding bag (Agdia, Elkhart, Indiana, USA) with 5 ml of phosphate-buffered saline (PBS), 0.05% Tween, and 2% soluble polyvinylpyrrolidone (PVP-40) buffer (Nome et al. 1981). Samples were lightly crushed with a hammer and further processed using a HOMEX 6 mechanical homogenizer (Bioreba, Longmont, Colorado, USA), and the resulting extract was stored at -20°C before ELISA testing. Disease severity was assessed by three different methods at 10-14 weeks post inoculation. The mean percentage area of leaf scorch and leaf loss (LS/LL) on the four leaves above and nearest to the point of inoculation (POI) was measured. The degree of cane maturation in terms of green islands and necrosis development, designated as the cane maturation index (CMI), was measured as described in an earlier study [37]. Finally, ELISA was used to measure the X. fastidiosa levels in the stem [36]. To obtain homogeneous variances and normally distributed residuals, ELISA data were natural log transformed. All statistical analysis was performed using JMP Pro 14 software (Copyright 2018, SAS Institute Inc.). The reference plant controls across the 19 screens were analyzed to determine the variability of ELISA values, and a two-way ANOVA was carried out with 'genotype', 'experiment take down (TK) date', and the interaction of the two factors. Least square means comparisons were made with Tukey's test based on a least significant difference (LSD) for the reference plants and 19 screens. The ELISA values, CMI, and LS/LL data of wild accessions were analyzed with the inclusion of the reference plants to adjust for variation among the screens.

Genotyping

DNA was extracted from young leaf tissue using a modified CTAB protocol as described earlier, with the exclusion of the RNase step [26]. Standard alcohol DNA precipitations were carried out following a single chloroform-isoamyl alcohol wash; DNA was dissolved in 1X TE buffer and stored at -20°C for further use. A total of 24 SSR markers were used to develop fingerprint data (S2 Table). Amplifications for each marker were carried out separately. The PCR amplifications were performed in 10 μl reactions following the protocols described in an earlier study [27]. Amplified products were combined depending on the amplicon size and fluorescent labels of the markers and run on an ABI 3500 capillary electrophoresis analyzer with the GeneScan-500 LIZ Size Standard (Life Technologies, Carlsbad, California, USA). Eight samples were used as an internal reference on each plate to standardize the allele calls between different runs on the ABI 3500. Allele sizes were determined using GeneMapper 4.1 software (Applied Biosystems, USA).

Genetic diversity analysis

STRUCTURE V2.3.1 was used to infer the number of clusters [39]. The algorithm was run for a range of genetic clusters (K) from 1 to 20 using the admixture model, and it was replicated 20 times for each K. Each run was implemented with a burn-in period of 100,000 steps followed by 100,000 Markov chain Monte Carlo replicates using no prior information and assuming correlated allele frequencies. Structure Harvester [40] and CLUMPPAK [41] were used to process the STRUCTURE results. The optimum value of K was obtained by calculating the Δk value [42]. The bar plots were drawn with STRUCTURE PLOT (2.0) [43].
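The Δk criterion cited above is Evanno's delta K, computed from the spread of ln P(D) across replicate runs at each K. A minimal sketch of the computation is shown below; the ln P(D) values are invented for illustration (the study's actual values are summarized in S5 Table).

```python
# Evanno delta K: |mean L(K+1) - 2*mean L(K) + mean L(K-1)| / sd(L(K)),
# where L(K) collects ln P(D) over replicate STRUCTURE runs at each K.
import numpy as np

rng = np.random.default_rng(0)
lnP = {k: rng.normal(mu, sd, 20)   # 20 replicate runs per K, made-up values
       for k, (mu, sd) in {1: (-52000, 30), 2: (-48000, 40),
                           3: (-47800, 400), 4: (-47700, 600)}.items()}

def delta_k(k: int) -> float:
    num = abs(lnP[k + 1].mean() - 2 * lnP[k].mean() + lnP[k - 1].mean())
    return num / lnP[k].std()

for k in (2, 3):
    print(f"K = {k}: delta K = {delta_k(k):.1f}")  # peaks sharply at K = 2
```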
Simple matching distance and principal coordinate analysis (PCoA) were carried out with DARwin software (version 5.0.158) [44]. For the genetic diversity analysis, GenAlEx 6.5 software was used to calculate the number of alleles observed (Na), observed heterozygosity (H_O), expected heterozygosity (H_E), coefficient of inbreeding (F_IS), genetic differentiation coefficient (F_ST), and gene flow (Nm) [45]. All maps showing STRUCTURE assignments and PD evaluation results were created using ArcGIS® software by Esri (ArcGIS® and ArcMap™, www.esri.com) [46], using a World topographic map as the base layer (Sources: Esri, DeLorme, HERE, TomTom, Intermap, increment P Corp., GEBCO, USGS, FAO, NPS, NRCAN, GeoBase, IGN, Kadaster NL, Ordnance Survey, Esri Japan, METI, Esri China (Hong Kong), swisstopo, MapmyIndia, and the GIS User Community).

Reliability of the greenhouse assay for disease evaluation

In this study, 19 screens from 2011 to 2018 were carried out to evaluate 266 accessions for PD resistance. To compare the results across multiple screens, seven accessions [two resistant (b43-17, U0505-01), three intermediate (U0505-35, Roucaneuf, Blanc du Bois), and two susceptible (U0505-22, Chardonnay)] were used as reference plants in every experiment. The S1 Fig shows the seven reference plants at 12 weeks post inoculation. The uninoculated Chardonnay control did not have measurable bacteria in the 19 screens and was excluded from further analysis. The variability of greenhouse temperature during the year has an impact on disease expression and quantifiable bacterial levels (unpublished data). 'Genotype', 'experiment take down date', and the interaction of these two factors were statistically significant based on the two-way ANOVA (Table 2A and 2B). These results indicate that the greenhouse-based ELISA screen is a reliable method to distinguish PD resistant and susceptible accessions with reproducible results that could be combined and compared across multiple screens.

Disease evaluations of diverse germplasm

The PD evaluation results are listed in the S1 and S3 Tables and displayed on maps (Fig 1 and S3 Fig). The categories used to display the ELISA results were: 1 (6.9-9.1), 2 (9.2-11.1), 3 (11.2-13.1), 4 (13.2-15.1), and 5 (above 15.2). For all three parameters the same color scheme was used for the five categories (dark green = 1, light green = 2, yellow = 3, orange = 4, and burgundy = 5). Results indicate that there is an east-west divide for PD resistance formed by the Sierra Madre Oriental and central Texas. Most accessions from eastern Mexico, the bordering state of Texas and the southeastern US had higher values for all three resistance parameters in comparison to accessions from the southwestern US and Mexico. These regions have contrasting climates. Eastern Mexico and the Gulf Coast states are more tropical with higher humidity and rainfall in comparison to the southwestern US and northwestern Mexico, which have hotter and more arid conditions. Overall, the state of Texas possesses a rich Vitis flora with 14 species and two natural hybrids, and many sympatric zones where introgressive forms appear. The state also has a mixture of resistant, intermediate and susceptible accessions (S3 Table, Fig 1 and S3 Fig). Fifty-two accessions grouped into category 1 with ELISA values ranging from 6.9-9.1 (S1 and S3 Tables). This group contained b43-17, ANU05, ANU71, b40-29, b46-43, SAZ7, and T03-16, all of which have PD resistance mapped to chr14. Forty-seven accessions in this group were called V. arizonica based on their morphological features and were collected from Mexico and the bordering states of Arizona, New Mexico, and Texas (S1 and S3 Tables).
A majority of the accessions in this group had lower CMI and LS/LL scores than b43-17, with the exception of two accessions (AZ11-107, ANU71), which had better cane maturation but higher scores for LS/LL. A group of 63 accessions had ELISA values in category 2, including five accessions with PD resistance on chr14. The CMI category values ranged from 1.0-4.0, with five accessions having LS/LL values of 4.0-5.0. Interestingly, the accession ANU46 collected from Arizona had high values for both CMI and LS/LL, indicating that lower ELISA values do not necessarily mean that the plant will have reduced cane and leaf symptoms. Two accessions, ANU63 and T03-06S02, had good cane maturation but higher scores for LS/LL (S1 and S3 Tables), suggesting that both parameters of disease manifestation are independent of each other, and that there is genetic variation among different accessions in the ability to lignify the stem evenly and to display leaf scorch symptoms. Twenty-seven accessions in this group were identified as V. arizonica based on morphological features. Almost all of the accessions in this group were collected from Mexico and bordering states, with the exception of two V. aestivalis accessions (DVIT1416, DVIT1609) from Florida and Illinois (S1 and S3 Tables). Both accessions had moderate scores for CMI and LS/LL. A group of 57 accessions had category 3 ELISA results. Only 18 accessions in this group were identified as V. arizonica based on their morphology; the other accessions were within 12 other grape species (S1 and S3 Tables). The CMI results for these 57 accessions were in all five classes, from very good cane maturation (1.0) to necrotic spots and multiple green islands (5.0). There were accessions with lower CMI and higher scores for LS/LL. Three accessions of V. girdiana also displayed a wide range of responses from resistant to susceptible. Two accessions from this species were identified as having the PdR1 locus in a previous study (S1 Table) [27]. All tested accessions of V. rupestris had high bacteria levels in the stem and high values for CMI and LS/LL, indicating susceptibility to PD. The accessions identified as V. arizonica based on leaf morphology were the most resistant, with low bacteria levels in the stem and low CMI and LS/LL values. Among the 110 tested accessions collected from Mexico and bordering states, 82 had ELISA values in category 1 or 2 and, with few exceptions, had very low CMI and LS/LL values (S1 and S3 Tables). Only nine accessions of V. arizonica had higher ELISA, CMI and LS/LL values, and most of these were collected from Utah. Table 3 presents the greenhouse screening results for 12 reportedly resistant accessions released from southeastern US grape breeding programs. Their designation as PD resistant was based on survival in field trials and the lack of PD symptoms under natural infection. Two accessions (Blanc du Bois and Roucaneuf) were used as intermediate reference plants in all of this study's greenhouse experiments. With the exception of Florilush and Midsouth, all other accessions had high ELISA values (categories 4 and 5), indicating that they can tolerate high levels of bacteria. However, they had variable CMI and LS/LL values; none were devoid of LS/LL symptoms, and they had moderate to severe necrotic green islands on the stem. Only Florilush and Midsouth lignified normally, with some leaf scorch and leaf loss. Blanc du Bois and Roucaneuf were similar to each other in terms of ELISA and LS/LL values, but their CMI values varied.
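For display purposes, least square means like those above are simply binned into the five categories; the sketch below reproduces that binning with pandas, using the bin edges from the S3 Table definitions and a few invented example values.

```python
# Assign ELISA least square means to the five display categories
# (1: 6.9-9.1, 2: 9.2-11.1, 3: 11.2-13.1, 4: 13.2-15.1, 5: above 15.2).
import pandas as pd

lsm = pd.Series([7.4, 9.8, 12.0, 14.6, 16.3], name="elisa_lsm")  # examples
bins = [6.9, 9.15, 11.15, 13.15, 15.15, float("inf")]  # edges between ranges
category = pd.cut(lsm, bins=bins, labels=[1, 2, 3, 4, 5], include_lowest=True)
print(pd.DataFrame({"elisa_lsm": lsm, "category": category}))
```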
The three parameters that were used to determine PD resistance were also significantly different when comparing the southeastern and southwestern grape species (Table 4A). The LSM comparisons clearly separated the southwestern species, which had lower ELISA, CMI and LS/LL values (Table 4B).

Genotyping of Vitis species

Initially, 346 accessions were genotyped with 24 nuclear SSR markers. A total of 20 accessions were excluded from further analysis due to missing data at 10 or more loci. Similarly, four markers were excluded due to difficulty in resolving single base pair variations or missing data, resulting in a set of 326 individuals genotyped at 20 loci. S2 Table details the chromosome assignment, % missing data, number of alleles observed, and observed and expected heterozygosity for each locus across the 326 accessions. S4 Table provides the allelic data for the 326 accessions. Overall, the 20 markers represented 15 of the 19 grape chromosomes, and only 2.2% of the data were missing. The number of alleles at each locus ranged from 10 (VVIq52) to 50 (VVIv67). The observed heterozygosity was lower than the expected heterozygosity for all markers (S2 Table).

Genetic diversity and clustering analysis

The genetic diversity of the 326 accessions was evaluated with a model-based clustering method implemented in the program STRUCTURE and by PCoA. The delta K value calculated from the output of STRUCTURE was 650 at K = 2, compared to less than 3 at all other K values, indicating division of the genotypes into two groups (S5A Table).

Table 3. List of 12 southeastern US varieties (with the exception of Roucaneuf) reported to be Pierce's disease (PD) resistant based on the survival rate and visual symptoms of leaf scorch under natural disease pressure in the field. The two italicized and bolded accessions are reference plants, included in every greenhouse screening experiment. The other ten accessions were tested in earlier years of the PD resistance breeding program under greenhouse conditions, and bacterial populations were quantified by ELISA. Cane maturation index (CMI) and leaf scorch/leaf loss symptoms were also recorded.

Table 4. Two-way ANOVA of the three groupings of three Pierce's disease evaluation parameters: ELISA readings of bacterial levels in the stem (CFU/ml), cane maturation index (CMI), and leaf scorch/leaf loss (LS/LL). a. There was significant variation among the three groups for the three parameters. b. Least square means (LSM) comparisons of the three Pierce's disease evaluation parameters based on ELISA readings of bacterial levels in the stem (CFU/ml), cane maturation index (CMI), and leaf scorch/leaf loss (LS/LL) using Tukey's test for the three groups. Results significantly separate the southwestern US species from the southeastern US grape species across all three parameters of disease evaluation. Confidence levels (CL) were established at 95%.

The Q-values (the proportion of a given individual's genome that originated from a given population) assigned by STRUCTURE for the 326 accessions in two groups are displayed in S6 Table. A threshold of 0.80 was selected to assign accessions to a particular group: a total of 140 accessions were assigned to group 1, 144 accessions to group 2, and 42 accessions were not fully assigned to either group. Fig 2 displays the geographic distribution of the STRUCTURE assignments. Most of the accessions that were not fully assigned to either group were collected from eastern Mexico and the bordering state of Texas (S1 Table). Texas seems to be a major hybridization zone where the ranges of many grape species overlap.
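The Q ≥ 0.80 assignment rule just described can be sketched in a few lines; the accession names below come from the text, but the Q-values are invented for illustration.

```python
# Assign each accession to a STRUCTURE group if its largest membership
# coefficient reaches 0.80; otherwise flag it as admixed.
import pandas as pd

q = pd.DataFrame({"Q1": [0.95, 0.12, 0.55], "Q2": [0.05, 0.88, 0.45]},
                 index=["ANU05", "DVIT1416", "T03-16"])  # illustrative Q-values

def assign(row: pd.Series, threshold: float = 0.80) -> str:
    best = row.idxmax()
    return best if row[best] >= threshold else "admixed"

q["group"] = q.apply(assign, axis=1)
print(q)
```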
S7 Table summarizes the gene flow and genetic differentiation statistics for the two groups. Interestingly, the average gene flow (Nm) for the southeastern group was 3.340, a much higher value than that of the southwestern group, indicating that more gene flow is prevalent; however, it does not increase the genetic differentiation. On the other hand, the southwestern group had lower gene flow but higher genetic differentiation (S7 Table). Hierarchical STRUCTURE analysis was carried out to identify diversity within each group. A threshold of 0.50 was used to assign accessions in order to develop two data sets for the second round of STRUCTURE. A range of genetic clusters (K) from 1 to 10 using the admixture model and 30 replications for each K were used for both runs, and a threshold of 0.80 was selected to assign accessions to a particular group. The delta K values indicated four distinct groups within the southeastern species (S5B Table) and three distinct groups within the southwestern species (S5C Table). S6 Table presents the Q-values assigned by STRUCTURE for the first and second rounds of analysis. Fig 3A shows the bar plots of the first and second rounds of the STRUCTURE results. The four southeastern species groups consisted of V. cinerea/berlandieri, V. candicans/monticola, V. aestivalis/labrusca, and V. cinerea/tiliifolia collected from Mexico. Nineteen samples were admixed, and 12 of them were also shown to be admixed in the first round of analysis. The southwestern group divided into V. arizonica/V. girdiana, V. arizonica accessions collected from the Big Bend area of Texas, and V. arizonica collected from Mexico, with 28 admixed samples that were not assigned to any group. Principal coordinate analysis explained 12.38% of the variation among the 326 accessions on two axes (S4 Fig). The color coding of the STRUCTURE assignment was used for the PCoA display. The PCoA analysis was also carried out on each group, and the results were consistent with the Bayesian clustering analysis (Fig 3B). Four distinct groups were identified within the southeastern grape species that explained 14.50% of the variation, with admixed genotypes in between the groups. Similarly, the southwestern group divided into three sub-groups that explained 15.77% of the variation. Accessions that were not fully assigned to a STRUCTURE group were positioned between the clusters in the PCoA analysis. The accessions collected from the Big Bend region (group 2.2) and from Mexico (group 2.3) were called V. arizonica based on their morphological features. It is most likely that they are complex hybrids of other species that are potentially not represented in this study set. Table 5 presents the summary of PD evaluation results across the groups identified in the second round of the STRUCTURE analysis. Within the southeastern group, V. cinerea accessions collected from Mexico and V. candicans were the only species with strong resistance to PD. The accessions of V. cinerea collected from Texas were genetically distinct and susceptible to the disease (Table 5, Fig 3B). Similarly, accessions of V. aestivalis and V. labrusca were also susceptible to the disease when tested under these greenhouse conditions. In the southwestern group, accessions of V. arizonica appeared in all three clusters and were highly resistant to PD. They were collected from different geographic regions. The accessions of V. girdiana and V. treleasei showed moderate resistance to the disease.
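For reference, gene flow estimates of this kind are conventionally derived from the differentiation coefficient under the island model; assuming the standard estimator reported by GenAlEx was used (the text does not state the formula), the relation is:

```latex
N_m \;\approx\; \frac{1}{4}\,\frac{1 - F_{ST}}{F_{ST}}
```

Under this relation, the southeastern group's Nm of 3.340 corresponds to an F_ST of roughly 0.070, consistent with its lower genetic differentiation.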
Discussion

In this study, we surveyed 326 grape accessions of 19 grape species with molecular markers and combined population genetic diversity information with the results of greenhouse-based PD resistance evaluations to determine the range of PD resistance in wild grape species. Historic breeding lines from the southeastern US, reported to be resistant to the disease, were also tested. Pierce's disease resistance status was quantified by measuring the bacterial levels in the stem and recording cane maturation, leaf scorch and leaf loss symptoms. These genetic diversity and PD evaluation data revealed major trends.

Genetic divide between eastern and western US grape species and the presence of hybrid zones

Two distinct genetic groups were identified with a Bayesian clustering approach based on the allele frequencies of SSR markers (Figs 1 and 3, S5 and S6 Tables). Group 1 consisted of grape species native to the southeastern US, eastern Mexico and Costa Rica, and group 2 primarily consisted of grape species accessions from the southwestern US and Mexico. Principal coordinate analysis also revealed two main groups of species (S4 Fig). Geographically, the two groups were separated by the Sierra Madre Oriental mountain range, which extends along the eastern side of Mexico.

[Fig 3. a) Bright cyan represents group 1 (K1 = southeastern US grape species) and moderate red represents group 2 (K2 = southwestern US grape species). STRUCTURE analysis on each group resulted in four distinct sub-groups within K1 and three sub-groups within K2; for details of membership coefficient values see S6 Table. b) Principal coordinate analysis for each group gave results comparable to the Bayesian analysis; the color coding of the STRUCTURE assignment was used for the PCoA display.]

This east/west dividing line extends north through central Texas and separates the drier western US from the wetter eastern US. The current topology of North America, resulting from geologic events over millennia, gave rise to the current coastal and central plains, extensive mountain ranges that stretch thousands of miles (the Rocky Mountains, Sierra Madre) and desert regions (Sonora, Chihuahua) [47]. This topology also created unique niches for different grape species kept separate by phenological differences in flowering dates and water availability [30,48,49]. The results of hierarchical STRUCTURE and PCoA analysis on each group were comparable and further split each cluster into distinct species groups with overlapping habitats (Fig 3A and 3B). We identified four sub-clusters within the southeastern grape species: V. cinerea/berlandieri, V. candicans/monticola, V. aestivalis/labrusca, and V. cinerea/tiliifolia collected from Mexico and Costa Rica. These species are reported to be phylogenetically close to each other and also exhibit greater overlap of habitat [30,50,51]. However, this is the first time we have seen differentiation among accessions of V. cinerea that were collected from different areas, indicating regional types or varieties. The accessions of V. cinerea collected from Mexico represent a very unique gene pool. Within the southwestern grape species, three sub-clusters were identified, and accessions of V. arizonica were present in all of them, indicating a higher level of variability than has been reported earlier [51]. Vitis arizonica hybridizes without difficulty with V.
girdiana in its western range, with V. riparia in its northeastern range, and with V. acerifolia, V. candicans and V. cinerea in its eastern range. This hybridization makes it difficult to determine where species boundaries exist, to what extent intraspecies variation occurs, and whether these sympatric species are giving rise to new species. It can be very difficult to distinguish many of these hybrid forms based on morphological features alone. A thorough taxonomic and genetic analysis of the southwestern US Vitis species and the Mexican Vitis is required to gain a better understanding of this important germplasm. The overall east-west genetic divide of grape species that we have observed in this study has also been identified in other plant and animal taxa. Escalante et al. [52] analyzed 40 Mexican plant and animal taxa whose range extends to both the Nearctic and Neotropical regions, and also identified two main clades with an east-west pattern. The plant and animal taxa from the Mexican Gulf, from Tamaulipas to Yucatan, were in one clade, which forms the lowland region of eastern Mexico along the Caribbean coastline and extends as far north as the southern US. The other clade included biota from central and western Mexico [52]. We identified a similar distribution pattern for grape species. It is interesting to note that the climatic conditions for the species in group 1 are more tropical with more precipitation, while group 2 consisted of grape species from more arid climates (Fig 2). Results from this study indicate that the Sierra Madre Oriental (to the east, and often considered an extension of the southern Rocky Mountains) is a major physical barrier that has kept Mexican grape species apart for many thousands of years. The Sierra Madre Occidental (to the west), along with the Sierra Madre Oriental, encloses the Mexican plateau that merges with the Basin and Range provinces of the southwestern US. This region has an extraordinary topography with numerous small mountain ranges that act as rain collectors and are separated by the desert plains of Sonora and Chihuahua. It is not surprising to find that mountain ranges and dry desert regions act as natural barriers to gene flow. We also identified lower levels of gene flow among southwestern grape species in comparison to the grape species from the southeast, which have higher levels of habitat overlap and more genetic continuum (S7 Table). Migrating birds and their berry feeding are also a primary factor in the dispersal of grape seeds over long distances, helping to expand the range of grape species. Four major bird flyways or migratory routes exist in North America (https://www.fws.gov/refuge/arctic/birdmig.html). The central flyway route covers Texas and Arizona, where many Vitis species exist and where we detected several accessions that were hybrids of two or more grape species. This is also the region where the ranges of different grape species overlap, providing further opportunities for hybridization and the development of new variant forms capable of adapting to climatic niches.

Geographic pattern of resistance to PD in grape species

A total of 266 accessions were evaluated for PD resistance using an established greenhouse screening protocol (Table 5, S1 and S3 Tables). The presentation of PD screening results on the map demonstrates an east-west axis with stronger resistance to PD present in western grape accessions in terms of lower X.
fastidiosa levels in the stems, better cane lignification, and reduced leaf loss and leaf scorch (Fig 1, S1 and S3 Tables). Most accessions of southeastern grape species (with the exception of V. candicans and the V. cinerea collected from Mexico) had higher levels of X. fastidiosa in the stems and more severe symptoms in the stems and leaves, all of which were intensified under the high temperature greenhouse screen conditions. A similar trend was observed in PD resistant breeding lines from the southeastern US, which had higher levels of X. fastidiosa and acute symptoms on the stem and leaves (Table 3). These breeding lines in their native habitat can have mild symptoms and be long-lived in the field [4,14-16]. A possible explanation for this disparity is that southeastern US grape species are not resistant to PD, but instead are tolerant, and the stem and leaf symptoms of PD are suppressed by wetter, more humid conditions. After infection, X. fastidiosa inhabits and spreads within the xylem. The infection initiates a plant response that results in vascular occlusions, predominantly tyloses, to limit bacterial spread. This response also decreases water transport by up to 90% in susceptible plants [53]. Warm humid climates promote more vegetative growth, which may allow infected plants to dilute the infection and tolerate blocked vessels for a longer time. In general, PD symptom development is highly correlated with the number of clogged vessels [54] and with higher levels of bacteria in the stem [36]. In this study, ELISA values had a 76% correlation to CMI and 68% to LS/LL. However, we found many accessions from different species that had high ELISA values but lower values for CMI and LS/LL, and vice versa (S1 and S3 Tables). These results indicate that there are differences in the manifestation of PD symptoms among different grape species, and more research is needed to understand X. fastidiosa's pathogenicity, particularly in grape species endemic to regions with high rainfall during summer months. Vitis candicans collected from Texas and V. cinerea accessions collected from Mexico were the exceptions in the southeastern grape species group, possessing accessions with PD resistance in the greenhouse screen. Vitis candicans is known to have a preference for warm humid conditions [51] and is found abundantly with vigorous growth in central to eastern Texas [30]. In this study we tested twelve accessions of V. candicans that were collected during field collection trips in the late 1990s and early 2000s. All of them were promising, with strong PD resistance. Future studies should focus on this valuable source of resistance to PD and other grape pests. Among the southwestern grape species, V. treleasei (a glabrous form of V. arizonica) [30] and V. girdiana showed moderate PD resistance, while the strongest PD resistance was found in accessions of V. arizonica, which had 82 out of 110 accessions with low bacterial levels and low CMI and LS/LL values (Table 5, S1 and S3 Tables). Nine accessions with the PdR1 locus, identified in earlier studies, were also pure forms of V. arizonica or apparent hybrids with this species, which ranges across the arid southwestern US and northern and northwestern Mexico [27,37,55]. Results from this study show that the accessions of V. arizonica collected from different geographical regions of the southwestern US and Mexico belonged to three distinct genetic subgroups, and all of them were highly resistant to PD. In contrast, accessions of V.
cinerea had one genetic subgroup resistant to the disease while the other was susceptible (Table 5). Forms of V. arizonica are morphologically adapted to droughty and xeric conditions, but they are most often found in wet areas within these isolated xeric regions, such as springs, creeks, and catchment basins. The species hybridizes with other grape species when their ranges overlap or are connected by the migratory flight patterns of birds. It is important to collect more germplasm from Mexico and Central America to carry out phylogenetic analysis as well as disease evaluations with this germplasm to confirm the results of this study, and to gain more insight into the extent and spread of PD resistance in different geographic regions of Mexico and Central America.

Conclusions

Grape species from Mexico and the southwestern US do not exist on a genetic continuum: the Sierra Madre Oriental mountain range acts as a phylogeographic barrier to the south, and to the north, central Texas and the Great Plains separate the wet from the arid climates of the US. Vitis candicans and Mexican V. cinerea accessions were the only grape species with strong PD resistance in the southeastern group. Among southwestern grape species, Vitis arizonica accessions displayed strong PD resistance despite genetic variation, and V. girdiana and V. treleasei showed moderate resistance.

S1 Table. List of 346 accessions and their species designation based on morphology. A total of 20 accessions were excluded from the genetic analysis due to lack of data at more than four markers. Greenhouse screening for Pierce's disease was completed for 266 accessions. Least square means were calculated for the bacterial count (colony forming units, CFU), cane maturation index (CMI) and leaf loss/leaf scorch (LS/LL). Thirteen bold and underlined accessions were found to have PD resistance on chromosome 14 in previous studies. (XLSX)

S2 Table. List of 24 SSR markers with chromosome designation and fluorescent label. The % missing data, number of observed alleles (Na), observed heterozygosity (Ho), and expected heterozygosity (He) were determined with the 326 accessions that were included in the final analysis. (XLSX)

S3 Table. List of 266 accessions with species designation and collection location that were tested for PD resistance under greenhouse conditions. Least square mean values for CFU/ml, CMI index and LS/LL from S1 Table were divided into 5 categories: 1 = 6.9-9.1, 2 = 9.2-11.1, 3 = 11.2-13.1, 4 = 13.2-15.1, 5 = above 15.2. The five categories for the CMI and LS/LL scores were: 1 (0-1), 2 (1.1-2.0), 3 (2.1-3.0), 4 (3.1-4), and 5 (4.1 and above). Accessions are organized from lower to higher CMI values. Underlined and bold accessions were identified to carry the PdR1 resistance locus. (XLSX)

S4 Table. Simple sequence repeat marker data for the 326 accessions that were included in the genetic analysis. ND is no data. (XLSX)

S5 Table. Delta K value as a function of K based on 20 runs for the first round of STRUCTURE analysis indicates two genetic clusters (a) of southeastern and southwestern grape species. STRUCTURE analysis was run on each group with 10 runs and 30 replications. The delta K value for the southeastern group indicates four genetic groups within 170 accessions (b) and three genetic groups within 156 accessions of southwestern origin (c). (XLSX)

S6 Table. Population assignment to two groups and Q-values were determined with the STRUCTURE program. Hierarchical STRUCTURE analysis on each group clearly divided each group into clusters of distinct species.
The K1 cluster divided into four groups and the K2 cluster into three distinct groups.
2020-12-20T06:18:04.894Z
2020-12-18T00:00:00.000
{ "year": 2020, "sha1": "c562ef01c580efe80c54471ca470da7a896198a7", "oa_license": "CCBY", "oa_url": "https://journals.plos.org/plosone/article/file?id=10.1371/journal.pone.0243445&type=printable", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "5fe09c6df7c9dae52adff2afca9f8d97510a3651", "s2fieldsofstudy": [ "Agricultural and Food Sciences", "Biology", "Environmental Science" ], "extfieldsofstudy": [ "Medicine", "Biology" ] }
256050465
pes2o/s2orc
v3-fos-license
ICU-Managed Patients' Epidemiology, Characteristics, and Outcomes: A Retrospective Single-Center Study

Background: Resources are limited, and it is exceedingly difficult to provide intensive care in developing nations. In Somalia, intensive care unit (ICU) care was introduced only a few years ago.

Purpose: In this study, we aimed to determine the epidemiology, characteristics, and outcome of ICU-managed patients in a tertiary hospital in Mogadishu.

Methods: We retrospectively evaluated the files of 1082 patients admitted to our ICU during the year 2021.

Results: The majority (39.7%) of the patients were adults (aged between 20 and 39 years), and 67.8% were male patients. The median ICU length of stay was three days (IQR = 5 days), and nonsurvivors had shorter stays, one day. The mortality rate was 45.1%. The demand for critical care services in low-income countries is high.

Conclusion: The country has a very low ICU bed capacity. Critical care remains a neglected area of health service delivery in this setting, with large numbers of patients with potentially treatable conditions not having access to such services.

Introduction

Despite continuing to be one of the world's lowest-income countries, Somalia has improved its healthcare over the last two decades. Its citizens' life expectancy has increased from 51 years in 2000 to 58 years in 2020 [1]. However, until recently, there had been little investment in secondary and tertiary care services. Given the disproportionate burden of diseases like malaria, tuberculosis, HIV/AIDS, and trauma, the prevalence of critical illness in developing countries is disproportionately high. 25% of the world's disease burden is carried by sub-Saharan Africa [2]. Critically ill patient management necessitates substantial human, material, and financial resources. Low-income nations like Somalia often have fewer of these resources. Large hospitals in urban or metropolitan settings are the most common places to find major intensive care units (ICUs) [3]. The intensive care unit (ICU) facility is a significant and expensive part of modern healthcare. Despite extensive ICU infrastructures in all developed countries, the number of ICU beds available varies greatly between and among nations [4]. Evaluating the characteristics and outcomes of critically ill patients admitted to ICUs in low-income nations may contribute to determining the priorities and resources needed to enhance the care of critically ill patients in resource-constrained areas of the world. So, in this study, we aimed to determine the characteristics, admission diagnoses, and outcomes of patients admitted to the Mogadishu Somali Turkish Training and Research Hospital ICU from January 2021 until December 2021. The information collected will be used by other ICUs in the country to improve services and assist institutions that are establishing new ICUs, and the findings are believed to add something valuable to the growing amount of research showing the differences between critical care in high- and low-income nations.

Methods

This retrospective study was approved by the Institutional Review Board of Mogadishu Somali Turkey Training and Research Hospital. The health information system was reviewed, and anonymity was preserved for each case record. We evaluated all adult patients admitted to the intensive care units from January 2021 to December 2021.
The data analysed in this study comprised the demographics, characteristics, and outcomes of the patients admitted to Mogadishu Somali Turkey Training and Research Hospital's ICU.

ICU Admissions. The criteria for admission to an intensive care unit were used to identify hospitalizations with ICU admissions by the medical or surgical teams of the hospital.

Outcomes. The primary outcome measure was short-term mortality of ICU-admitted hospitalizations, defined as death occurring between ICU admission and discharge from the ICU to an inpatient ward.

Study Covariates. (a) The first covariate is demographics such as age and gender; (b) the second is comorbid conditions; (c) the third is the patients' need for hospitalization for medical or surgical procedures (based on the primary diagnosis-related grouping); (d) the fourth is organ failures; (e) the fifth is hospital length of stay; and (f) the sixth is the interventions made during the ICU stay, such as intubation, hemodialysis, and so on.

Data Analysis. Continuous variables were reported as the mean (standard deviation, SD) or the median (interquartile range, IQR), while categorical variables were summarized as numbers and percentages. Continuous variables were compared using the t-test, Mann-Whitney test, and Kruskal-Wallis test, as appropriate. We used the Kaplan-Meier estimator to show the survival outcome.

Results

The majority (39.7%) of the patients were adults (aged between 20 and 39 years); 734 (67.8%) were males. 595 (54.9%) were transferred to inpatient wards, while 488 (45.1%) died in the intensive care unit. The median ICU length of stay was 3 days (IQR = 5 days), and nonsurvivors had a shorter stay of one day. The median number of organ failures was one (IQR = 1). 73.4% of patients were in the ICU for less than 6 days, whereas only 4.2% of patients were in the ICU for more than 20 days. Table 1 describes the sociodemographic and clinical characteristics of patients admitted to the ICU. When it comes to diagnosis on admission, complications related to acute and chronic kidney injuries topped the list for medical patients, while perforation due to organ injury topped the list for surgical patients, as Somalia is a war-torn country where gunshot injuries are prevalent. Table 2 shows sociodemographic factors and their association with outcome using the Chi-square test. Since the p values for age group, comorbidity, type of organ failure, clinical intervention, type of admission, duration, and number of organ failures were less than 0.05, we may infer that all these variables significantly affect the outcome. The Kaplan-Meier estimator gives the survivor function S(t). From Figure 1, we can see that most patients died in the first 20 days, as indicated by the steep slope of the estimated survival function in the first 20 days (Figure 2). S(10) = 0.3 means that the probability of a patient surviving longer than ten days is 30%. The Kaplan-Meier estimator showed that the probability of a patient surviving longer than two days in the ICU is 76%; however, the probability of surviving longer than 20 days dropped to 23%. The survival curves do not overlap, indicating that there is a difference between the groups being studied (Figures 3 and 4). The factors associated with outcome in univariate logistic regression are displayed in Table 3. The p values of hypertension, renal failure, respiratory failure, clinical intervention, and duration are below 0.005. Therefore, these variables are statistically significant.
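As a minimal sketch of the survival and regression analyses described above, the snippet below fits a Kaplan-Meier survivor function and a univariate logistic regression on a toy per-patient table; all values are invented, not the study data. lifelines' CoxPHFitter could be fit to the same table to obtain hazard ratios of the kind reported in Table 4.

```python
# Kaplan-Meier survivor function S(t) plus a univariate logistic regression
# whose exponentiated coefficient is the odds ratio (OR). Toy data only.
import numpy as np
import pandas as pd
import statsmodels.api as sm
from lifelines import KaplanMeierFitter

df = pd.DataFrame({
    "los_days":      [1, 3, 5, 2, 20, 7, 4, 15],   # ICU length of stay
    "died":          [1, 0, 1, 1, 1, 1, 0, 0],     # 1 = died in ICU
    "renal_failure": [1, 0, 1, 1, 0, 1, 1, 0],
})

kmf = KaplanMeierFitter()
kmf.fit(durations=df["los_days"], event_observed=df["died"])
print(kmf.survival_function_)       # read off values such as S(10)

X = sm.add_constant(df[["renal_failure"]])
fit = sm.Logit(df["died"], X).fit(disp=0)
print(np.exp(fit.params))           # odds ratios, as in Table 3
```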
On univariate analysis, adult patients had a considerably higher likelihood of experiencing the outcome (OR = 1.707, 95% CI: 1.271-2.285). Surprisingly, compared to patients with three or more organ failures, patients without organ failure were more likely to have the outcome (OR = 4.586, 95% CI: 2.174-9.674). The outcome was significantly lower in patients with hypertension as compared to patients without hypertension (OR = 0.629, 95% CI: 0.463-0.855). Patients who did not have respiratory failure were less likely to have the outcome compared to patients who had respiratory failure (OR = 1.472, 95% CI: 1.111-1.950). Patients without renal failure were more likely to have the outcome compared to patients with renal failure. Patients admitted through the medical service had lower odds of the outcome than those admitted through the surgical service (OR = 0.614, 95% CI: 0.497-0.786). Patients who stayed less than 7 days in the ICU were less likely to have the outcome in comparison with patients who stayed more than 7 days (OR = 2.114, 95% CI: 1.559-2.868). The Cox proportional hazards model estimates are shown in Table 4. The variables respiratory failure, clinical intervention, intubation, and duration have highly statistically significant coefficients. The p value for clinical intervention is <0.005 and the HR is 1.292, indicating a strong relationship between clinical intervention and an increased risk of death. The HR for respiratory failure is 1.292; this means that a patient's hazard increases by a factor of 1.292 (versus the baseline).

Discussion

We conducted a single-center analysis of ICU-admitted patients in Mogadishu. The center has 21 tertiary-level ICU beds and is the only such unit in Mogadishu, a city with a population of around 2.5 million [5]. This clearly indicates the scarcity of tertiary-level ICU capacity in Somalia. In this retrospective study, we aimed to determine admission patterns in our ICU during the year 2021. The overall mortality rate was 45.1%, somewhat higher than reports from the neighbouring country, Ethiopia [6]. This could be caused by patient care-seeking delays, a lack of treatment protocols, a pharmaceutical shortage, or other factors. The outcome of intensive care depends on many factors: the judgments of the surgeons and physicians who first decide that their patients need intensive care, as well as the facilities offered in the unit and the competence and timeliness with which care is administered [7]. The length of stay was comparable to some US hospitals (3 days) [8], but lower than studies from Austria and Switzerland (7.6 days in survivors and 11.7 in nonsurvivors) [9] and higher than Scandinavian countries (1.9 in nonsurvivors) [10]. Males were admitted to our ICU in greater numbers (67.8%) than females. The predominance of males accessing health facilities, which is also observed in overall hospital admissions as recorded by other studies in Ethiopia, could be one of the causes [11,12]. However, this is in contrast to the general demographics [13]. Identifying patients who should be targeted for interventions early in their hospital or ICU course is one of the challenges to improving the quality of end-of-life care in the ICU. There are several potential methods for identifying the most appropriate patients.
The SUPPORT study, for example, was aimed at seriously ill patients with one or more of the following illnesses: acute respiratory failure, multiple organ system failure with sepsis, multiple organ system failure with malignancy, coma, chronic obstructive pulmonary disease with respiratory failure, decompensated congestive heart failure, severe cirrhosis, metastatic colon cancer, and non-small-cell lung cancer [14]. As our ICU was a level 3 ICU, most of the interventions needed to support vital organs were available, such as mechanical ventilation and hemodialysis. When it comes to the type of organ failure, renal failure topped the list, with respiratory failure ranking second. This is consistent with a previous report by Sari and Bashir [15] on patients admitted to the medical ward, which showed that 45% of the admissions were due to renal failure. However, it is contrary to the findings of Sakr et al. from Belgium, where cardiovascular failure was the most common type of organ failure [16].

Conclusion

In this first ICU-related study from Somalia, we demonstrated the demographic characteristics and the outcomes of patients admitted to a level 3 ICU in Mogadishu. It also demonstrated the scarcity of ICU beds in Somalia's capital city of Mogadishu. Although there are facilities for tracheal intubation, mechanical ventilation, hemodialysis, and patient monitoring, the survival rate of patients in our ICU is uncomfortably low. Given the lack of data in our region on workload, outcomes, costs, and the heterogeneity of ICUs, any recommendation about future provision will be highly speculative.

Limitations

This study has limitations. Due to the retrospective nature of the study, we could not obtain all data relating to the APACHE score and SAPS.

Data Availability

The data used to support the findings of this study are available from the corresponding author upon request.

Consent

As our study is retrospective, consent was waived by the Ethical Committee of Mogadishu Somali Turkey Training and Research Hospital.
2023-01-21T16:31:59.925Z
2023-01-17T00:00:00.000
{ "year": 2023, "sha1": "b288bb9d080da255cc040285b95530709f053400", "oa_license": "CCBY", "oa_url": "https://downloads.hindawi.com/journals/arp/2023/9388449.pdf", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "ac3c6094b43966fdb33feb190e3167711ed86a28", "s2fieldsofstudy": [ "Medicine", "Political Science" ], "extfieldsofstudy": [ "Medicine" ] }
202982239
pes2o/s2orc
v3-fos-license
Water Absorption and Flexural Property of Unidirectional Polypropylene/Sumberejo Kenaf Fiber Composites

Kenaf fiber can be used as an alternative to synthetic fiber reinforcement in composites. This research was conducted in order to determine the flexural and diffusion properties of the composite by varying the fiber fractions. Before being used as a reinforcement, kenaf fiber was first treated with alkali treatment in NaOH solution for 24 hours. Polypropylene and kenaf fibers were fabricated with a hot press machine. The fiber fractions were 30 wt%, 40 wt%, and 50 wt%, and pure polypropylene samples were also fabricated as a comparison. The highest flexural strength, the minimum water absorption and the minimum water content were found in the polypropylene/30 wt% kenaf fibre composites, with values of (4.77 ± 0.799) MPa, (3.61 ± 0.823)% and (0.192 ± 0.154)%, respectively.

Introduction

Nowadays, composites have become one of the materials widely used in daily necessities, from household appliances to automotive parts. Synthetic fibers have been used in composites as reinforcements, but they are non-biodegradable and therefore not environmentally friendly materials [1]. Composites with natural fiber reinforcement can be an alternative to replace synthetic fibers that can cause damage to nature. The use of natural fibers as reinforcement in composites has been widely researched and developed at this time. The increased utilization of natural fibers is due to their low cost, high specific modulus, light weight, and lower energy requirements [2].

Table 1. Chemical composition of kenaf fiber [4].

Kenaf fiber is one of the natural materials that can be used as an alternative to synthetic fibers. Kenaf is a non-wood lignocellulose material because its main constituents are cellulose, hemicellulose, and lignin [3]. Table 1 shows the chemical composition of KF [4]. Before being used as a reinforcement in composites, natural fibers need alkali treatment. Mechanical properties such as strength, flexibility, and stiffness of natural fibers can be improved by this treatment [5]. To obtain optimal strength of the kenaf fiber composite, a 5% NaOH aqueous solution was used [6]. Fiber fractions were varied to determine which fiber fraction is best. Based on research conducted by Ollivia [7], the highest tensile strength and deflection temperature were obtained from PP/40 wt% Sumberejo kenaf fiber composites. There is a lack of studies on the water content and flexural properties of PP/Sumberejo kenaf fiber composites. The aim of this research was to determine the water content, water absorption, and flexural properties of PP/Sumberejo kenaf fiber composites against the requirements of SNI 01-4449-2006.

Preparation of Kenaf Fiber

Kenaf fibers (KF) and polypropylene (PP) were the materials used in this research. KF was initially treated with alkali treatment to remove lignin by soaking it in a 5% NaOH solution for 24 hours. The treated KF was rinsed with distilled water until the rinse water was colorless. After that, the KF was dried at room temperature for 48 hours and then in an oven at 60 °C for 24 hours. Both untreated and treated KF were analyzed using Fourier Transform Infrared Spectroscopy (FTIR) to identify the chemical changes of the fibers in the range of 400-4000 cm⁻¹. For this measurement, pelletized mixtures of KF samples and potassium bromide (KBr) powder were used.
Fabrication of Composite

PP pellets were formed into sheets using a hot press machine at 190 °C for 6-7 minutes and were then cooled using a cold press machine for 2-3 minutes. The purpose of turning the PP into sheet form was to allow the PP to disperse easily and wet the KF. PP and KF were weighed and arranged unidirectionally in molds according to the desired weight fraction, and the composite was fabricated using a hot press machine at a pressure of 5 MN/m² and a temperature of 190 °C for 6-7 minutes. The fiber fractions used in this experiment were 30 wt% (PP/KF30), 40 wt% (PP/KF40), and 50 wt% (PP/KF50). Referring to ASTM D7264, the flexural test specimens were cut with a thickness-to-length ratio of 1:20 and marked with a dot at the center. The flexural test was conducted at a loading speed of 10 mm/minute. SNI 01-4449-2006 was used as a reference in this research. Specimens with dimensions of 50 mm x 50 mm were tested by soaking them in water at room temperature for 24 hours. To calculate the water absorption of the specimens, Equation (1) was used:

WA (%) = (m2 − m1) / m1 × 100 (1)

where m1 is the specimen weight before soaking and m2 is the weight after 24 hours of immersion. According to SNI 01-4449-2006, the water contents of the composites were obtained based on the weight of 50 mm x 50 mm specimens before and after being heated in an oven at a temperature of 103 °C until the weight was constant. Equation (2) was applied to calculate the water content:

WC (%) = (m0 − md) / md × 100 (2)

where m0 is the initial specimen weight and md is the constant oven-dry weight. Specimen morphology before and after the flexural test was observed with an optical microscope (OM) to identify the type of damage on the composite surfaces.

Figure 1 shows the Fourier Transform Infrared spectra of the untreated and alkali-treated KFs. The broad peak around 3300 cm⁻¹ can be associated with the O-H group of raw kenaf fiber and was identified as cellulose. The peak at 2978 cm⁻¹ corresponds to the C-H group. The region between 2000 and 1000 cm⁻¹ contains the carbonyl stretching band (C=O, 1745 cm⁻¹) related to the presence of lignin and/or hemicelluloses. The peak at 1732 cm⁻¹ belongs to hemicelluloses, and its disappearance showed that hemicelluloses were removed from the fiber surface after the alkali treatment. C-O stretching of the lignin was found at the 1243 cm⁻¹ peak, and the loss of this peak in treated KF confirmed the removal of lignin by the alkali treatment. These results are similar to those of Zarina et al. [8], who conducted research on different treatments of kenaf fibers.

Figure 3 presents the water absorption of the PP/KF composites at 25 °C over 24 hours. The water uptake of the PP/KF50 composites had the greatest value because these composites contain the highest KF loading, which absorbed more water. According to previous research on kenaf fiber/unsaturated polyester composites conducted by E. Osman [10], the higher the fiber fraction, the higher the water absorption. PP/KF30 had the smallest value of (0.192 ± 0.154)%. All of the Sumberejo kenaf fiber composites fulfilled SNI 01-4449-2006. The results of the water content test are illustrated in Figure 4. The water content of all specimens complied with SNI 01-4449-2006. However, the PP/KF30 composites had the lowest water content because these composites had the lowest fiber loading. PP as a polymer does not contain water, and the KF was the component that released water from the composites.
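A minimal sketch of Equations (1) and (2) in code is given below; the specimen weights are invented for illustration and are not measurements from this study.

```python
# Water absorption (Eq. 1) and water content (Eq. 2) from specimen weights.
def water_absorption(w_dry: float, w_soaked: float) -> float:
    """Eq. (1): percent mass gained after 24 h immersion."""
    return (w_soaked - w_dry) / w_dry * 100.0

def water_content(w_initial: float, w_oven_dry: float) -> float:
    """Eq. (2): percent mass lost on oven drying at 103 C (dry basis)."""
    return (w_initial - w_oven_dry) / w_oven_dry * 100.0

print(f"WA = {water_absorption(10.00, 10.36):.2f} %")  # ~3.6 %, cf. PP/KF30
print(f"WC = {water_content(10.00, 9.98):.2f} %")      # ~0.2 %, cf. PP/KF30
```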
Conclusion

It can be concluded that the highest flexural strength, the minimum water absorption, and the minimum water content were found in the PP/Sumberejo KF30 composites, with values of (4.77 ± 0.799) MPa, (3.61 ± 0.823)%, and (0.192 ± 0.154)%, respectively. The optical microscope observation indicated that fiber pull-out occurred after the flexural test.
2019-09-17T02:46:47.674Z
2019-09-04T00:00:00.000
{ "year": 2019, "sha1": "793e1d0aa58bbab7e6585e498016f1cf2680a365", "oa_license": null, "oa_url": "https://doi.org/10.1088/1757-899x/599/1/012014", "oa_status": "GOLD", "pdf_src": "IOP", "pdf_hash": "6711125c22d33efff9d1ce74e4a63a62de5f47bb", "s2fieldsofstudy": [ "Materials Science", "Environmental Science" ], "extfieldsofstudy": [ "Physics", "Materials Science" ] }
266461016
pes2o/s2orc
v3-fos-license
Mobile Robot Vision Image Feature Recognition Method Based on Machine Vision

The rapid development of machine vision and the widespread application of mobile robots in various environments have posed new demands and challenges for efficient visual image feature recognition. To improve the efficiency and accuracy of mobile robot visual image feature recognition, a mobile robot visual image feature recognition method based on machine vision is proposed in this paper. Firstly, the development of mobile robot vision is analyzed, and the specific functions of the robot visual feature recognition method are designed. Then, the Fourier series method is used to collect the mobile robot visual image, and the matrix associated with the auto-correlation function is calculated according to the Harris algorithm to complete the edge feature extraction of the mobile robot visual image. SIFT feature points of the mobile robot visual image are classified, and mobile robot visual image feature recognition is realized through machine vision. The experimental results showed that when the number of images was 600, the accuracy of image feature recognition and the loss value of image edge feature extraction of this method were 96.98% and 6.38%, respectively, and the number of iterations was 500. The time for visual image feature recognition with this method was only 3 minutes. The method had the lowest error mean and error variance under different noise conditions. This method can effectively improve the efficiency and accuracy of image feature recognition, promote the development of machine vision and mobile robot technology, and stimulate new research and applications.

Introduction

In today's era of rapid development of science and technology, a variety of advanced technological products with various functions, such as mobile terminal equipment, have entered thousands of households, which was unimaginable before. Since the mid-1980s, robots have transitioned from the structured environment of factories to daily living environments including shopping malls, restaurants, and households. This expansion has penetrated further into chaotic and uncontrollable environments. Intelligent service robots have advanced significantly in recent years. They can complete tasks independently, collaborate with humans, or carry out tasks under human guidance. The progress of mobile robots acting as shopping assistants, caregivers, and receptionists is particularly noteworthy [1]. Robot technology has become a new generation of revolutionary technology after computer technology, which will affect the development pace of the whole society. Robots can be broadly divided into industrial robots and service robots, which have different application environments. Industrial robots, such as palletizing robots, are commonly utilized in production settings, while service robots, such as restaurant service robots and humanoid robots, are often employed in indoor settings [2]. After years of collection and sorting, the International Federation of Robotics has given a preliminary definition of the service robot: the service robot is a semi-autonomous or fully autonomous robot that can perform services useful to human well-being, excluding equipment engaged in production.
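The abstract above names a Harris-based step: building the local auto-correlation matrix M and scoring R = det(M) − k·trace(M)². As a minimal, generic sketch of that operation (not the paper's exact implementation; the input file name and parameter values are illustrative assumptions):

```python
# Harris corner/edge response from the local auto-correlation matrix,
# using OpenCV's built-in implementation. Inputs are illustrative.
import cv2
import numpy as np

img = cv2.imread("robot_view.png")        # hypothetical camera frame
assert img is not None, "image file not found"
gray = np.float32(cv2.cvtColor(img, cv2.COLOR_BGR2GRAY))

# blockSize: neighborhood for M; ksize: Sobel aperture; k: Harris constant
response = cv2.cornerHarris(gray, blockSize=2, ksize=3, k=0.04)

strong = response > 0.01 * response.max() # simple threshold on the response
print(f"{strong.sum()} pixels with a strong Harris response")
```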
An important sensing source for mobile robots is the visual sensor. However, early research on this topic was often abandoned by many researchers due to two major defects: high hardware costs and lengthy computing times [3]. Due to the emergence of large-scale integrated circuits, improvements in computing speed and visual sensor performance, and declining prices, modern visual mobile robots have developed rapidly. In addition, the continuous advances in image processing and visual technology, as well as the promising development prospects and importance of visual mobile robots in military applications, have also promoted this rapid progress. The robot competitions held around the world, including some influential international robot competitions, have all promoted the development of this field [4]. At first, the vision systems of mobile robots were applied only in specific settings. Later, as their functionality strengthened, they developed toward simulating the function of human eyes. Now, vision systems have become increasingly practical. Summarizing this development process, it can be divided into four distinct stages.

The initial stage ran from the 1960s to the 1970s. At this time, robot vision was only academic, but its architecture and image processing pipeline were studied theoretically. Institutions such as the Stanford Research Institute and the French National Research Center made outstanding contributions in this field [5]. From the late 1970s to the mid-1980s, robot vision developed into its military stage. At this time, the main purpose of its research was to design and develop, for the military, a series of vehicles that could move autonomously in an unstructured environment, including vehicles and transportation robots. The Autonomous Land Vehicle (ALV) project of that time was a representative example, and the high-speed intelligent vehicle technology derived from the project made outstanding contributions to the civilian application of robot vision [6]. The subsequent phase entailed supplementary scientific research that focused primarily on Mars rovers and autonomous vehicles utilized in related scientific investigations. This research also achieved various technological advancements and innovations. Currently, the practical stage of robot vision development has been reached. Earlier research findings, in conjunction with rapid advances in software and hardware technology, have facilitated the gradual integration of visual mobile robots into daily life. As a result, their functions have become more practical and increasingly intertwined with people's lives.

Despite undergoing several stages of rapid development, the vision of mobile robots remains significantly inferior to human visual abilities. However, advancements in large-scale integrated circuits, machine vision, and artificial intelligence make it possible for mobile robots to eventually have functions comparable to human eyes. Therefore, relevant scholars have made some progress in comparative research.

Kong Yan et al.
proposed a human-behavior recognition method based on visual attention [7].Using the depth convolution neural network of visual attention, they added a weight to the video image features to pay visual attention to the beneficial areas in the features.Experiments were carried out on the self-built oilfield-7 oilfield data set and hmdb51 data set to verify the effectiveness of the proposed network model suitable for human behavior in oil field.This method could improve the effect of human behavior recognition.Zeng Jinle et al. proposed an automatic recognition method for weld trajectory based on multi visual feature acquisition and fusion [8].It combined multiple visual information of the weld seam area for comprehensive decision-making, fully utilizing the redundancy and complementarity between different visual feature information to accurately identify the position of the weld seam trajectory.Thus, the deviation between the actual welding trajectory and the machine teaching trajectory was compensated in real-time, improving the accuracy of welding seam trajectory recognition.Xue Teng et al. presented a technique for stable robot gripping that relies on visual perception and prior tactile knowledge learning [9].The authors assessed the gripping performance by measuring the object's resistance to external disturbances during the gripping process.On this basis, the visual tactile joint data set was established, and the tactile prior knowledge was learned.The stable grasp structure was formed through the fusion of visual and tactile data in the robotic grasp system.Ten target objects were experimentally verified.The stability of the grasping method had been improved resulting in a good robotic grasping effect, although the efficiency of stable grasping remained low. Improving the visual recognition ability of mobile robots is of great significance for enhancing their intelligence, safety, and accuracy, especially in automation and interactive tasks.This article proposes a machine vision-based image feature recognition method for mobile robots to develop advanced image processing algorithms, achieve environmental awareness, and enhance the autonomous decision-making ability of mobile robots. Method and Function Design of Robot Visual Feature Recognition Robot vision technology aims to create a vision system for robots that enables them to perceive the environment as flexibly and intelligently as human vision system and make corresponding processing in time.Bottom vision, middle vision and high vision are three different levels of vision technology, as shown in Figure 1 [10]. Robot vision is a technology that enables automatic image-based detection, control, and analysis.In robot vision system, computer is used to simulate human visual objects.In establishing a visual information system for computer-assisted human completion of visual tasks, application of image understanding and recognition in photographic geometry, probability theory, random processes, artificial intelligence, and related theories are Q.Dong necessary [11].For example, human eye recognition and robot vision need the help of two kinds of intelligent activities: perception and thinking. Image acquisition and preprocessing High level vision Fig.1. 
Schematic diagram of robot vision technology level

The robot visual feature recognition method consists of two parts: hardware and software. The hardware part can be regarded as the skeleton and body of the robot vision system. It includes image acquisition components (such as a Charge Coupled Device image sensor or a Complementary Metal Oxide Semiconductor camera), video signal digitization components (such as an image acquisition card), video signal central processing components, and processors (such as Digital Signal Processor based fast processors, single-chip microcomputers, and systolic architectures) [11]. There are generally two ways of image acquisition: monocular vision and stereo vision. Monocular vision is a vision system with one vision sensor, while stereo vision generally refers to a vision system with two vision sensors. Monocular vision has the advantages of a simple structure, short measurement time, and low program complexity; however, for applications that demand high accuracy, it exhibits limited robustness. Binocular vision can make up for this deficiency of monocular vision when high accuracy is required [12].

The software part is the soul and idea of the robot vision system, including the development platform of the software system (computer software), the software implementation, the functional algorithms, and the robot control software. This part is mainly the implementation of image processing theory and algorithms. The composition of robot visual feature extraction is shown in Figure 2.

The one-dimensional function $g(t)$ is defined as a time-continuous analog signal, and it is represented by its samples $g(kT)$, where $k$ takes integer values and $T$ is the sampling period [13]. The original function $g(t)$ is reconstructed from the samples $g(kT)$ by interpolating between the samples. Generally, the following interpolation can be used:

$$\hat{g}(t) = \sum_{k} g(kT)\, h(t - kT)$$

where $h(t - kT)$ is the interpolation function $h(t)$ shifted by $kT$ along the $t$ axis, and the effect of the sample $g(kT)$ on $\hat{g}(t)$ at time $t$ is weighted by the coefficient $h(t - kT)$. In formula (3), $\tau$ and $\delta$ are both parameters of the periodic sampling function, which can be expanded into a Fourier series. The Fourier expansion coefficients $a_n$ can be obtained from formula (5):

$$a_n = \frac{1}{T} \int_{-T/2}^{T/2} f(t)\, e^{-j 2\pi n t / T}\, dt$$

In the integral above, only the points where the sampling function is non-zero contribute to $a_n$. As shown in formula (7), $\hat{g}(t)$ can be expressed as a sum of convolutions of $f \cdot g$ with $h$. According to the convolution property of the Fourier transform, the transform of each term under the summation sign in formula (7) is the product of the Fourier transforms of the two functions [14], where $G(\omega)$ and $H(\omega)$ represent the Fourier transforms of $g(t)$ and $h(t)$, respectively. That is to say, the Fourier series can be used to collect visual images of mobile robots.

The robot vision system primarily focuses on enabling robots to emulate the visual feature recognition function of humans and other organisms. This enables a robot to perceive, conceptualize, and evaluate its surrounding environment, thereby achieving its recognition and comprehension objective. The main tasks of recognizing image features through robot vision include image acquisition, preprocessing, image segmentation, feature description, recognition and classification, comprehension of three-dimensional information, scene depiction, image interpretation, and more, as indicated in Figure 3.
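To make the sample-and-interpolate model above concrete, the following is a minimal NumPy sketch, not taken from the paper: a continuous signal g(t) is represented by its samples g(kT) and approximately reconstructed as a weighted sum of shifted interpolation kernels h(t − kT), here the ideal sinc kernel. The test signal, the sampling period, and the evaluation window are illustrative assumptions.

```python
import numpy as np

# Illustrative band-limited signal (assumed for this sketch only).
def g(t):
    return np.sin(2 * np.pi * 3.0 * t) + 0.5 * np.cos(2 * np.pi * 7.0 * t)

T = 1.0 / 20.0                 # assumed sampling period; 20 Hz satisfies Nyquist for 7 Hz
k = np.arange(0, 40)           # sample indices
samples = g(k * T)             # g(kT): the sampled representation of the signal

def reconstruct(t, samples, T):
    """Interpolate g_hat(t) = sum_k g(kT) * h(t - kT) with the ideal sinc kernel h."""
    idx = np.arange(len(samples))
    # np.sinc is the normalized sinc: sinc(x) = sin(pi*x) / (pi*x)
    kernel = np.sinc((t[:, None] - idx[None, :] * T) / T)
    return kernel @ samples

t_dense = np.linspace(0.2, 1.6, 500)      # evaluate away from the edges of the sample window
g_hat = reconstruct(t_dense, samples, T)
print(f"max reconstruction error on the interior: {np.max(np.abs(g_hat - g(t_dense))):.4f}")
```

The residual error comes only from truncating the infinite interpolation sum; with a longer sample record the reconstruction approaches the original band-limited signal.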
Based on the research of two-dimensional image recognition algorithms, this paper proposed a real-time point cloud image recognition algorithm, which is a recognition and judgment method that integrates feature space and the minimum distance of the same element.The effectiveness of the real-time recognition method was verified on the computer, and successfully integrated into the robot system.The experimental data were analysed and processed.Finally, the problems in the current robot vision system were analysed, and some suggestions were proposed for the design of the next generation robot vision system to optimize the robot vision system.Image feature extraction is a key problem in the field of computer vision image processing.Image feature extraction exists due to machine vision.To recognize the image, the computer extracts the relevant pixels composed of the image, and analyzes the pixels to determine their feature attribution, which is image feature extraction [15].From the starting point of the first mock exam, it is a method to transform a set of measured values of a pattern to highlight the typical characteristics of the pattern.It can be used to identify the feature points in some regions as the input of continuous identification through image analysis and transformation.The starting point of subsequent processing is the image features.As the "interesting" part of image description, image features reflect the most basic attributes of the image itself, which can be quantified in combination with vision [16].Image visual features are the description of image regions containing significant structural information of the image, such as edges, corners, and other image features.To detect the region of interest of an image, a salient feature measure is defined and calculated by the extreme values of image pixels and local regions.The purpose of examining different image sizes is to enable identification of the same image region, even if it exists within distinct scale spaces of various images.This process is called scale invariant detection.The extreme value of salient feature measurement is selected to ensure the repeatability of the inspection process.The definition of feature repeatability is because the same feature point may be detected in the same scene of two or more images [17].In fact, there are many kinds of image features that can be extracted from digital images, including corner features, edge features and speckle features. Corner Feature Extraction of Mobile Robot Vision Image Generally speaking, a point is defined as the intersection of two edges.In a digital image, a point refers to the maximum value of the adaptive correlation function corresponding to the point pixel.In recent years, a series of point feature extraction algorithms have emerged in the field of image processing, which is mainly divided into two categories.The first kind of algorithm first extracts the image edge information, and then looks for the point with the maximum curvature value, or the intersection of edge segments as point features.The second kind of algorithm is mainly aimed at finding point features in gray image.Point features are defined as a point with two dominant directions and different edge directions in the local neighborhood of this point.The ability to detect the same point of the same image under different backgrounds, including varying lighting conditions, is a reflection of the quality of extracting point features [18]. 
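As an illustration of the second category of point feature extraction (finding corners directly in the gray image), here is a minimal sketch using OpenCV's Harris detector, which implements the autocorrelation-matrix approach analysed in the next subsection. The file name, block size, Sobel aperture, k value, and threshold fraction are assumptions made for this sketch, not values from the paper.

```python
import cv2
import numpy as np

# Load an image and convert to grayscale (the file name is a placeholder assumption).
img = cv2.imread("robot_lab_scene.png")
gray = np.float32(cv2.cvtColor(img, cv2.COLOR_BGR2GRAY))

# cv2.cornerHarris computes the corner response P = det(L) - k * trace(L)^2
# over a blockSize x blockSize neighbourhood, using Sobel gradients of aperture ksize.
response = cv2.cornerHarris(gray, blockSize=2, ksize=3, k=0.04)

# Keep only points whose response exceeds a fraction of the maximum response
# (the thresholding step described in the text; the fraction is an assumed value).
threshold = 0.01 * response.max()
corners = np.argwhere(response > threshold)   # (row, col) coordinates of corner pixels
print(f"detected {len(corners)} corner pixels")
```

As the text notes, the threshold has no intuitive physical meaning, so in practice it is usually expressed relative to the maximum response, as done here.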
The Harris algorithm calculates the matrix associated with the auto-correlation function and takes the first-order curvatures of the auto-correlation function as the eigenvalues of that matrix. When both curvature values at a point in the image reach a local maximum, the point is defined as an image point feature. The mathematical expression of the Harris algorithm is as follows:

$$L = \begin{bmatrix} h_x^2 & h_x h_y \\ h_x h_y & h_y^2 \end{bmatrix}$$

where $h_x$ is the gradient in the $x$ direction and $h_y$ is the gradient in the $y$ direction. The corner response function of the Harris algorithm is:

$$P = \det(L) - k\,(\operatorname{tr} L)^2$$

where $\det(L)$ is the determinant of the matrix, $\operatorname{tr} L$ is the trace of the matrix, and $k$ is a default constant. The corner response criterion $P$ is positive in corner regions, negative in edge regions, and small in flat regions. Whether a point is a corner is judged by calculating the $P$ value at the center of an image window: if $P$ is greater than a given threshold, the point is considered a corner [19].

The Harris point feature extraction algorithm has the advantages of simple calculation, uniformly and reasonably distributed corner features, quantitative extraction of feature points, and a stable operator. The feature points extracted by the Harris algorithm are the pixels whose response is a local maximum within their neighborhood. The threshold in the Harris algorithm depends on the attributes of the actual image, such as size and texture; it has no intuitive physical meaning, and its specific value is difficult to determine.

Edge Feature Extraction of Mobile Robot Vision Image

Line features include edges and lines. Edges separate local areas with different features, while lines are edge pairs that delimit the same feature area. Edges are very important for people to distinguish objects, and edge extraction is a basic and important problem in image analysis. In digital images, edges represent object boundaries, and these distinct boundaries can help people identify objects directly on many occasions. Therefore, edge feature extraction has important application value in image segmentation, image reconstruction, and target recognition [20]. An edge is located where the brightness of the two-dimensional image function changes suddenly and sharply from one level to another, such as from a white square area to a black background area. An edge is a collection of points that are the extreme values of the local region of the image gradient.

For an image $I(x, y)$, $x$ and $y$ are the horizontal and vertical coordinates of a pixel, and the directional derivatives are $h_x$ and $h_y$, respectively [21]. Based on the gradient and direction distribution of the pixels in the neighborhood of a feature point, the gradient amplitude is:

$$m(x, y) = \sqrt{h_x(x, y)^2 + h_y(x, y)^2} \quad (10)$$

and the direction of the gradient is:

$$\theta(x, y) = \arctan\frac{h_y(x, y)}{h_x(x, y)} \quad (11)$$

Finding where the first derivative of $I(x, y)$ peaks is the basic idea behind first-derivative edge detectors. An odd-symmetric filter can approximate the first derivative, and the peaks of the convolution output correspond to edges in the image. Usually, the first derivative of a digital image is computed by convolving the image with a convolution template, referred to as an edge operator, and the resulting output is processed to obtain a gradient map. The gradient map is used as the input of a non-maximum suppression step, and the local maxima of the map are finally thresholded to thin the edge map. Because the first derivative of the image reaches its maximum where the second derivative is zero, the zero-crossing point is
found in the second derivative of the image ( , ) I x y gradient to detect the image edge [22].The typical zero crossing detection operator is Laplace operator: ( , ) ( , ) The Laplacian operator is sensitive to noise, resulting in bilateral effects and an inability to detect the edge direction.It is generally not employed directly for edge detection due to these limitations. In the field of computer vision, the main idea of speckle detection is to detect the region in the image that is larger or smaller than the surrounding pixel gray value.Typical speckle detection algorithms are divided into two categories: derivative based differential method, which is called differential detector, and watershed algorithm based on local extremum.Detecting image spots using Gaussian Laplacian is the most typical spot detection method.The two-dimensional Gaussian function is defined as: Its Laplace transform is defined as: The normalized Gaussian Laplace transform is: The normalized algorithm is a circular symmetric function displayed on the two-dimensional image.This operator is used to detect spots in the image, and two-dimensional spots of different sizes can be detected by changing the value. 3.3 Mobile Robot Vision Image Classification SIFT Feature Point Acquisition Scale Invariant Feature Transform (SIFT) is a common feature point extraction and description algorithm in computer vision.This algorithm can detect feature points with scale invariance and rotation invariance in images, which can be used for tasks such as image matching, localization, and recognition.The core idea of the SIFT algorithm is to detect stable feature points in images at different scales and directions.It detects image features at different scales by constructing Gaussian and differential pyramids.Then, the position of key points is determined by detecting local extremum points at each scale, and the scale space extremum suppression method is used to eliminate unstable edge responses.After detecting the position of key points, the SIFT algorithm calculates the main direction of each key point and describes the key points as feature vectors with rotation invariance.These vectors have good distinguishability and robustness, allowing for image matching and comparison that is not affected by image scaling, rotation, brightness changes, and other disturbances.The SIFT algorithm is widely used in the field of computer vision, especially in tasks such as target recognition, image stitching, 3D reconstruction, and object tracking.Its stability and robustness make it suitable for processing images with various perspectives, lighting conditions, and scale changes, which is why it is popularly used in image processing and computer vision applications. Q. Dong In the SIFT feature point extraction stage, firstly, the scale space is established, and then the extreme points are found from the scale space.In Lowe's algorithm, the intermediate detection point corresponds to 8 adjacent points on the same scale and 9 adjacent points on the upper and lower scales.These two points are compared to 26 points to ensure that extreme points are detected in both scale space and two-dimensional image space.If a point is the maximum or minimum value in the 26 fields of this layer and the upper and lower layers of the dog scale space, it is considered as a feature point of the image under this scale. 
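Before turning to the classification of SIFT feature points, the gradient-based edge operator and the Gaussian–Laplacian speckle detector described in the two preceding subsections can be sketched as follows. Sobel filters stand in for the first derivatives h_x and h_y, the gradient magnitude and direction follow formulas (10) and (11), and a Laplacian-of-Gaussian response is thresholded for blob detection. The file name, kernel sizes, σ, and threshold fractions are illustrative assumptions, and the full non-maximum suppression step mentioned in the text is omitted.

```python
import cv2
import numpy as np
from scipy import ndimage

gray = cv2.imread("robot_lab_scene.png", cv2.IMREAD_GRAYSCALE).astype(np.float64)

# First-derivative edge operator: Sobel approximations of h_x and h_y.
hx = cv2.Sobel(gray, cv2.CV_64F, 1, 0, ksize=3)   # derivative along x
hy = cv2.Sobel(gray, cv2.CV_64F, 0, 1, ksize=3)   # derivative along y

magnitude = np.sqrt(hx ** 2 + hy ** 2)            # gradient amplitude, formula (10)
direction = np.arctan2(hy, hx)                    # gradient direction, formula (11)

# Crude edge map: threshold the magnitude (the fraction is an assumed value).
edges = magnitude > 0.2 * magnitude.max()

# Blob (speckle) detection: scale-normalized Laplacian-of-Gaussian at one assumed scale.
sigma = 3.0
log_response = (sigma ** 2) * ndimage.gaussian_laplace(gray, sigma=sigma)
blobs = np.abs(log_response) > 0.5 * np.abs(log_response).max()

print(f"edge pixels: {edges.sum()}, blob pixels: {blobs.sum()}")
```

Varying σ changes the size of the blobs that respond most strongly, which is the "changing the value" mentioned above for detecting two-dimensional spots of different sizes.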
Based on the 200 robot laboratory images obtained through the mobile robot vision system, the maximum eigenvalue points extracted from an image are almost identical to the minimum eigenvalue points.Additionally, all properly matched feature points originate from the same class of sift extreme points.Therefore, the extracted SIFT feature points are divided into two groups.In the feature matching stage, only the feature points belonging to the same type are compared.In this way, the matching speed is effectively improved without losing the correct matching feature points. To calculate the feature matching time after classifying SIFT feature points, it is assumed that the number of features extracted from the two images are: In feature matching, when only SIFT feature points of the same type are compared, the feature matching time is: Because the number of extracted maximums SIFT feature points is basically the same as that of minimum SIFT feature points in the same image, the formulas are obtained: Through formula (20) and formula ( 21), the following ( 22) is obtained: It is proved that the matching time of classification SIFT feature point matching method is reduced by 50% compared with the original SIFT algorithm.The robot laboratory images collected by the rehabilitation robot vision system are selected.The original SIFT algorithm and the classified SIFT feature point method are applied to carry out the feature matching experiment respectively.Some experimental results are shown in Table 1.In probability theory, the probability density function ( ) h x of the sum of two independent random variables is the convolution of the probability density functions 1 ( ) h x and 2 ( ) h x of the two random variables: The utility of this function can be attributed to the fact that calculating convolution allows for the simple determination of the probability density function of the sum of independent random variables.This is very useful Mobile Robot Vision Image Feature Recognition Method Based on Machine Vision for understanding and analyzing complex probability distributions, calculating the expected values and variances of the sum of random variables, and so on.By using convolution operations, the paper can combine the probability density functions of two independent random variables to obtain the probability density function of their sum.This approach enables scholars to more readily study and describe the combined distribution of several random variables and extract valuable information from it.From the perspectives of statistics and applications, this is of great significance for simulation, prediction, and decision-making problems.In summary, using the convolutional function of the probability density function of two independent random variables can conveniently calculate the probability density function of the sum of random variables, providing convenience for scholars to study the sum of random variables in probability theory and statistics.If π and X is the sum of 1 X and 2 X , the probability density function of X is triangular in the range of [ 2 ,2 ) π π − , because the convolution of the two rectangular functions is a triangular function. 
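The halving of matching time claimed above follows from a simple count: brute-force matching of m descriptors against n descriptors costs on the order of m·n comparisons, whereas splitting each set into two roughly equal classes and matching only within a class costs m₁n₁ + m₂n₂ ≈ (m/2)(n/2) + (m/2)(n/2) = mn/2. The sketch below illustrates the idea with OpenCV SIFT. Because OpenCV does not expose whether a keypoint came from a DoG maximum or minimum, the sign of a Laplacian-of-Gaussian response at the keypoint location is used as a stand-in label; that proxy, the file names, and the σ value are assumptions made for this sketch only.

```python
import cv2
import numpy as np
from scipy import ndimage

def sift_with_classes(path):
    """Return SIFT descriptors plus a two-way class label per keypoint.

    The label approximates the maximum/minimum DoG-extremum split described in the
    text by the sign of a Laplacian-of-Gaussian response (a proxy, not the paper's rule).
    """
    gray = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
    keypoints, descriptors = cv2.SIFT_create().detectAndCompute(gray, None)
    log = ndimage.gaussian_laplace(gray.astype(np.float64), sigma=2.0)
    labels = np.array([log[int(kp.pt[1]), int(kp.pt[0])] > 0 for kp in keypoints])
    return descriptors, labels

d1, c1 = sift_with_classes("frame_a.png")   # placeholder file names
d2, c2 = sift_with_classes("frame_b.png")

bf = cv2.BFMatcher(cv2.NORM_L2, crossCheck=True)

# Baseline: every descriptor against every descriptor (~ m * n comparisons).
full_matches = bf.match(d1, d2)

# Classified matching: compare only descriptors of the same class
# (~ m1*n1 + m2*n2, about half the comparisons when the classes are balanced).
classified_matches = []
for cls in (True, False):
    classified_matches.extend(bf.match(d1[c1 == cls], d2[c2 == cls]))

print(len(full_matches), "matches (full) vs", len(classified_matches), "matches (classified)")
```

Correct matches are preserved as long as corresponding keypoints in the two images receive the same class label, which is the premise the paper reports for its laboratory images.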
Mobile Robot Vision Image Feature Recognition Based on the VC dimension theory of statistical learning theory and the principle of structural risk minimization, machine vision seeks the best compromise between the complexity of the model (i.e., the learning accuracy of specific training samples) and the learning ability (i.e., the ability to identify any samples without errors).The aim is to obtain the best generalization ability.Machine vision has been widely utilized by scholars across various fields due to its numerous benefits, including sample prioritization, algorithm simplification into a quadratic problem, algorithm complexity not being dependent on sample dimension, avoiding the " dimension disaster " problem, and simplification of classification and regression problems, as well as good robustness.In this paper, the feature extraction of mobile robot visual image has been discussed, which paves the way for image classification.This chapter mainly realizes the image classification and recognition by programming the machine vision algorithm. After the program starts, the training sample data of the image is read.The sample space size is a certain value, which is set to 50, 100 or 150 in this paper.When the program determines that all the samples are read, the feature extraction of each image is started. First, the image color feature extraction mainly uses the image histogram feature and the image color feature after histogram equalization and establishes the sample color feature database. Second, the sample image is grayed to prepare for image texture feature extraction.Graying adopts ( , ) 0.3 ( , ) 0.59 ( , ) 0.11 ( , ) , in which ( , ) f i j is the grayscale values of a pixel after conversion.( , ) ( , ) ( , ) R i j G i j B i j 、 、 are the sizes of red, green and blue primary colors of the original image respectively. Then, the texture feature of the sample image is extracted, and the texture feature database is established.Here, the texture feature is extracted by the method of image gray level co-occurrence matrix. Next, two methods are used to establish the support vector machine feature database.One is to use the overall color histogram and texture feature as the feature vector, and the other is to use the three primary color histogram and texture feature as the feature vector.The next step is featuring training.The sample features of four road images are used as training sets for support vector machine feature training.Through rigorous training, a support vector is derived that is capable of matching the sample data features of each image to the largest extent possible.This support vector can serve as a foundation for machine vision to accurately classify the diverse visual features of mobile robot images.After the establishment of sample space features and feature vectors, read in the image data in the test database, extract color and texture features, and then carry out classification and recognition.When a certain data meets certain classification requirements, it will be classified into this class.When the data cannot be classified into any image class, the features of the image will be returned to the feature learning part.The class of the image is determined by learning.The feature parameters of the image are added to the feature vector of the class to provide data support for the subsequent establishment of machine vision model. Finally, through the classification of Lowe algorithm, the purpose of mobile robot visual image feature recognition is achieved. 
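A minimal scikit-learn sketch of the classification stage described above is given below: each image is reduced to a feature vector built from color histograms and a few texture statistics computed on the grayscale image (0.3R + 0.59G + 0.11B), and a support vector machine is trained on labelled samples. The file names, histogram bin count, SVM hyper-parameters, and the use of gradient statistics in place of a full gray-level co-occurrence matrix are simplifying assumptions; the paper's feedback step for unclassifiable images is omitted.

```python
import cv2
import numpy as np
from sklearn.svm import SVC

def feature_vector(path):
    """Color histogram + simple texture statistics for one image."""
    img = cv2.imread(path)

    # Per-channel 16-bin color histograms (normalized), standing in for the
    # three-primary-color histogram feature described in the text.
    hists = [cv2.calcHist([img], [ch], None, [16], [0, 256]).ravel() for ch in range(3)]
    color_feat = np.concatenate(hists) / img.size

    # Grayscale via the weighted sum used in the text: 0.3 R + 0.59 G + 0.11 B.
    b, g, r = (img[..., 0].astype(float), img[..., 1].astype(float), img[..., 2].astype(float))
    gray = 0.3 * r + 0.59 * g + 0.11 * b

    # Texture stand-in: mean and spread of gradient magnitude
    # (a simplification of the gray-level co-occurrence matrix features).
    gx = cv2.Sobel(gray, cv2.CV_64F, 1, 0)
    gy = cv2.Sobel(gray, cv2.CV_64F, 0, 1)
    mag = np.sqrt(gx ** 2 + gy ** 2)
    return np.concatenate([color_feat, [mag.mean(), mag.std()]])

# Hypothetical training set: one labelled sample per road-image class (four classes).
train_paths = ["road_a_01.png", "road_b_01.png", "road_c_01.png", "road_d_01.png"]
train_labels = [0, 1, 2, 3]

X = np.array([feature_vector(p) for p in train_paths])
clf = SVC(kernel="rbf", C=10.0, gamma="scale")   # assumed hyper-parameters
clf.fit(X, train_labels)

print(clf.predict([feature_vector("road_test.png")]))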
The process of image feature recognition for mobile robot vision based on machine vision is shown in Figure 4. The visual image feature recognition of the mobile robot was completed on MATLAB simulation software, and the accuracy and time of visual image feature recognition were verified. Python was used as the programming tool, and the operating system was Windows XP. After the vision system completes the positioning of the target workpiece, the workpiece position information obtained from the camera is transformed into coordinates in the robot world coordinate system by a matrix transformation. Since image feature recognition cannot be carried out during positioning, it can only be performed after the workpiece has traveled a certain distance on the assembly line. When the positioning is completed, the encoder value of the motor on the line can be cleared. The experimental robot image sample is shown in Figure 5.

Image recognition accuracy

To verify the accuracy of mobile robot visual image feature recognition under different methods, experiments were carried out comparing the visual attention recognition method [7], the multi-visual-feature acquisition and fusion recognition method [8], the visual perception and tactile prior knowledge learning recognition method [9], and the research method.

Loss value of edge feature extraction of robot vision image

To verify the loss value of edge feature extraction of the mobile robot visual image under different methods, experiments were carried out with the visual attention recognition method [7], the multi-visual-feature acquisition and fusion recognition method [8], the visual perception and tactile prior knowledge learning recognition method [9], and the research method. The results are shown in Table 3. According to Table 3, when the number of images was 600, the loss value of edge feature extraction was 8.96% for the visual attention recognition method, 9.16% for the multi-visual-feature fusion recognition method, and 9.62% for the visual perception learning recognition method. The loss value of edge feature extraction under the research method was 6.38%, far lower than that of the other methods. This showed that the loss value of edge feature extraction of the research method was small.
Noise evaluation of robot vision image feature recognition To evaluate the noise of mobile robot visual image feature recognition under different methods, experiments and recognition methods [9] and research method were carried out using visual attention recognition method [7], multi visual feature acquisition and fusion recognition method [8], visual perception and tactile prior knowledge learning.The visual image feature recognition of mobile robots was completed on MATLAB simulation software, and the noise situation of visual image feature recognition of mobile robots was verified.A total of 50 evaluations were conducted, and the average error value and error equation average results of noise evaluation were obtained.Among them, Python will be used as a programming tool, and the operating system will utilize Windows XP.The results are shown in Table 4.To enhance the precision and steadiness of the technique, Gaussian noise with zero mean and standard deviation ranging from 1 to 6 is incorporated into the image.From Table 4, under different noise conditions, the mobile robot vision image feature recognition noise was the smallest, regardless of the error mean value or error variance, which was more conducive to image feature extraction. Conclusion This paper presented a feature recognition method of mobile robot visual image based on machine vision.The specific module of mobile robot visual feature recognition was designed.The mobile robot visual image was collected by Fourier series method, and the edge feature extraction of mobile robot visual image was completed according to Harris algorithm.SIFT feature points of mobile robot visual image were classified, and mobile robot visual image feature recognition was realized through machine vision.The following conclusions could be drawn from the experiment: a.When the number of images was 600, the accuracy of mobile robot visual image feature recognition was 96.98%.It showed that the proposed method had high accuracy of image feature recognition. b.When the number of iterations was 500, the visual image feature recognition time of this method was 3 min, indicating that the robot visual image feature recognition efficiency of this method was high.c.When the number of images was 600, the loss value of mobile robot vision image edge feature extraction was 6.38%.It showed that the research method had lower loss value of image edge feature extraction. d.Under different noise conditions, the mean and variance of the error in mobile robot visual image feature recognition were the lowest, which showed the low image noise of research method. In summary, improved visual feature recognition technology can make robots more intelligent, improve their autonomy and efficiency, and enable robots to better understand and adapt to human environments, especially in the fields of service robots and collaborative robots.At the same time, this study will promote technological innovation, with significant economic benefits and broad social impacts. 
Chapter 2 introduces the methods of robot visual feature recognition, covering the method and functional design of robot visual feature recognition and mobile robot visual image acquisition based on Fourier series. On this basis, Chapter 3 studies mobile robot visual image feature recognition based on machine vision, including edge feature extraction and classified SIFT feature point acquisition from mobile robot visual images, in order to better support the research on mobile robot visual image feature recognition.

Figure 2: Design of robot visual feature recognition function
Figure 3: Visual task flow chart of mobile robot
Figure 4: Process of image feature recognition method for mobile robot vision based on machine vision
Figure 5: Mobile robot
Table 1: Feature matching experiments of the original SIFT algorithm and the classified SIFT feature point method
Table 3: Loss value of image edge feature extraction under different methods
Table 4: Noise evaluation of image feature recognition under different methods
2023-12-22T16:07:42.968Z
2023-12-20T00:00:00.000
{ "year": 2023, "sha1": "67ec46967178f121a397ebc41ac8c22c52307db1", "oa_license": "CCBY", "oa_url": "https://publications.eai.eu/index.php/ew/article/download/3450/2781", "oa_status": "GOLD", "pdf_src": "Anansi", "pdf_hash": "09b6077692c348df1c2a7437ade408a00e2293b6", "s2fieldsofstudy": [ "Computer Science", "Engineering" ], "extfieldsofstudy": [ "Computer Science" ] }
53348087
pes2o/s2orc
v3-fos-license
Attending physicians ’ attitudes towards physical exam and bedside teaching Background: Medical education has experienced a gradual shift away from traditional bedside attending rounds, from 75% of rounds occurring at bedside in the 1960s to about 30% today. Aim: To examine attending physicians’ attitudes towards bedside teaching and physical exam. Methods: Anonymous survey of medical attendings in six academic hospitals. Results: 77% of respondents (n=97) completed the survey. The vast majority (89%) of attendings concurred that physical diagnosis skills are essential, felt that more emphasis on bedside teaching is needed (77%), and believed that bedside teaching should be a priority (71%). Additionally, 87% reported that bedside rounds are important to patient care. Yet, only 31% reported conducting teaching rounds at bedside. Finally, only 5% of attendings trained outside the US expressed fear of poor teaching performance in front of house staff, compared with US trained attendings (28%, p=0.023). Conclusion: Physicians are spending less and less time at the bedside, particularly those trained within the United States. At a time when the U.S. healthcare system is struggling to meet the increasing demand of escalating costs and declining patient satisfaction, the return to bedside teaching may be a surprisingly simple and untapped solution. Introduction Within the last 50 years, medical education has experienced a gradual shift from traditional bedside attending rounds to the conference room and hallway.In the 1960s, it was reported that 75% of attending rounds occurred at the bedside (Reichsman, Browning, & Hinshaw, 1964), while recent studies estimate this percentage to be between 8% and 28% (Gonzalo, Masters, Simons, & Chuang, 2009;Williams, Ramani, Fraser, & Orlander, 2008).This decline seems to have shifted the focus away from the patient, depriving the new generation of physicians of an opportunity to observe and learn physical exam techniques, interpersonal skills, and professionalism from senior role models at the bedside (Williams et al. 2008).Commonly cited reasons for this decline include time constraints, noisy wards, greater work demands, logistics of hospital organization, overreliance on technology, and perceived patient discomfort (Crumlish, Yialamas, & McMahon, 2009;Dewji, Dewji, & Gnanappiragasam, 2015;Gonzalo et al. 2009;LaCombe, 1997;Nair, Coughlan, & Hensley, 1997;Nair, Coughlan, & Hensley, 1998;Rogers, Carline, & Paauw, 2003;Williams et al. 2008).One study utilizing qualitative methods to explore these barriers, reported that physicians who are performing bedside evaluation are expected to possess an almost unrealistic level of diagnostic skill, making it daunting for the average physician to rise to the challenge (Ramani, Orlander, Strunin, & Barber, 2003).It has been suggested that the less time one spends at bedside, the more uncomfortable they are conducting rounds, and therefore the less time they spend doing so (Thibault, 1997). Despite these barriers, a recent review revealed that patients, learners, and teachers all seem to favor bedside teaching (Peters & ten Cate, 2014).Learners still believe that bedside learning is important for professional development and for learning core clinical skills such as patient-physician communication, physical examination, and clinical reasoning (Crumlish et al. 2009;Gonzalo et al. 2009;Nair et al. 1997;Rogers et al. 2003;Williams et al. 
2008).Moreover, the belief that bedside presentations are stressful for patients has not been supported (Simons, Baily, Zelis, & Zwillich, 1989).Studies in both outpatient and inpatient settings reveal that patients exposed to bedside presentations are more likely to report favorable perceptions of their care, perceive greater educational benefit, and would prefer subsequent presentations by the bedside (Fletcher, Rankey, & Stern, 2005;Lehmann, Brancati, Chen, Roter, & Dobs, 1997;Majdan, Berg, Schultz, Schaeffer, & Berg, 2013;Nair et al. 1997;Rogers et al. 2003;Wang-Cheng, Barnas, Sigmann, Riendl, & Young, 1989) .Overall, patients have been found to be very satisfied with bedside teaching (Peters & ten Cate, 2014).In essence, bedside teaching fosters the patient-physician relationship, as it can provide a profound catalytic experience, allowing physicians to immerse themselves in the depths of human illness (Qureshi & Maxwell, 2012). Today's ubiquitous medical technology further shifts the focus away from the traditional examination skills cultivated at bedside (Dewji et al. 2015;Qureshi & Maxwell, 2012), undermining the diagnostic utility of the physical exam (Crumlish et al. 2009;Gonzalo et al. 2009). Of additional concern, is the lack of growth in bedside clinical skills over time along with the striking absence of progress in the skill level of medical professionals, including students, fellows, and faculty (Vukanovic-Criley et al. 2006;Mangione, 2001;Mangione, Burdick, & Peitzman, 1995;Mangione & Nieman, 1997;Mangione & Neiman, 1999).Indeed, previous research has shown that half of practicing hospitalists do not feel confident in teaching physical examination skills (Crumlish et al. 2009).Similarly, studies of cardiac examination competency reveal that bedside skills do not improve significantly through different levels of training (Mangione, 2001;Mangione & Nieman, 1997;Vukanovic-Criley et al. 2006).In one such study that included cardiology students, fellows, and faculty, only cardiology fellows tested significantly better than students and residents (Vukanovic-Criley et al. 2006).Likewise, when Mangione et al. (1995) had internal medicine and family medicine residents listen to 12 prerecorded common cardiac events, both groups recognized only 20% of events on average, improving only slightly with training level (Mangione & Nieman, 1997). Despite the popularity of simulations to supplement learning (Peters & ten Cate, 2014;Qureshi & Maxwell, 2012), certain components of the physical examination have great diagnostic utility and cannot be learned any other way than at the bedside.For example, the presence of a third heart sound (Drazner, Rame, Stevenson, & Dries, 2001), indicative of severe hemodynamic dysfunction (Tribouilloy et al. 2001), is the most important predictor of postoperative complications, and can only be assessed at the bedside (Goldman et al. 1977).Other experiences that cannot be simulated and adequately learned outside of bedside teaching include the tactile sensation of hepatosplenomegaly and joint effusions (Qureshi & Maxwell, 2012). 
In addition to declining skills, studies also point to a lower perceived utility of the physical exam.In a study of attending physicians at an academic teaching center, authors found a significant negative correlation between the mean overall perceived utility of the physical exam and increased training level (Wu, Fagan, Reinert, & Diaz, 2007).They also reported a positive correlation between self confidence in performing the exam and increased training levels -with attending physicians reporting a mean level of confidence 3.9 out of 5 (Wu et al. 2007).When Fagan and colleagues (2006) used the same methodology to survey fourth-year medical students (MS4s) at United States and Dominican Republic medical schools, they found that students at the Dominican school reported significantly greater confidence in their overall physical examination skill as compared to US students.The students at the Dominican school also had more positive views about the diagnostic utility of the physical examination (Fagan, Lucero, Wu, Diaz, & Reinert, 2006).These findings could be related to the increased availability of diagnostic technology in the US, which results in decreased emphasis on the physical examination as a tool. Such data is troublesome, given the fact that a well-performed physical examination can provide over 20% of the data necessary for patient diagnosis (Campbell & Lynn, 1990).With the rising costs of healthcare and the relatively low cost of a physical examination, compared to imaging and laboratory studies, perhaps going back to the bedside would not only be prudent, but ultimately more economical (Peixoto, 2001).In an effort to contribute to a greater understanding of these issues, we decided to investigate whether there is an association between faculty attitudes toward the physical exam and frequency of bedside teaching during attending rounds. Methods Design and participants: A cross-sectional anonymous survey study was conducted.Surveys were distributed via interoffice mail and during grand rounds to all attendings taking part in the inpatient internal medicine teaching services of six hospitals in the New York metropolitan area during the 2009-2010 academic years.Attendings without inpatient teaching responsibilities during the year were excluded from the analysis. Attendings targeted for participation received a cover letter in inter-office mailbox introducing the survey, with the questionnaire attached.To ensure anonymity and unbiased responses, surveys were administered before grand rounds, during which attendings were instructed to drop off completed surveys into a box located right outside the grand rounds hall.Those physicians receiving the survey via inter-office mail also received an instruction letter detailing where to drop off completed surveys at their convenience.Attendings were given one month to return the surveys. The study was approved by the Northwell Health Institutional Review Board. Measures: A survey was developed for the purpose of this study, with questions about self-confidence adapted from Wu et al. (2007), and attitude questions adapted from Gonzalo et al. (2009) and Crumlish et al. (2009).Data collected included demographics (gender, ethnicity, field of expertise, whether they were trained in the US or abroad, number of years working as an attending physician), number of months per year working on a teaching service, average length and frequency of rounds, and time spent at different locations during rounds. 
In addition, participant self-rated confidence in performing an overall physical exam and eleven specific skills were assessed, along with Likert type items rating the importance of bedside teaching and the diagnostic utility of physical exam. Data analysis: Chi-square and Fisher's Exact Tests were used for hypothesis testing, using a 2-tailed analysis with an alpha of 0.05 as the criterion for significance.Attendings who did not report "length of service per year" (n=14) were excluded from the analysis. Results Out of the 126 surveys distributed, 97 were returned completed, with an overall response rate of 77%.After excluding attendings who did not report length of service, 83 respondents were included in the analysis.A summary of participant demographics can be found in Table 1. With regard to confidence in physical exam skills, most attendings felt very confident (48%), or somewhat confident (43%), in their overall diagnostic abilities.Respondents reported more confidence in the detection of ascites (93%) and interpretation of systolic murmur (92%), and were least confident with regard to the fundoscopic exam (21%) and distinguishing between mole and melanoma (59%).There was no significant association between physical exam skills and time spent at bedside during rounds. Table 2 presents overall attending bedside and physical exam attitudes.The majority of attendings reported that bedside rounds are important for teaching purposes (92%) and patient care (87%).The vast majority (89%) of attendings reported that physical diagnosis skills are essential, more emphasis on bedside teaching is needed (77%), and that bedside teaching is a priority (71%).Just under half reported their belief that patients (49%) and house staff (42%) prefer bedside teaching.Additionally, 87% reported that bedside rounds are important to patient care.Notably, 17% feared poor teaching performance in front of patients and house staff (22%). Interestingly, significant results were found for fear of poor teaching by location of training (Table 3).Only 5% of attendings trained outside the US expressed fear of poor teaching performance in front of house staff, while significantly more attendings trained within the US (28%) reported this fear (p=0.023).A significantly greater proportion of attendings with training outside the US also reported making bedside teaching a priority (91% v. 63%, p=.015). Significant associations were found between the amount of time spent teaching at the bedside and attitudes towards bedside teaching (Table 4).Attending beliefs that physical exam can only be taught at the patient's bedside was greater for those spending more than 25% of time at the bedside (p=0.027).Similarly, there was a significant association between the amount of time spent teaching at the bedside and attending beliefs that patients prefer bedside rounds (p=0.007), as well as beliefs that bedside rounds are important for teaching purposes (p=0.018). Attendings spending more time teaching at bedside felt strongly that more emphasis on bedside teaching was needed in the curriculum (p=0.003) and that bedside teaching is a priority (p=0.003). Discussion Our study results support well-established data published almost two decades ago which indicated that, from the patient's perspective, bedside case presentations were "at least as good as conference-room presentations, and perhaps preferable" ( Lehmann et al. 
1997).Yet the present study is in line with previous research indicating that attendings spend only about 30% of teaching rounds at the bedside (Gonzalo et al. 2009).This percentage is likely to be an overestimate, as other studies found significant differences between estimated and actual times that physicians spend at the bedside (Miller, Johnson, Greene, Baier, & Nowlin, 1992).When comparing studies reporting physician estimates to direct observation, the percent of time devoted to bedside examination during teaching rounds dwindled to 11-17% (Crumlish et al. 2009;Miller et al. 1992). Among the most interesting of our results, are the differences found between attendings trained within and outside the US.Very few other studies have examined attitudes toward bedside teaching across different countries, often with methodological limitations (Mangione, 2001).In our study, when comparing physicians trained within and outside the US, both groups concur completely with the belief that physical diagnosis skills yield clinically relevant information and are required to make the correct diagnosis.However, the vast majority (91%) of physicians trained outside the US make bedside teaching a priority, compared with less than 65% of physicians trained within the US (p<.01).Furthermore, while both groups feel equally confident in their ability to lead bedside teaching rounds, US trained attendings report a significantly greater level of fear of exhibiting poor teaching performance in front of house staff than non-US trained physicians (28% vs. 5%, p<.03). These findings provide some context for much of the findings in the medical education literature with regard to international differences between curriculum approaches.For instance, a study examining cardiac auscultation teaching among trainees in the US, UK, and Canada found that British and Canadian trainees received significantly more training of this skill in medical school and residency as compared to those in the US (Mangione, 2001).In addition, both British and Canadian trainees are expected to undergo an objective assessment of physical examination skills.British trainees improved the most in assessing cardiac auscultation and Canadian trainees had the greatest accuracy. As common with survey studies, the reliance on subject recall represents one limitation of the current study and may have partially accounted for the findings.In addition, while the survey was designed to be anonymous, social desirability bias may have resulted in an overestimation of physician time spent at bedside.Notable strengths of this research lie in the fact that we sampled across six different academic hospitals across two urban and suburban boroughs of New York City, and attained an excellent response rate (77%). Conclusion The literature consistently reports, over several decades, that bedside teaching is greatly valued by physicians at all levels, from medical students to attendings.Yet, physicians are spending less and less time at the bedside.This is particularly true of physicians trained in the United States.At a time when the US healthcare system is struggling to meet the increasing demands of escalating costs and declining patient satisfaction, the return to bedside teaching may be a surprisingly simple and untapped solution. 
Take Home Messages The vast majority of attendings report that bedside teaching is important for teaching purposes and patient care.Findings indicate that only 31% of teaching rounds are held at bedside.Attendings trained outside the United States felt more confident in their ability to perform bedside teaching in front of house staff and were more likely to make bedside teaching a priority. Notes On Contributors REVEKKA BABAYEV, MD, received her medical doctorate from Albert Einstein College of Medicine and went on to do her Internal Medicine residency and nephrology fellowship at Columbia University Medical Center.She is currently working at Stamford hospital as a clinical nephrologist and is interested in medical education.
2018-10-20T07:26:19.718Z
2016-06-15T00:00:00.000
{ "year": 2016, "sha1": "1dc497077e587a02bea156b8a7677bb35be4d85a", "oa_license": "CCBY", "oa_url": "https://www.mededpublish.org/MedEdPublish/PDF/394-1007.pdf", "oa_status": "GOLD", "pdf_src": "Anansi", "pdf_hash": "1dc497077e587a02bea156b8a7677bb35be4d85a", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Psychology" ] }
247504644
pes2o/s2orc
v3-fos-license
AN ANALYSIS OF ACTIVITY AND LEARNING OUTCOMES OF ELEMENTARY SCHOOL STUDENTS IN THE NEW NORMAL ERA This article is motivated by the Covid-19 pandemic which has an impact on learning. The purpose of this article was to describe the activities and learning outcomes of class III students in theme 1 sub-theme 1 MI NU Tasywiquth Thullab Salafiyah in carrying out learning in the new normal era. This type of research is qualitative phenomenology, the data collection techniques used are observation and interviews, the subject and setting in this study there are six students and one third grade teacher, at school and at each student's home. The data analysis technique used is according to Milles & Huberman consists of collecting data, condensing data, presenting data, drawing and verifying conclusions. The result of this article is that all indicators of learning activities have not been implemented, indicators that have been implemented by students are proposing ideas and working independently. Meanwhile, indicators that have not been implemented are asking questions, providing ideas and suggestions, and expressing opinions. Student learning outcomes show that all learning objectives consist of aspects of knowledge, attitude aspect, and skills aspect have been seen in the six students. INTRODUCTION The COVID-19 pandemic has had an impact on all aspects of people's lives, including education. The Ministry of Education and Culture has issued several circulars regarding learning policies that must be carried out by educational institutions. Distance learning, online learning, online learning, and so on are terms used during this Covid-19 pandemic. The usual student learns with teachers and friends directly, must do learning from home with the assistance of parents. These learning activities are still ongoing until the new normal era. The existence of this certainly affects the activities and learning outcomes of students. Of course, there are differences in learning activities before and after the Covid-19 pandemic. Based on the fact that there are currently many obstacles experienced by teachers, students, and parents in carrying out learning during the pandemic until now. 1 This certainly makes student learning activities disrupted and of course, has an impact on student learning outcomes Given that this is the first time this has happened in the field of education. So, no special preparation has been made to deal with this problem, especially in the field of education. Each student has different readiness and adaptation while carrying out learning during the Covid-19 pandemic. 2 Not infrequently students also feel bored in learning, especially students in lower grades who need more assistance in learning. 3 The boredom experienced by these students makes it more difficult for students to understand the material because students must be able to build their knowledge independently. Even though we know that elementary schoolage students still need assistance in learning when compared to students at the secondary level. Mastery of the use of technology is also not evenly distributed among all Indonesian people so the delivery of material through the media has not been felt optimally. As has been said before, there is no readiness in the world of education during the Covid-19 pandemic, now what can be done is to adapt to keep the learning going. Seeing this, we can certainly say that student learning activities are disrupted while carrying out learning during the Covid-19 pandemic. 
This is because learning is less effective and students do not understand the material taught by the teacher indirectly. 4 As we know that before the Covid-19 pandemic, teachers, and students could interact directly in learning, besides that, students could also have discussions while studying. So that the learning experience obtained is more meaningful and easily understood by students. Meanwhile, currently what students can do is receive the material provided by the teacher through easyto-use media. With help from their parents or closest people, students try to understand the material so that they can do the assigned task given by the teacher. Meanwhile, the teacher is only able to direct and remotely supervise student learning activities. So that the teacher cannot directly assess the student learning process, especially in the aspects of attitudes and skills. Seeing the current phenomenon, of course, makes the teacher less than optimal when giving and explaining the material, so the learning objectives have not been fully achieved. 5 The teacher is not maximal in explaining the material, making students feel heavy in accepting and understanding the learning material. 6 Student learning activities that should be carried out freely so that students can develop their potential are limited due to these constraints, as well as student learning outcomes. This can be seen when students do learning activities from home, students only receive material either by reading books, watching videos, listening to teacher explanations through videos, and so on. Interaction that should go both ways, for now, it cannot be done. Students who are expected to be more active in learning activities, at this time are only able to be active in understanding the material and doing assignments from the teacher. This condition hinders student learning activities so it affects the student learning assessment process. The average level of student concentration on online learning is in a low category, while the average level of student motivation for online learning is in the medium category. 7 MI NU Tasywiquth Thullab Salafiyah is one of the schools in Kudus Regency that also carries out online learning during the COVID-19 pandemic which has been running for approximately one year. Based on the facts, it is known that student activities and learning outcomes have also decreased when compared to learning before the pandemic. Seeing the existing phenomena, and supported by previous studies, the researchers deemed it necessary to conduct this research precisely at MI NU Tasywiquth Thullab Salafiyah. The purpose of this study itself was to describe the activities and learning outcomes of third-grade students in theme 1 subtheme 1 MI NU Tasywiquth Thullab Salafiyah in carrying out learning in the new normal era. As we know, this goal was created because of a phenomenon that occurred at MI NU Tasywiquth Thullab Salafiyah and several other schools that were known through previous studies. To find out the answers to these objectives, the researchers used indicators of learning activities and aspects of assessment which were used as guidelines in conducting research. RESEARCH METHODS This research is phenomenological qualitative research. The subjects in this study were six students who had been selected and categorized based on their academic ability and socioeconomic conditions of the family, as well as one third-grade teacher. 
The setting of this research is at MI NU Tasywiquth Thullab Salafiyah, Kudus Regency, and at the home of each student who is a subject. Data collection techniques used are observation and interviews. Observations were made to observe student learning activities during learning activities from home in the new normal era of learning. In addition, this observation technique is also used to observe student learning outcomes based on the assessment made by the teacher. Interviews regarding student learning activities were conducted with the third-grade teacher. The data analysis technique used follows Milles & Huberman and consists of four stages, namely data collection, data condensation, data presentation, and drawing and verifying conclusions. RESULT AND DISCUSSION Class III student learning activities, theme 1 subtheme 1, in carrying out learning in the new normal era at MI NU Tasywiquth Thullab Salafiyah In a learning activity, there are indicators through which the activities carried out by students during learning can be observed. According to Hamzah B. Uno, there are five indicators of learning activities, namely submitting opinions, providing ideas and suggestions, expressing opinions, submitting thoughts, and working independently. These five indicators were used by the researchers as a guide in conducting observations and interviews. Following are the results of these observations and interviews: Submit an opinion When the researcher made observations, it was found that the six students never asked questions, either at the beginning or at the end of the lesson. In addition, these students also did not ask questions about the relationship between today's material and the previous one. Questions about the difficulties or gaps students had in understanding the material given by the teacher during learning activities from home were never asked by students. This is supported by the results of interviews with the teacher, who said: "With the help of parents, students can understand the material, so there are no questions asked by students" (Thursday, 23 December 2021). Provide suggestions and ideas It is known that students are willing to accept the material given by the teacher, as shown when students were willing to listen to the learning videos given by the teacher. However, there are no activities that indicate that the six students expressed their hopes or complaints during the learning activities from home. Thus, the students' attitude does not reflect the willingness and courage to submit proposals or ideas to improve the quality of learning activities in this new normal era. This is in line with what the teacher said during the interview, namely: "Nothing, students never complain about learning activities because maybe students are helped by their parents, so they understand the material I give" (Thursday, 23 December 2021). Express opinions The six students who were the subjects in this study were known to have not been able to express their opinions when learning from home took place. There are no specific activities that reflect the emergence of this indicator. However, even so, the students were still willing and able to collect the singing videos directed by the teacher, so that the students' courage to express their opinions could be seen a little through these activities.
Likewise with the answers given by the teacher during the interview, which said that: "Students never directly express their opinions, but when students want to express their thoughts when working on assignments, they indirectly express their opinions" (Thursday, 23 December 2021) The attitudes and responses of students seen when singing through the video can also be said to be good. Submitting thoughts In general, the six students have been able to do the assignments given by the teacher, both in writing and orally, or as a work of art. Through the tasks carried out by students, students' thoughts are channeled, so that it can be said that students have been able to submit their thoughts. This also agrees with what the teacher said during the interview, namely "As I said before, students convey their knowledge through the tasks that the students do, so the intensity is almost every day according to study hours" (Thursday, 23 December 2021) In terms of linking the knowledge, they have with the learning material these students have not been able to do it because there are no specific activities that reflect this attitude. Respect for other people's thoughts is also evident, as evidenced by the absence of negative comments when other students collect their assignments in the study group. Work independently As we know that currently, the independent attitude that students must have is necessary when learning from home in the new normal era. So that even though students are assisted by their parents or people around them, they must be more independent when carrying out learning activities. When making observations, the six students were able to prepare books and stationery when they were going to study and do assignments. In addition, these students are also able to open the pages of books directed by the teacher, although with constant guidance from parents. Students' responsiveness is also reflected in doing assignments, although still need guidance from parents or those around them. This is supported by the opinion of the teacher in the interview, "I think everyday students can be said to be independent in learning, but they still need direction and supervision from their parents" (Thursday, 23 December 2021). As we know that currently in learning students are required to be more active, so that interaction does not go one way, but two ways. So that the delivery of information and knowledge is not only by the teacher, but students are also expected to be able to convey their thoughts, opinions, and knowledge. Schools as a vehicle for students to develop their potential, for now, must be replaced by homes. The role of teacher must also be assisted by parents in supervising and directing students when studying from home. Students feel bored, because while studying from home students only receive material and then do assignments. The movement of students is limited because students cannot interact directly with teachers and friends. See the results of observations and interviews that have been carried out, as well as the results of data analysis. It can be seen that the six students who were the subjects in this study had not been able to achieve the overall indicators of learning activities. Two indicators can be implemented by students, namely submitting thoughts and working independently. 
Meanwhile, the other three indicators, namely asking questions, providing ideas and suggestions, and expressing opinions have not been able to be carried out by students during learning activities from home. This of course can happen because while carrying out learning activities from home in this new normal era, students only receive all learning materials from the teacher and then do the assignments. during online learning, student learning activity cannot be fully achieved according to the indicators of learning activity. 8 All questions, curiosity, complaints, and expectations of students when participating in learning activities from home have not been able to be conveyed. Although the subject of six students had been selected and categorized based on academic ability and socioeconomic conditions of the family, the learning activities carried out by these students can be said to be the same. The Covid-19 pandemic has had a huge impact on the learning process, which is usually the teaching and learning process carried out directly, has now changed to online which makes students feel bored and bored while carrying out learning. 9 The boredom is caused by the limitations for students in expressing and releasing whatever knowledge they already have in learning activities. For approximately one-year students carry out learning either by distance, online, or online. Although the government had once allowed the reopening of offline education services, conditions did not allow this plan to be realized. This certainly hinders the growth and development of students in terms of thinking and learning. If we look again at the results of the observations and interviews, we know that the three indicators that students have not been able to implement are broadly about students' ability to express their opinions. While at school this ability can still be encouraged by the teacher so that students' courage in expressing their opinions can still be trained properly. However, during learning from home, student learning activities are influenced by the presence and support of parents or those around them. So, it can be said that parental factors, for example working parents or other things make parents not able to fully guide students while studying from home. 10 The teacher is not able to convey the material and guide students to the maximum, so students feel heavy in capturing the material. Therefore, the role of parents is very important. The lack of parental roles in helping children learn has an impact on children's psychology. 11 In the current situation, however, for learning to continue, learning from home is the best solution. Although the indicators of student learning activities have not been able to be implemented completely, there are obstacles both from the students themselves and from outside the students themselves. Teachers who should be able to see the student learning process, for now, can only see assignments or student work done at home. Learning, which should prioritize the process that students go through, cannot be carried out because of the limited movement of teachers in observing the student's learning process. 12 Not all parents realize that the learning process or activity that students go through is the most important thing so that students have a meaningful learning experience. 
Therefore, it is necessary to have good cooperation between related parties, namely schools, teachers, and parents so that learning in this new normal era can be carried out better when compared to the Covid-19 pandemic. awareness of the importance of education for every student is needed, especially in this new normal era. This is because students have not been able to interact directly with friends and teachers, as well as the limitations of students in exploring their abilities and potential. Class III student learning out-comes theme 1 sub-theme 1 in carrying out learning activities in learning in the new normal era at MI NU Tasywiquth Thullab Salafiyah Talking about learning activities, cannot be separated from the term learning outcomes. In learning, learning outcomes consist of three aspects, namely knowledge, attitudes, and skills. Before the Covid-19 pandemic, teachers could easily assess these three aspects because the teacher saw firsthand the learning process carried out by students. However, in the current situation, where teachers have not been able to meet face-to-face with students, the assessment is of course done by looking at reports on assignments or the work done by students while studying from home. In this case, the researcher makes observations by seeing whether or not the learning objectives are visible so that the assessment is still carried out by the class teacher III. The following are the results of the researcher's observations based on the assessment made by the teacher: Knowledge aspect Assessment in this aspect is very easy and commonly done, grade III teachers conduct assessment assessments by looking at and assessing student assignments. The learning objectives in this aspect are to identify simple patterns, identify the characteristics of living things, write names and symbols for numbers, identify good habits before and after eating, solve problems with addition in layers, and identify how to be grateful. From all these learning objectives, it has been seen in the six students who were the subjects in this study. Through the tasks given by the teacher, students have been able to do it, but still, need assistance and direction from parents or people around them. Attitude aspect In contrast to the knowledge aspect, the assessment of the attitude aspect when students study from home is a separate obstacle for teachers. Teachers can not see directly how the attitude of students in learning and doing assignments. Therefore, as much as possible the teacher conducts an attitude assessment which is also through oral assignments, for example, such as making singing videos. However, the teacher also continues to formulate learning objectives on the attitude aspect. The goal is to follow the whole series of activities learning, practicing simple patterns in songs, carrying out teacher directions to do assignments, showing polite behavior during learning activities, and showing gratitude. Based on the assessment by the teacher, the six students have shown learning objectives in the attitude aspect. So that during learning from home the attitude of students can be seen even with the help of parents. Skill aspect This aspect relates to physical ability and muscle work. During the learning from home, the teacher had a little difficulty in assessing this aspect. As with the attitude aspect, the teacher cannot see student movements directly. So that as much as possible the teacher remains to carry out the assessment. 
Based on the results of observations, the learning objectives of making stories based on serial pictures, solving story problems related to addition, and writing the characteristics of living things were observed in all six students. The learning objective of singing the gecko-on-the-wall song was observed in four of the students. Meanwhile, the learning objective of singing the chicks song was likewise observed in four of the students. If we look at the results of the observations described above, in general, the learning objectives have been seen in the six students who were the subjects in this study. This is due to the presence of parents or people around the students when the students do the assignments given by the teacher. Even so, teachers who already know this still appreciate the assignments and work done by students while carrying out learning activities from home. Recall that the 2013 curriculum places more emphasis on assessing the learning process. However, when students do learning activities from home, it is quite difficult for teachers to make assessments while students go through the learning process. As noted earlier, students are assisted by their parents in doing assignments. This indicates that the assignment or work done by students is not purely the result of student thinking. However, the teacher still appreciates the assignments of these students. The existence of assistance from parents or people around them can have an unfavorable impact on the future development of students. Parents certainly want their children to get the best grades, but parents are also less aware, or even unaware, of the importance of the process that students must go through in learning. The learning experience that students get will certainly have a different impact if students are continuously assisted by their parents. So the role of parents at this time is certainly needed, but it must still be considered that it is also important for students to go through the real learning process, even though it is not as optimal as when learning directly at school. The indicators of learning activities were not fully implemented by students when carrying out learning from home in this new normal era, and yet all learning objectives appeared to be achieved by the six students. This statement indicates that there is no influence between learning activities and learning outcomes. Student learning outcomes during online learning have increased from the average learning outcomes. 13 Online or distance learning by giving assignments to students can improve student learning outcomes. 14 This phenomenon is a reality in the current situation and, on the surface, it is not too bad for students. However, in the future, when students have to return to face-to-face learning, students will have to adapt again, carrying out learning activities and doing assignments with only the direction and guidance of the teacher. However, even so, when students learn from home, parents are aware of their role in helping students learn. Parental assistance is seen when helping children with task difficulties, explaining material that students do not understand, and helping them respond to online learning. 15 CONCLUSIONS Based on the results of the study, it can be concluded that the learning activities of class III students in theme 1 sub-theme 1 in carrying out learning in the new normal era at MI NU Tasywiquth Thullab Salafiyah have not implemented all indicators of learning activities.
The indicators that have been implemented are submitting thoughts and working independently. Meanwhile, indicators that have not been implemented are asking questions, providing ideas and suggestions, and expressing opinions. The learning outcomes of class III students in theme 1 subtheme 1 in carrying out learning activities in the new normal era at MI NU Tasywiquth Thullab Salafiyah show that all learning objectives, consisting of the knowledge, attitude, and skills aspects, have been seen in the six students. Given this conclusion, the school and the government should consider the implementation of face-to-face learning in this new normal era; it can be carried out in waves adjusted to school conditions. This aims to ensure that there is harmony between learning activities and student learning outcomes. Further research can also be done by other researchers who are interested in this research topic. The next researcher can conduct research when students carry out face-to-face learning, once it is allowed in this new normal era.
Burst fracture treatment caudal to long posterior spinal fusion for adolescent idiopathic scoliosis utilizing temporary lumbo-pelvic fixation with restoration of lumbar mobility after instrumentation removal Background Thoracolumbar burst fractures are common traumatic spinal fractures. The goals of treatment include stabilization, prevention of neurologic compromise or deformity, and preservation of mobility. The aim of this case report is to describe the occurrence and treatment of an L4 burst fracture caudal to long posterior fusion for adolescent idiopathic scoliosis (AIS). Case report A 15-year-old girl underwent posterior spinal fusion from T3–L3. The patient tolerated the procedure well and there were no complications. Seven years postoperatively, the patient reported to the emergency department with lumbar pain after a fall from height. A burst fracture at L4 was diagnosed and temporary posterior instrumentation to the pelvis was performed. One year postinjury, the hardware was removed with fixation replaced only into the fractured segment. Flexion/extension radiographs revealed restored motion. Conclusions Treatment of fractures adjacent to fusion constructs may be challenging. This case demonstrates that avoiding fusion may lead to satisfactory outcomes and restoration of mobility after instrumentation removal. Introduction Thoracolumbar burst fractures are a common type of traumatic spinal fracture, accounting for more than 2/3 of thoracolumbar fractures [1]. The goal of treatment for thoracolumbar burst fractures includes stabilization with or without decompression to deter progressive deformity and neurologic compromise [2]. However, debate still exists over the optimal treatment for this kind of fracture and how to preserve the most mobility [3]. Instrumentation plays a role in restoring immediate stability and correcting the deformity. Past research has demonstrated that mobility in the affected segment is more likely to be preserved in patients who do not undergo fusion for thoracolumbar burst fracture [4]. Several studies demonstrated that posterior fixation without fusion may reduce operation time and blood loss, and help avoid donor site complications [5][6][7]. However, in the long term, solid fusion may be required to prevent instrumentation failure, and no data exist regarding fractures adjacent to long segment fusion [8]. In fact, Chou et al. [9] reported a higher rate of revision surgery in patients who did not undergo fusion. Nevertheless, a meta-analysis by Diniz et al.
[4] found no statistically significant difference in the rate of reoperation. Given the concern for both achieving stability and maintaining mobility, the question of whether to fuse the spine in posterior instrumentation of burst fractures is controversial. Traumatic vertebral fracture caudal to long segment posterior fusion and instrumentation for AIS is a rare occurrence and a limited number of cases have been described in the literature [10][11][12][13]. In these cases, the question of whether to treat the fracture with fusion becomes even more complex, given the balance between protecting the previous construct and preserving mobility in patients who may have already lost spinal motion in previous operations. Here we present the case of a patient who sustained a fracture at L4 following T3-L3 posterior instrumented fusion for AIS. Plain radiographs completed in 2016 demonstrated a 49° dextroscoliosis of the thoracolumbar spine centered at T9/T10 and a 30° levoscoliosis of the lower lumbar spine centered at L4 (Fig. 1). She did not take any medications, is a nonsmoker with no comorbidities, and has a BMI of 21. The patient underwent uncomplicated posterior spinal fusion from T3-L3 (Fig. 2). The surgery was well tolerated and the patient was deemed stable for discharge on the 6th day postoperatively. Case presentation Seven years postoperatively, the patient presented to the emergency department with severe low back pain that began when she fell. The patient complained of lumbar back pain exacerbated with movement and alleviated by immobilization. She denied any paresthesia or sensorimotor deficits in the extremities. Computed tomography imaging and magnetic resonance imaging demonstrated a burst fracture of the L4 vertebral body with 9 mm of retropulsion and focal narrowing of the central canal at that level (Fig. 3). The fracture was believed to be unstable due to the magnitude of vertebral body destruction and its location caudal to a long fusion, although the posterior ligamentous complex was intact. An attempt at brace treatment was unsuccessful, with inability to stand and mobilize due to unrelenting pain despite multimodal pain management. The patient was taken to the operating room 2 days after the trauma for fracture fixation by posterior segmental spinal instrumentation from L2 to the ilium without fusion. The decision was made to instrument to the pelvis in order to maintain alignment in the absence of interbody support or corpectomy. There were no complications during the surgery and imaging studies upon discharge showed adequate sagittal and coronal alignment with maintenance of adequate lumbar alignment (Fig. 4). Fourteen months after the L4 burst fracture surgery, the patient was evaluated for continued axial lumbar spine pain and stiffness that was not relieved by medication and physical therapy. The patient desired instrumentation removal, and CT showed complete healing of the fracture. Plain radiographs completed at that time demonstrated her scoliosis fixation construct with burst fracture fixation and extension to the pelvis, with a fracture of L4 which appeared likely healed, without any loosening or fracture of the instrumentation (Fig.
5). Therefore, the patient underwent elective removal of segmental spinal instrumentation at L5, S1, and the pelvis, with L3-4 fusion. The screws in L5, S1, and the ilium were removed bilaterally, as well as all connectors. Bilateral pedicle screws were placed in L4, and rods with connectors and set screws were placed. The posterior elements at L3 and L4 were decorticated and allograft bone graft was placed for fusion. The decision was made to fuse to the fractured segment due to the high possibility of progressive kyphosis across the injured disc and cranial endplate of the fractured level. The patient tolerated the procedure well and was discharged on post-op day 3. On postoperative follow-up the patient reported 100% improvement in the perception of stiffness and denied any pain or complications at that time. Lateral flexion-extension lumbar radiographs demonstrated retained mobility in L4-5 and L5-S1, with 39.6° of lordosis from L4-S1 on extension which reduced to 20.5° of lordosis on flexion from L4-S1 (Fig. 6). Discussion Fractures adjacent to fusion constructs are challenging to treat. Although proximal fractures are common following spinal deformity surgery, few case reports discuss post-traumatic vertebral fracture caudal to the lowest instrumented vertebra (LIV) of a previous posterior instrumentation for AIS [10][11][12][13]. Of these 4 case reports, 3 implemented fusion in their constructs [11][12][13] while one did not mention it [10]. In this case, a posttraumatic L4 burst fracture occurred almost 7 years after a T3-L3 posterior instrumentation for AIS. The location of this injury is determined by the mechanical load distribution of the spine, and tremendous forces can be imparted on the remaining unfused spine after long fusion. When a normal spine is erect, 80% to 90% of the axial compressive load is transmitted through the anterior column and the remaining force is absorbed by the posterior joints and muscles [14]. In fact, in vertebral trauma, the most commonly affected segment is the thoracolumbar junction, where the highest load transmission is present [1]. However, when the spine is instrumented at the thoracolumbar junction, where the articulation between the stiff and mobile segments exists, the load-distribution pattern is changed [11]. In our patient, due to the T3-L3 posterior instrumentation, the junction between the mobile and stiff segments was pushed to L3-L4, making L4 more susceptible to a post-traumatic fracture. An additional reason could be the protection of the T3-L3 region by the instrumentation and fusion mass, together with the increased mobility of the adjacent segments above and below after a long spinal segment arthrodesis [10, 15, 16]. As for the management of thoracolumbar burst fractures, the addition of fusion with posterior instrumentation is a debated topic. For its advocates, fusion is thought to prevent fatigue and failure of the construct [4]. However, it is an additional step in surgery which has certain costs in morbidity, financial expense, and permanent spinal stiffness [4]. A meta-analysis by Diniz et al. [4] comparing spinal fixation with and without fusion for thoracolumbar burst fractures showed a higher operative time and estimated blood loss in the fusion group (p < .01), with no statistically significant difference in the rate of postoperative fixation failure or kyphosis correction [4]. These findings are supported by another meta-analysis done by Lan et al.
[3], which added that there was no statistically significant difference in postoperative pain, but a higher rate of donor site-related complications in the fusion group (p < .01) [3]. Furthermore, another finding that may support nonfusion treatment in thoracolumbar burst fractures is the preservation of segmental mobility, which was shown to be better in the no-fusion group (p < .01) [3, 4]. In addition, in 2019, the Congress of Neurological Surgeons recommended the omission of fusion in instrumentation of thoracolumbar vertebral fractures (grade A recommendation) due to the absence of any additional benefit and its association with increased operative time and estimated blood loss [2]. Conclusion In this reported case, an L4 burst fracture caudal to an AIS fusion construct was treated with temporary spanning instrumentation and subsequent removal, which led to the recovery of full lower lumbar spinal mobility and the absence of any perceived postoperative stiffness or pain. Additional studies examining patients with traumatic fracture adjacent to long fusion constructs may provide further guidance on the safety of this technique. Long-term follow-up will also be needed to ensure ongoing mobility and satisfactory alignment of the unfused segments. Patient informed consent statement Complete written informed consent was obtained from the patient for the publication of this study and accompanying images. Declarations of Competing Interests One or more authors declare potential competing financial interests or personal relationships as specified on required ICMJE-NASSJ Disclosure Forms. Fig. 4. Anteroposterior and lateral plain radiograph after management of the L4 burst fracture. Fig. 5. (A-B) Anteroposterior and lateral plain radiograph showing no implant-related complication and a healed L4 vertebral body. Fig. 6. (A-B) Anteroposterior and lateral plain radiograph film after removal of the posterior instrumentation and (C-D) dynamic radiographs showing restoration of motion at L4-S1.
Sources of Inaccuracy in Photoplethysmography for Continuous Cardiovascular Monitoring Photoplethysmography (PPG) is a low-cost, noninvasive optical technique that uses change in light transmission with changes in blood volume within tissue to provide information for cardiovascular health and fitness. As remote health and wearable medical devices become more prevalent, PPG devices are being developed as part of wearable systems to monitor parameters such as heart rate (HR) that do not require complex analysis of the PPG waveform. However, complex analyses of the PPG waveform yield valuable clinical information, such as: blood pressure, respiratory information, sympathetic nervous system activity, and heart rate variability. Systems aiming to derive such complex parameters do not always account for realistic sources of noise, as testing is performed within controlled parameter spaces. A wearable monitoring tool to be used beyond fitness and heart rate must account for noise sources originating from individual patient variations (e.g., skin tone, obesity, age, and gender), physiology (e.g., respiration, venous pulsation, body site of measurement, and body temperature), and external perturbations of the device itself (e.g., motion artifact, ambient light, and applied pressure to the skin). Here, we present a comprehensive review of the literature that aims to summarize these noise sources for future PPG device development for use in health monitoring. Introduction Remote and continuous/intermittent monitoring (RCIM) has proven to be a promising route to deliver preventative care by reducing both the death rate and burdens placed on the healthcare system [1][2][3]. One emerging RCIM technique frequently being used to monitor wellness is photoplethysmography (PPG). PPG works by illuminating the skin (commonly the finger, wrist, forearm, or ear) with light and collecting the transmitted or reflected light with a nearby detector. The collected light varies in intensity and has a pulsatile component, often called the AC component, and a quasi-DC component. The variation in the quasi-DC component is due to many factors: the optical properties of the tissue, average blood volume, respiration, vasomotor activity, vasoconstrictor waves, Traube Hering Meyer waves, and thermoregulation [4][5][6][7][8][9][10][11][12][13][14]. The common pulsatile ("AC"), change in the PPG is usually the variation associated with arterial blood volume. As the systolic and diastolic pulse travel through an artery or arteriole, the properties of the pulse itself and the compliance of the vessel lead to a change in vessel diameter and consequently a change in blood volume. This correlates with a change in light detected by a photodiode after illumination and hence a change in the voltage or current generated by the photodetector. Changes in erythrocyte orientation can also lead to changes in optical transmittance, further modifying light detected by a photodiode as a function of blood volume [15]. Over an entire cardiac cycle, if the quasi-DC baseline light signal from the other tissue parameters is removed, this leads to the AC PPG waveform, which is attributed primarily to the cardiac pulse. This pulse is often inverted and displayed as seen in Figure 1a. 
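To make the decomposition described above concrete, the short sketch below separates a sampled trace into a quasi-DC baseline and an inverted pulsatile (AC) component, and then forms the first and second derivatives discussed next. It is only a minimal illustration: the 100 Hz sampling rate, the 0.5 Hz low-pass corner used to approximate the baseline, and the synthetic input signal are assumptions chosen for demonstration, not a prescribed processing pipeline.

```python
import numpy as np
from scipy.signal import butter, filtfilt

FS = 100.0  # assumed sampling rate in Hz

def decompose_ppg(raw, fs=FS):
    """Split a raw PPG trace into a quasi-DC baseline and a pulsatile (AC) component.

    A low-pass filter (<0.5 Hz) approximates the slowly varying baseline
    (respiration, vasomotor activity, thermoregulation, etc.); subtracting it
    leaves the cardiac-synchronous AC waveform, inverted so that increased
    blood volume appears as an upward pulse, as described above.
    """
    b, a = butter(2, 0.5 / (fs / 2), btype="low")
    quasi_dc = filtfilt(b, a, raw)
    ac = -(raw - quasi_dc)
    return quasi_dc, ac

def ppg_derivatives(ac, fs=FS):
    """Velocity (VPG) and acceleration (SDPPG/APG) plethysmograms via finite differences."""
    vpg = np.gradient(ac) * fs     # first derivative with respect to time
    sdppg = np.gradient(vpg) * fs  # second derivative with respect to time
    return vpg, sdppg

if __name__ == "__main__":
    # Synthetic stand-in for a measured trace: a 75 BPM pulse riding on a slow baseline drift.
    t = np.arange(0, 10, 1 / FS)
    raw = 1.0 + 0.02 * np.sin(2 * np.pi * 0.2 * t) - 0.01 * np.sin(2 * np.pi * 1.25 * t)
    quasi_dc, ac = decompose_ppg(raw)
    vpg, sdppg = ppg_derivatives(ac)
    print(ac.shape, vpg.shape, sdppg.shape)
```

In practice, the filter order and corner frequency would be tuned to the device and to which of the quasi-DC contributors one wishes to retain or remove.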
In addition to the possibility of gathering clinical information from the PPG waveform itself, some have used its derivatives to gather information including the first derivative known as the velocity plethysmograph (VPG, Figure 1b) and the second derivative known as the second derivative photoplethysmograph or acceleration plethysmograph (SDPPG or APG, Figure 1c) [16]. Beyond fitness and heart rate monitoring, the primary medical use of the PPG has been focused on obtaining information about the cardiovascular system towards cardiovascular disease (CVD) diagnosis and treatment [17,18]. CVD is a class of chronic conditions and is a general term for those diseases that affect the heart or blood vessels, and include (but are not limited to): coronary artery disease, cardiomyopathy, heart failure, arrhythmia, myocardial infarction, and peripheral artery disease [19]. CVD is often associated with a build-up of fatty deposits inside the arteries (atherosclerosis) and an increased risk of blood clots. It is the number one cause of death globally, contributing to more than 17 million deaths [20]. Cardiovascular disease is currently diagnosed or monitored through noninvasive means using a variety of approaches depending on the specific manifestation.
These include: PPG, pulse oximeter, blood pressure cuff, Holter monitor electrocardiagram (ECG), ECG during a stress test, computerized tomography (CT) scans, ultrasound imaging, and magnetic resonance imaging (MRI) [21]. These approaches are often used in combination with monitoring blood biomarkers [22,23]. PPG systems developed for remote and wearable use are typically for general wellness and fitness. This precludes it from being prescribed for medical use at home. The blood pressure cuff, ECG patch, and Holter monitor are also not often used for long-term remote monitoring [24]. Additionally, these tests are rarely administered preventatively, despite research which concludes that preventative testing could reduce deaths by up to 25% [1]. PPG can fill this gap if a sufficiently accurate and precise device is developed. As depicted in Figure 1, a tremendous amount of information can be extracted from the PPG and its derivative waveforms. Every feature labeled in Figure 1 has been proposed for use to assess cardiovascular health [22,[25][26][27]. Specifically, the systolic peak can be used for heart rate, the dicrotic notch and the area of the curve before and after the notch are used for stroke volume, slope transit time can be used for hypertension, the first derivative parameters are largely used to assess blood velocity, and the five points in the second derivative are used ratiometrically to assess vascular health and risk for cardiovascular disease [22,[28][29][30]. Additionally, some parameters in the literature such as pulse transit time (PTT), which is used to determine pulse wave velocity (PWV) and estimate blood pressure without a cuff, requires extraction of the time delay from two PPG waveforms or from an ECG and PPG waveform [31,32]. Overall, the literature has demonstrated the potential diagnostic and prognostic strength for the PPG; however, the PPG features can only be utilized if the waveform is of a high quality with high signal-to-noise ratio (SNR). PPG-based RCIM devices that are U.S. Food and Drug Administration (FDA) cleared or approved and can accurately and consistently record clinical parameters in a sufficiently diverse population for true health monitoring are scarce. Table 1 summarizes the existing PPG-based RCIM devices and their FDA status. All six identified devices with FDA status appear to be able to provide patients and providers with data for oxygen saturation (SpO2), respiration rate, and pulse rate. The oldest device in Table 1 is Equivital™'s EQO2 Lifemonitor, a device worn in a chest harness that also uses ECG. In some cases, FDA approval is only for when the wearable is used within a software suite or healthcare framework. This is the case with the Samsung Gear S2, which is FDA cleared to monitor heart rate toward detecting atrial fibrillation when done with the LIVMOR Halo™ Detection System. The indications for use in these devices are very significant advancements in remote monitoring, but still lag in the potential prognostic capabilities of the PPG. Numerous non-FDA cleared/approved fitness devices exist and can estimate heart rate by quantifying the number of systolic peaks in a period of time, but a single parameter limits the amount of extractable information. Additionally, there are many reports of inaccuracy in these devices [33]. The most popular of these devices are listed in Table 1. The difficulty in determining features in PPG devices lies in the numerous sources of noise that can impede the output of the PPG. 
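Two of the simpler quantities mentioned above, heart rate from the systolic peaks and a pulse transit time between two simultaneously recorded waveforms, can be sketched as follows. This is a rough illustration rather than a validated feature extractor: the sampling rate, the minimum peak spacing, and the use of cross-correlation as a stand-in for foot-to-foot or ECG-to-PPG timing are all assumptions, and the noise sources discussed below are exactly what makes such extraction difficult on real wearable data.

```python
import numpy as np
from scipy.signal import find_peaks

FS = 100.0  # assumed sampling rate in Hz

def heart_rate_bpm(ac_ppg, fs=FS):
    """Estimate heart rate by counting systolic peaks.

    Peaks must be at least 0.33 s apart (~180 BPM ceiling) and stand above
    the median of the trace; both thresholds are illustrative choices.
    """
    peaks, _ = find_peaks(ac_ppg, distance=int(0.33 * fs), height=np.median(ac_ppg))
    if len(peaks) < 2:
        return float("nan")
    mean_interval_s = np.mean(np.diff(peaks)) / fs  # mean beat-to-beat interval
    return 60.0 / mean_interval_s

def pulse_transit_time_s(proximal, distal, fs=FS, max_lag_s=0.5):
    """Estimate PTT as the lag maximizing cross-correlation between two PPG sites.

    A crude stand-in for the foot-to-foot or ECG-R-wave-to-PPG timing used in
    the PTT/PWV literature; assumes synchronized, equally scaled channels.
    """
    lags = np.arange(1, int(max_lag_s * fs))
    corrs = [np.dot(proximal[:-lag], distal[lag:]) for lag in lags]
    return lags[int(np.argmax(corrs))] / fs

if __name__ == "__main__":
    t = np.arange(0, 30, 1 / FS)
    prox = np.sin(2 * np.pi * 1.2 * t)            # ~72 BPM surrogate pulse
    dist = np.sin(2 * np.pi * 1.2 * (t - 0.15))   # same pulse arriving 150 ms later
    print(f"HR ~ {heart_rate_bpm(prox):.0f} BPM, PTT ~ {pulse_transit_time_s(prox, dist):.2f} s")
```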
These sources of error pertain to variation within and across individuals (e.g., skin tone, obesity, age, and gender), physiology (e.g., respiration, venous pulsation, body site of measurement, and body temperature), and external perturbations of the device itself (e.g., motion artifact, ambient light, and applied pressure to the skin). In addition, the hardware and software within the device itself can contribute to the noise. These many sources of noise create limitations in the application of PPG to derive advanced physiological parameters. To the authors' knowledge, there is no work that comprehensively summarizes the literature surrounding these sources of noise and how they affect the waveform of the PPG and its derivatives. Thus, the factors identified in this report may be useful to guide future PPG system designs for true health monitoring. True health monitoring should consider not only the obvious noise sources for commercial fitness devices such as motion artifacts and ambient light, but some of the sources of variability found in diverse patient populations that are prone to cardiovascular disease. These often-overlooked disparities with diversity (e.g., skin tone and obesity) are now becoming more documented in the literature [34][35][36]. Furthermore, this work could assist in defining the parameters that would be needed for human trials to validate the efficacy of constructed devices across variable populations. Individual Variations in the Human Population This section consists of a discussion surrounding works that have explored normal variation within the human population as a source of error or variance within PPG measurements. The variations within the human population to be discussed are skin tone, obesity, age, and gender. These categories largely exist as a spectrum, such as skin tone or age. As such, the effect these categories have on PPG accuracy can be similarly broad. Skin Tone The most common way to characterize skin tone is via the Fitzpatrick Scale [37]. Shown in Figure 2, the Fitzpatrick Scale ranges from 1 to 6, where 1 is near-albino and 6 is highly pigmented skin. An individual's skin tone, and thus Fitzpatrick category, is correlated to the amount of eumelanin in their epidermis [38]. While this scale was devised to discuss skin UV-sensitivity, it is often used within the biophotonics community due to the effect eumelanin has on how light travels through skin. This is due to the high absorbance of eumelanin with a peak in the ultraviolet wavelength (220 nm) and a steady decay through the visible wavelength region. Figure 3a illustrates not only this decay across the visible range but also the high, two to three orders of magnitude offset in absorption of epidermal melanin as it compares to the absorption of bulk dermis, which has no melanin. Since the absorption of epidermal melanin is much higher in the visible region of light and much lower in the near infrared region (NIR), the NIR range of light will travel further through pigmented skin. However, many PPG devices use green light (~550 nm). The decision to use green light in most PPG systems is primarily driven by the relatively high absorption spectrum of hemoglobin in this range (Figure 3b), which is the main absorber in blood and thus can potentially give a strong pulsatile signal with changes in blood volume. For those with a lighter skin tone, this enables a higher signal-to-noise ratio for determining heart rate: the primary parameter derived by PPG. 
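A toy calculation helps illustrate the wavelength/skin tone trade-off developed in this subsection: if the epidermis is treated as a single absorbing layer, a Beer-Lambert estimate of the light surviving a double pass falls off exponentially with the product of melanin absorption and path length, and melanin absorption itself decays steeply with wavelength. The coefficients, melanin fractions, epidermal thickness, and the neglect of scattering in the sketch below are all simplifying assumptions for illustration only; they are not measured optical properties.

```python
import numpy as np

def melanin_absorption_mm1(wavelength_nm, scale=50.0, exponent=3.5):
    """Melanin absorption (per mm) modeled as a power-law decay with wavelength.

    The power-law form mirrors the steady decay through the visible range
    described above; 'scale' and 'exponent' are placeholder values.
    """
    return scale * (wavelength_nm / 500.0) ** (-exponent)

def epidermal_transmission(wavelength_nm, melanin_fraction, epidermis_mm=0.1):
    """Beer-Lambert fraction of light surviving a double pass through the epidermis.

    melanin_fraction stands in for Fitzpatrick type (higher = darker skin);
    scattering and deeper layers are ignored in this toy model.
    """
    mu_a = melanin_fraction * melanin_absorption_mm1(wavelength_nm)
    return np.exp(-2.0 * mu_a * epidermis_mm)  # factor 2: in and back out (reflectance mode)

if __name__ == "__main__":
    for frac, label in [(0.02, "light skin"), (0.15, "dark skin")]:
        for wl in (530, 660, 880):  # green, red, NIR
            print(f"{label:10s} {wl} nm -> {epidermal_transmission(wl, frac):.2f}")
```

Even this crude model reproduces the qualitative behavior described above: green light is attenuated far more strongly than NIR in highly pigmented skin, while the difference is modest for lighter skin.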
However, the wavelength range needs to be optimized for both skin tone and blood absorption, particularly as more advanced parameters are derived from PPG signals, as the absorption of green light by melanin in individuals with a darker skin tone limits the light penetration to the subcutaneous tissue where the blood is located. Preejith et al. confirmed that skin tone matters when using green light only. The analysis of their PPG biosensor (single source 535 nm LED) showed that the SNR of the heart rate measurement correlated significantly with skin tone, indicating that a darker skin tone yields higher error in the measurement [40]. Specifically, Preejith et al. developed a dorsal wrist-based heart rate monitor that utilizes 535 nm light and validated its performance against 256 subjects (54 with "fair" skin, 181 with "moderate" skin, and 21 with "dark" skin). The device features a single green LED and two photodetectors on opposing sides. Ground truth was obtained using a Masimo Radical-7, which is a commercial handheld fingertip pulse oximeter that uses seven wavelengths of light across the visible/NIR range to determine cardiovascular parameters.
The Masimo Radical-7 was placed on the index finger of the same hand where their device was worn. Their results indicate a greater than 10 times increase in absolute error with the darker skin tone, calculated by taking the absolute value of the difference between their sensor and the Masimo Radical-7. There was an error of 1.04 beats per minute (BPM) for "fair" skinned individuals and an error of 10.90 BPM for "dark" skinned participants, citing the lack of usability of their device for dark skinned individuals. Hermand et al. determined the same trends as Preejith et al. while using the PPG heart rate monitor Polar OH1. They analyzed its performance across 70 subjects ranging from Fitzpatrick 1 to Fitzpatrick 6 during various levels of exercise and motion. Hermand et al. had participants run, bike, and walk while wearing the Polar OH1 on their upper arm and the Polar H7 chest strap (ECG based device) paired with the Polar M400 watch as the ground truth. The Polar OH1 consists of 6 green LEDs forming a circle around a single photodiode. In determining heart rate, it was found that bias (defined as the mean difference between the OH1 and ground truth) increased with darker skin (p < 0.001), and heart rate accuracy was positively correlated to skin tone (p < 0.05). However, the authors also mentioned that the lack of environmental control leads to increased humidity and possibly increased vasodilation [41]. In addition to these examples, numerous other studies that stratify results against Fitzpatrick classification identify the same trend; errors in determining heart rate from wearable PPG devices that use primarily green light as their source are increased in individuals with dark skin tones due to the high absorption caused by increased amounts of epidermal melanin [42][43][44][45]. Lastly, Bent et al. published an experimental analysis of error observed in optical heart rate sensors manufactured by Apple (Apple Watch 4), Fitbit (Fitbit Charge 2), Garmin (Garmin Vivosmart 3), Xiaomi (Xiaomi Miband 3), Empatica (Empatica E4), and Biovotion (Biovation Everion) as a function of skin tone [46].
By collecting data from 56 patients (34 female, 22 male, 18-54 years old, and at least 8 participants in each Fitzpatrick classification) when they were at rest and exercising (elevating heart rate to 50% of maximum via a treadmill), it was found that there was no statistically significant relationship between measured heart rate from the wearable and an ECG (Bittium Faros 180) reference. Interestingly, this conclusion contrasts with the previous discussion. It is possible that the lack of increase in error observed is due to the already large error present in the results reported; the mean absolute error is approximately 9 BPM across all skin tones. The authors do not discuss why the results presented conflict with those previously reported. Additionally, the wearable devices used in this study utilized red and near infrared light, which can more easily penetrate the epidermis. The lower absorption of melanin using light sources at higher wavelengths can improve the signal for individuals with a high Fitzpatrick classification. Mohapatra et al. demonstrate that this is the case with a multiwavelength PPG device placed on the central dorsal wrist. The device comprised two 590 nm (yellow/orange) LEDs and a single 520 nm LED symmetrically opposite to the 590 nm LED on the vertical axis. There was approximately 0.7 cm center to center source/detector separation for each LED.
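Because the results that follow are reported as perfusion index and SNR, a brief sketch of how such quantities are commonly computed from a recorded trace is given below. The 0.5 Hz baseline cut-off, the assumed 0.8-3 Hz cardiac band for the SNR estimate, and the synthetic input are illustrative assumptions; published devices and studies may define these metrics somewhat differently.

```python
import numpy as np
from scipy.signal import butter, filtfilt, welch

FS = 100.0  # assumed sampling rate in Hz

def perfusion_index(raw, fs=FS):
    """Mean modulation AC/DC: pulsatile amplitude relative to the quasi-DC level."""
    b, a = butter(2, 0.5 / (fs / 2), btype="low")
    dc = filtfilt(b, a, raw)
    ac = raw - dc
    return (np.percentile(ac, 95) - np.percentile(ac, 5)) / np.mean(dc)

def ppg_snr_db(raw, fs=FS, cardiac_band=(0.8, 3.0)):
    """Crude SNR: power in an assumed cardiac band (0.8-3 Hz) vs. power outside it."""
    f, pxx = welch(raw - np.mean(raw), fs=fs, nperseg=int(8 * fs))
    in_band = (f >= cardiac_band[0]) & (f <= cardiac_band[1])
    signal_p, noise_p = np.sum(pxx[in_band]), np.sum(pxx[~in_band])
    return 10.0 * np.log10(signal_p / noise_p)

if __name__ == "__main__":
    t = np.arange(0, 60, 1 / FS)
    raw = 1.0 + 0.01 * np.sin(2 * np.pi * 1.2 * t) + 0.002 * np.random.randn(t.size)
    print(f"PI = {perfusion_index(raw):.3f}, SNR = {ppg_snr_db(raw):.1f} dB")
```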
With 20 subjects ranging from Fitzpatrick II to Fitzpatrick IV, the perfusion index (AC/DC), pulsatile strength, and SNR were all found to increase when data collected with the 590 nm LED were analyzed. For Fitzpatrick IV individuals, the perfusion index increased between 1.2 and 7.1 times with the use of the 590 nm light, while pulsatile strength increased by 1.1 to 3.1 times and SNR increased by 1.3 to 2.6 times, although no statistical significance analyses were performed [51]. Fallow et al. performed a similar study by using more wavelengths and including exercise [52]. Specifically, Fallow et al. measured reflectance PPG above the radial artery (4 cm from the wrist), which likely included a signal from both the arterioles and possibly the artery depending on the wavelength of light used, in 22 individuals with varying skin tone from Fitzpatrick scales I to V. By obtaining a resting signal and then having participants exercise via forearm flexion and extension, researchers were able to determine the SNR of participants with four different wavelengths (blue-470 nm, green-520 nm, red-630 nm, NIR-880 nm) at rest and after exercise. Mean modulation, often termed the perfusion index, defined as the ratio of AC/DC, was significantly lower in Fitzpatrick V individuals than others for all wavelengths (p < 0.001). In the resting condition, green light had larger mean modulation (p < 0.001) than other wavelengths, and after exercise, blue and green had greater SNR ratios than red or infrared (p < 0.001). These results indicate that overall, the mean modulation goes down with increasing Fitzpatrick scale, and with exercise one would expect greater blood volume changes causing a higher signal for the more absorbing blue/green wavelengths, at least for the source to detector distances used in the study. Further research should be conducted that analyzes the fidelity of the PPG waveform under these conditions, which would allow interpretation of which wavelengths can be used to derive more complicated parameters from PPG [52]. Furthermore, since no details are given on the source/detector alignment of these devices, different source/detector alignments should be assessed as that can dramatically affect the performance of PPG devices depending largely on the wavelength of light due to light-tissue interactions. This is seen in Mendelson et al., who analyzed the effect of source-detector separation on pulse oximetry with red (660 nm) and infrared (950 nm) light [53]. By collecting the SpO2 of seven Caucasian, lower Fitzpatrick scale, individuals via a Hewlett-Packard HP47201A transmittance eight-wavelength ear oximeter and a reflectance (red and NIR) oximeter on the left thigh, the effect of source/detector separation on the reflected light was determined [53].
It was found that the relative amplitude of the AC component to the entire signal (AC + DC) increased as LED/photodiode spacing increased from 4 to 11 mm, although no statistical analyses were presented. Subsequent studies should analyze the relative performance of various wavelengths on different skin types, but ensure that data at each wavelength are collected at an optimized source/detector separation. This could first be facilitated in silico via modeling using Monte Carlo simulation [54,55]. Furthermore, one component of optimizing source/detector separation is the depth of the target vessel and intervening pulsating arterioles, which may vary across individuals and within individuals with body location and due to factors such as level of obesity [56].

Obesity

Obesity, determined by having a body mass index greater than 30, affects 40% of the United States population and leads to cardiovascular disease, hypertension, and type 2 diabetes [57]. It is caused by a combination of physical, behavioral, environmental, and genetic factors that lead to the accumulation of body fat [48,58]. In particular, obesity can lead to changes in skin thickness, blood flow, and oxygen saturation. This affects the optical properties of skin in addition to the distance light has to travel to reach a target vessel or vessels [48,59,60]. The variation of BMI across individuals is thus a potential source of variation for PPG measurements. While, to the authors' knowledge, publications that experimentally and explicitly demonstrate the effect of obesity or BMI on the PPG waveform are limited, we will explore works which suggest that there would be a substantial effect [56,[59][60][61][62][63][64][65][66][67][68][69][70]. Blood flow regulation and oxygen saturation are both known to deviate with respect to obesity and BMI [71]. Individuals with obesity experience increased cutaneous blood flow to meet the oxygenation needs of tissue. However, the blood flow of adipose tissue generally decreases with obesity, both after a meal and during a fasting state [61,72]. Conversely, Chin et al. used laser Doppler flowmetry and dynamic capillaroscopy to measure cutaneous blood flow at the nailfold of children of comparable age, sex, and skin temperature, but with different levels of obesity, finding significant increases in baseline cutaneous flow with obesity [61]. Dermal blood cell flow has also been shown to significantly increase in the forearm of overweight individuals (BMI 29.1 ± 2.7 kg/m²) compared to non-obese individuals (BMI 20.4 ± 1.9 kg/m²). In adults, while some studies show that at rest there is no significant change in dermal capillary density, the majority of findings indicate that dermal capillary density does, on average, correlate negatively with increasing BMI [62][63][64][65]. This effect of obesity could lead to a decrease in the dominant "DC" component of a PPG waveform due to the increased blood volume of the obese [61][62][63][64]. The literature has conflicting results on capillary recruitment, defined as the percentage increase in capillary density during venous congestion, in the obese, but this could be due to the populations studied. For instance, Czernichow et al. reported that capillary recruitment in the skin, after adjustment for age, sex, mean arterial pressure and fasting glucose, was higher in overweight (defined as BMI 27.9 ± 2.7 kg/m²) as compared with lean individuals, and that the obese individuals were normotensive, nondiabetic, male and female subjects [63]. However, the findings of De Jongh et al.
showed capillary recruitment to be decreasing rather than increasing with obesity (defined as BMI > 30 kg/m²), but the study was done only on women with a mean age of 38.9 ± 6.7 years, and the obese subjects were both hypertensive and had impaired insulin sensitivity [65]. In this study, it was acknowledged that capillary recruitment was negatively correlated with blood pressure but positively correlated with insulin sensitivity [57]. Although higher capillary density on average appears to correlate negatively with BMI, the subject's BMI level, gender, age, as well as their metabolic syndrome need to be considered when assessing capillary recruitment. A hypothetical increase in capillary recruitment due to an increase in BMI would similarly decrease PPG signal intensity, as is hypothesized for dermal capillary density. Lastly, oxygen saturation of hemoglobin has consistently been shown to be inversely associated with BMI across various populations [66,67,73]. The increased absorption of deoxygenated hemoglobin compared to that of oxygenated hemoglobin in the far-red region could decrease the SNR of PPG measurements that use those wavelengths of light. The cumulative effect of the various vascular changes in the obese on the PPG waveform remains to be seen. One such change where the impact on PPG has been observed in silico is that of skin thickness. Perhaps most detrimental to the PPG waveform is skin thickness, as it is directly correlated to BMI and can dampen PPG signal amplitude [56,70]. For example, the epidermal thickness of the volar forearm has been shown to be higher in overweight normotensive nondiabetic individuals compared to age- and sex-matched healthy controls [72]. Note that skin thickness in the literature has consistently shown an increase with obesity, but the increase is body site-dependent. To the authors' best knowledge, literature does not exist that demonstrates this relationship for the finger, a common location for pulse oximeters. Elsewhere, this thickening leads to a dramatic effect on the physiology and structure of the skin, consequently reducing the signal strength and resolution as photons encounter more possibilities for scattering, absorption, and autofluorescence with thicker tissue [74]. For example, the increase in skin thickness affects the vessel depth, and Boonya-ananta et al. used Monte Carlo simulations that predicted a 40% loss of PPG signal amplitude due to this effect in obese individuals, specifically on the wrist as the simulated radial artery increases in depth from 2.5 to 3.5 mm [74]. This loss of PPG signal amplitude is significant, as it makes it more difficult to quantify and identify features within a waveform. One observation regarding the skin of the obese which may serve to increase PPG signal intensity is the increase in transepidermal water loss (TEWL) as BMI increases [69]. At longer wavelengths, the absorption of water is more dominant than that of hemoglobin (Figure 3b). Overall, the morbidly obese exhibit a reduction in water, measured as an increase in TEWL compared to normal-BMI subjects [69]. This in turn could increase the signal component in the NIR and IR range due to the reduction in water molecules. Interestingly, however, the TEWL values in the epidermis of the face, forehead and abdomen decrease from normal to overweight but then increase from overweight to obese and morbidly obese [69].
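Returning to the skin-thickness effect noted above, a back-of-the-envelope sketch can illustrate why a deeper target vessel weakens the pulsatile signal. The single effective attenuation coefficient and the simple exponential round-trip path-length model used below are illustrative assumptions only and are far cruder than the Monte Carlo simulations cited above.

import math

def relative_amplitude(depth_mm: float, mu_eff_per_mm: float = 0.25) -> float:
    """Pulsatile amplitude relative to a vessel at zero depth, assuming light
    traverses roughly twice the vessel depth in reflectance mode (illustrative model)."""
    return math.exp(-mu_eff_per_mm * 2.0 * depth_mm)

shallow = relative_amplitude(2.5)   # e.g., a radial artery 2.5 mm deep
deep = relative_amplitude(3.5)      # the same artery under thicker overlying tissue
print(f"fraction of signal retained: {deep / shallow:.2f}")  # about 0.61 with these assumed numbers

With the assumed coefficient, a 1 mm increase in depth removes roughly 40% of the pulsatile amplitude; the apparent agreement with the cited simulation is coincidental, since the real photon transport problem depends on wavelength, scattering, and tissue composition.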
Due to these inter-dependencies, measurements of capillary density, blood flow, and oxygen saturation in the obese made with optical signals should be validated against other, non-optical modalities before a definite conclusion can be made on how these parameters impact the PPG signal. While the exact cumulative effect of these BMI-related parameters on PPG remains unknown, Ferdinando et al. were able to identify obesity from PPG waveforms originating from the Liang et al. dataset [75,76]. Using k-nearest neighbors and support vector machines, Ferdinando et al. were able to identify five classes of obesity from PPG waveforms using 17 parameters derived from the decomposition of the PPG waveform into five lognormal functions. While arterial stiffness is mentioned as a potential cause of variations in the PPG waveform, the work does not discuss characteristics of the waveform itself, or how a waveform originating from an obese individual compares to one originating from a non-obese individual. Overall, the literature has shown that obesity dramatically affects physiological factors associated with PPG signal intensity and quality, including capillary density, capillary recruitment, blood flow, SpO2, TEWL, and skin thickness. These changes are summarized in Table 2. There are also in silico works suggesting that obesity can dampen PPG signal intensity. We believe that the aforementioned literature provides strong evidence that obesity, when assessed in combination with a subject's metabolic state, body location, gender and age, will likely influence PPG systems and can manifest itself as a significant noise source. The next section goes into more detail about the latter, chronological age.

Age

Aging leads to various anatomical and physiological changes that impact the ability to use PPG to assess cardiovascular health. These changes mostly occur in the vasculature. As arteries age, the tunica intima and tunica media layers within the arteries thicken with an increase in the number and density of collagen fibers [78]. The cross-linking of these fibers, along with fractured and fatigued elastin, leads to a loss of compliance and increased arterial stiffness [78]. Along with calcification, the result of these age-related changes is an increase in blood pressure, observed in older populations [79]. Additionally, the endothelium of the vessels thickens and develops irregularly shaped cells over time, which increases blood flow resistance [78]. PPG principally measures the response/elasticity of arteries to blood flow, and thus will change if the properties governing artery compliance change. Beyond vascular changes, skin thickness is another parameter that has a relationship with age. Once adolescence is reached, skin is known to thin as age increases [70,[80][81][82]. This relationship is maintained in the three primary layers of skin: epidermis, dermis, and hypodermis/subcutis. Thinning represents a decrease in the distance light has to travel before it interacts with vessels, possibly affecting PPG [70,[80][81][82]. Overall, these changes can manifest themselves in variations observed across PPG waveforms, either in the shape or amplitude of the waveform. There are many components of a PPG signal and its derivatives that the literature records as varying as a function of age [83]. There are differences in the timing of events, manifested as changes in parameters such as PTT, and in the relative amplitude of features such as the dicrotic notch.
Many of these result from changes in vessel compliance, as the decrease in distensibility of arteries leads to different changes in blood volume when compared to younger individuals. Using data from 93 individuals of various ages, Ahn et al. extracted features from fingertip-based PPG and the second-derivative PPG, acceleration plethysmography (APG, Figure 4), and correlated them to chronological age in order to determine what they defined as the vascular age index [83]. However, this is not to be confused with vascular age, which is a specific term used to guide risk assessment for CVD and is well known from D'Agostino et al.'s work, often referred to as the "Framingham study" [84]. The work of Ahn et al. more closely resembles a chronological age index, as they correlated PPG features to chronological age and not CVD risk [85]. The parameters listed in Table 3 attributed to Ahn et al. yielded a statistically significant, albeit poor, correlation with age [83]. Others have also reported on correlations involving the APG. The ratio b/a has been found to correlate positively with age, while c/a, d/a and e/a have all been correlated negatively with age [29,59]. Jayasree et al., Dutt and Shruthi, and Yousef et al. similarly report changes in the PPG and APG, such as an increase in the area under the systolic peak with increasing age, a decrease in time between the systolic and diastolic peaks with increasing age, and an increase in crest time as age increases [86][87][88]. While these works relate features within a waveform to age, they do not examine characteristics that are derived from multiple PPG waveforms, such as PTT and PWV.
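Because several of the age correlations above are expressed as APG fiducial ratios (b/a, c/a, d/a, e/a), the following minimal Python sketch shows one way such ratios could be computed from a single, clean PPG beat. The helper name, the smoothing choices, and the simplified peak/valley ordering are assumptions made for illustration rather than the procedure used in the cited works; real waveforms require careful fiducial-point detection.

import numpy as np
from scipy.signal import butter, filtfilt, find_peaks

def apg_ratios(beat: np.ndarray, fs: float) -> dict:
    """Return b/a, c/a, d/a, e/a for one clean PPG beat (hypothetical helper)."""
    b_f, a_f = butter(3, 8.0, btype="lowpass", fs=fs)    # smooth before differentiating
    apg = np.gradient(np.gradient(filtfilt(b_f, a_f, beat)))
    peaks, _ = find_peaks(apg)
    valleys, _ = find_peaks(-apg)
    # Simplified convention: a = first APG maximum, b = next minimum,
    # then c, d, e alternate maxima/minima that follow.
    a_idx = peaks[0]
    b_idx = valleys[valleys > a_idx][0]
    c_idx = peaks[peaks > b_idx][0]
    d_idx = valleys[valleys > c_idx][0]
    e_idx = peaks[peaks > d_idx][0]
    a = apg[a_idx]
    return {name: apg[i] / a for name, i in
            zip(("b/a", "c/a", "d/a", "e/a"), (b_idx, c_idx, d_idx, e_idx))}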
The timing of blood transit through vessels is an important measurement used to determine clinically relevant parameters such as heart rate variability and blood pressure. Caceres and Hasan used the first derivative of the PPG to create the "waveform transit time" index. This index utilizes an inverse linear relationship to correlate age to the manually estimated difference between the first local minimum (occurring at the systolic peak) and the next local maximum (occurring at the dicrotic notch) of the first derivative. This time difference was termed TT_W. The index was created using data from the right index finger of 230 Spanish subjects (134 male) ranging in age from 8 to 89 years old. It was found that TT_W decreases with age [89]. The authors state that this is a function of the time delay between the incident and reflected waves in a PPG, and is an easy way to determine PWV, which has been shown to increase as a function of age. Using the ears, fingers, and toes, Allen and Murray analyzed the changes in normalized PPG pulse width and PWV as a function of age. Corroborating other literature, an increase in pulse wave velocity with age is related to a diminished dicrotic notch. They also note the increase in the duration of the systolic rising edge (and thus the decrease in its slope) as a function of age. This is explained by the increase in resistance and decrease in compliance of older arteries [90]. PTT is inversely related to PWV; the same authors note in a different work that as age increases, PTT decreases. By analyzing the change in PTT as a function of age within a population of 134 healthy, Caucasian subjects (median age 43, total range: 13-72), it was found that there is a statistically significant decrease at the toes (r² = 0.48), fingers (r² = 0.26), and ears (r² = 0.15) [91,92]. Most of the listed changes are due to the cardiovascular system, as arteries change how they respond to a cardiac pulse as the vessel ages. However, the integumentary system also responds to aging. Skin thins as age increases beyond adolescence [70,80,93].
Keratinocytes become shorter, water content decreases, lipid content decreases, and most notably, collagen synthesis and turnover decrease [93]. This change in skin thickness decreases the distance light must propagate to yield a PPG signal, which, along with capillary depletion, can contribute to an increase in PPG signal strength. Leveque et al. studied the skin thickness and PPG signal of 69 individuals from 8 to 89 years old [94]. A Holtain skin caliper applied to the dorsal forearm showed a decrease in skin thickness with age, and an infrared PPG was used to determine PPG amplitude. The authors attribute the positive, direct relationship between PPG amplitude and age to capillary depletion, because similar skin thicknesses were observed between older participants and children despite an increased PPG amplitude in the older participants. However, the skin thickness data presented in this study are not consistent with those in other literature, so further work is required to be conclusive. Another study, by Hartmann et al., determined that there is no statistically significant change in PPG amplitude as a function of age; however, this work featured 36 individuals ranging in age from only 33 to 58. It is possible that a 25-year span may not be long enough to discern a significant change [95]. In a study with participants ranging in age from 30 to 60+, Yousef et al. found that there is no significant increase in pulse magnitude measured at the index finger with an increase in age. This work suggests that the decrease in vessel compliance will contribute more to variations in a PPG waveform than other effects of aging [88]. Overall, the impact of age-related changes in skin thickness on PPG may be dependent on the body site of measurement, and its significance is yet to be directly studied. However, if this effect is delineated and determined to be significant, it can be used in models that aim to determine blood pressure from PPG. Artificial intelligence and machine learning have been used to determine blood pressure from PPG and to mitigate the effects age has on the estimate. In 2010, Monte-Moreno utilized machine learning to determine blood pressure from a PPG obtained from subjects ranging from 9 to 80 years old. An algorithm that incorporates age into systolic and diastolic blood pressure determination was created. While this work utilized a population of individuals with good cardiovascular health, it was found to correlate with both systolic and diastolic blood pressure, with an r² of approximately 0.90 [92]. Suzuki and Oguri similarly used artificial intelligence to determine blood pressure from a cuffless monitor and incorporated age into their algorithm [96]. While the wide spectrum of ages can make age a difficult parameter to incorporate into models, gender is binary, making it easier to incorporate.

Gender

The physiological differences between men and women extend into cardiovascular health and are thus noticeable in the PPG waveform [97,98]. Phuong and Maibach noted that gender differences in average blood pressure and average heart rate indicate that baseline differences in vascular and cardiovascular parameters must be considered to determine cardiovascular health between males and females, which also extends into analyses of the PPG waveform [98]. These physiological baseline differences can skew PPG signals along with the interpretation algorithms used to determine cardiovascular health for each gender. Proctor et al.
reported that the average heart rate for males was 70-72 BPM and the average for females was 78-85 BPM in 16 endurance-trained men and 14 endurance-trained women [99]. Males have a 15-30% greater heart mass compared to females, with the result that female hearts must beat faster to maintain the same output [100]. This leads to dramatic changes in PWV and subsequently PTT. Regarding blood pressure, Reckelhoff reported that mean blood pressure was 6-10 mmHg higher in males than in pre-menopausal females. However, post-menopausal females have higher blood pressure than males, which can be attributed to arterial stiffness [101]. Additionally, most vessels have gender-dependent diameters. For example, the radial artery diameter differs between males and females, with males having a diameter of 2.76 ± 0.009 mm and females a diameter of 2.32 ± 0.07 mm at the same segment [102]. As expected, a larger target vessel will yield a PPG with greater signal resolution. These significant differences in artery diameter lead to significantly different radial artery flow rates as well, with flow rates of 21 ± 4 and 10 ± 1 mL/min for males and females, respectively [102]. Skin thickness is another gender-dependent variable known to affect signal strength (as mentioned previously), and is observed to be higher in men than women across all age ranges [97,103]. Shuster et al. measured skin thickness in 90 Caucasian men and 107 women and found that forearm skin thickness is greater in men than women, but no statistics were presented to explore statistical significance [103]. Each of the aforementioned parameters can influence the hardware chosen when designing a PPG system, to ensure normalization between measurements for males and females, as they directly translate to differences in the PPG signal. PWV is highly dependent on vascular stiffness and is observable through PPG. Ahimastos et al. studied the changes in arterial stiffness in pre- and post-puberty males and females by measuring arterial compliance and PWV [104]. It was discovered that during pre-puberty, females have lower arterial compliance and higher central and peripheral PWV, yielding stiffer arteries compared to males. Post-puberty, both males and females show an increase in arterial compliance, with no gender-dependent differences in that measurement. However, for females, PWV stayed roughly the same, while there was an increase in PWV for males. This increase in PWV originates from an increase in arterial stiffness for males, corresponding to a higher pulse pressure, while the female pulse pressure remained constant throughout pre- and post-puberty. Pulse pressure is dependent on both arterial stiffness and cardiac output. On the other hand, post-menopausal females experience an increase in arterial stiffness and pulse pressure. Dehghanojamahalleh et al. demonstrated variations in PPG morphology attributed to gender differences [105]. Interestingly, the direct influence of gender variation was only significantly different at upper peripheral measurement sites such as the hand and fingers, as opposed to lower extremities such as the ankle and feet. The study measured pulse arrival time and PTT, and both measurements showed dependence on gender. Pulse wave propagation delay between the genders indicates baseline differences in arterial stiffness, with women displaying a lower pulse arrival latency, indicating higher vascular stiffness [106,107]. A study on heart rate variability using PPG by Antelmi et al.
shows changes between genders across different age ranges [108]. The results presented show men as having greater low-frequency signal components and women as having greater high-frequency components. When comparing the accuracy of commercial devices, as done by Shcherbina et al., a significantly higher device measurement error is seen in males than females for all devices [109]. The devices under analysis include the Fitbit, Apple Watch, Microsoft Band, Samsung Gear, and Basis Peak watch, and the measurement metrics showing higher error for males include heart rate and maximal oxygen uptake. This provides insight into PPG signal variation due to varying deep internal vasculature leading to peripheral measurement sites. Accounting for the manifestation of these differences in a PPG is known to be an under-researched area [110,111]. For respiratory measurements, Nilsson et al. have reported that respiratory synchronous variation in the PPG signal is irrespective of gender, and thus no action is required [111]. Nowara et al. report insignificant differences in blood volume pulse SNR in iPPG (imaging PPG) between males and females [112]. More work should be conducted to evaluate methods of accounting for the propagation of gender-induced variation in applications of PPG beyond respiration. Table 4 summarizes the physiological data supporting the presence of these differences.

Physiology

While the previous section discussed variations across individuals, this section discusses the effect that physiology can have on the PPG waveform. We will discuss respiration, venous pulsations, body site of measurement, and local body temperature. Underlying physiology can affect the baseline values or periodicity of the PPG waveform, or even change the waveform shape entirely [115,116]. Thus, it is important to explore and identify these sources of error so that they do not propagate to cardiovascular parameter values.

Respiration

While the most commonly examined component of the PPG signal is the AC component relevant to pulsatile blood volume, there are various factors which can modulate the baseline of the PPG signal; one such factor is respiration. This is one of the most significant sources of error in heart rate measurements using PPG, even though respiratory rate is the most sensitive vital sign, often used as an indicator of clinical deterioration [117]. PPG optical signal modulation by respiration is most commonly manifested as baseline and amplitude modulation [117]. Physiologically, respiration and cardiac output are inherently linked, as an increase in respiration rate can directly affect the variation of heart rate through reduced inhibitory control by the nervous system [118]. Modulation of the baseline due to respiration presents itself as a superposition of the PPG signal corresponding to the cardiac cycle and a lower-frequency sinusoidal waveform [117]. It is reported that the lower-frequency wave manifested in the total PPG signal can be attributed to the venous vascular system [119]. As respiration occurs, the venous system is more compliant to smaller changes in pressure than the less distensible arterial system [118]. Shifts in blood volume in the venous system due to changes in respiratory behavior will cause a corresponding change to the baseline amplitude of the PPG signal as blood volume increases and decreases [120]. Changes in thoracic volume and pressure cause an alternating pressure gradient in the venous vascular system [118].
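To make the baseline and amplitude modulation described above concrete, the toy Python sketch below superimposes a slow respiratory oscillation on a stand-in pulsatile waveform. All waveform shapes, rates, and modulation depths are invented for demonstration and are not drawn from any of the cited datasets.

import numpy as np

fs = 100.0                                   # sampling rate, Hz (assumed)
t = np.arange(0, 30, 1 / fs)                 # 30 s synthetic record
hr_hz, rr_hz = 1.2, 0.25                     # ~72 bpm pulse, ~15 breaths/min

cardiac = np.sin(2 * np.pi * hr_hz * t)      # stand-in for the pulsatile (AC) wave
resp = np.sin(2 * np.pi * rr_hz * t)         # respiratory oscillation

baseline_mod = 0.5 * resp                    # respiration shifting the DC baseline
amplitude_mod = (1 + 0.2 * resp) * cardiac   # respiration modulating beat amplitude
ppg_like = 2.0 + baseline_mod + amplitude_mod  # DC offset plus both respiratory effects

Plotting ppg_like against t shows the cardiac oscillation riding on a slow respiratory baseline, with beat amplitudes that swell and shrink at the breathing rate, which is the qualitative picture the studies below quantify.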
It is also suggested that the coupling of the respiratory system and the autonomic nervous system contributes to mechanical effects on the vascular system which are detected in the PPG waveform as a direct result of respiration. Removal of this signal contribution is often done via filtering and post-processing. There are a wide variety of studies looking to extract respiration rate from the obtained PPG signal. However, it is first critical to investigate the quantifiable changes and errors to the PPG signal caused by respiration rate [118]. Dehkordi et al. and Addison et al. identified the three significant variations of the PPG signal with respiratory rate after analyzing PPG from 139 healthy adults (18-67 years old) and the Capnobase dataset [115,121]. These variations are respiratory-induced intensity variation (RIIV), respiratory-induced amplitude variation (RIAV), and respiratory-induced frequency variation (RIFV), as shown in Figure 5 [115,121]. RIIV modulates the DC baseline of the PPG curve, superimposing the wave on top of a low-frequency sinusoid. RIAV causes significant changes in each peak amplitude. RIFV induces a phase shift between each cycle by elongating or compressing each wave. These changes to the PPG signal can occur in any combination. Li et al. conducted an investigation of the respiratory-induced variations of the PPG signal with a total of 28 subjects, 14 male and 14 female, with ages ranging from 18 to 45 years [122]. The experiment was conducted under various controlled breathing conditions, comparing the period and amplitude of the systolic, diastolic, and overall wave cycle. Across the different positions, the amplitude correlation coefficient shows a distinctly larger difference between diastole and reference than between systole and reference. However, amongst the various conditions, the three factors that present the strongest correlation with respiration rate influence are pulse period frequency variation, diastole amplitude variation, and peak intensity variation [122]. From the experimental results, it is concluded that respiration signal amplitude has the strongest influence on respiratory-induced variation of the PPG signal.
Moreover, as the respiratory rate increases, the respiratory signal decreases, indicating that higher rates of respiration lead to smaller respiratory-induced fluctuations in the PPG [122]. Through understanding the main influence of respiratory rate on the overall PPG signal, techniques can be developed to target these changes and separate the two signals. As a method of isolating the PPG signal from noise caused by respiration, several different mathematical methods of signal analysis and conditioning have been proposed. The basic method of frequency filtering has been used to eliminate the frequency components contributed by respiration [118]. Typically, high-pass filters with cut-on values ranging from 0.25 to 0.5 Hz are used to eliminate low-frequency noise, which is most often representative of respiration rate. Various other studies have been conducted to extract and separate respiration rate from PPG signals using different algorithms or complex neural networks [117,123,124]. Charlton et al. confirmed that the extraction of respiration rate from PPG and ECG measurements is possible by testing 314 different extraction algorithms operating in both the time domain and the frequency domain [117]. However, the results indicate that respiration rate extraction is more precise when performed on ECG data as opposed to PPG. It is suggested that this is due to the physiological mechanisms which generate these two signals and their unique interaction with respiratory rate. The mechano-physiological factors that lead to the behavior measured by PPG appear to be more sensitively influenced by respiratory modulation. Interestingly, algorithms operating in the time domain provided more accurate results than frequency domain extraction [117].
It is indicated that time domain algorithms do not require a quasi-static respiratory rate, unlike frequency domain algorithms, which may contribute to their superior performance [117]. As mentioned previously, respiration is often filtered out and classified as noise. However, in cases where this is desirable information, one looks towards the AC component of the venous network.

Venous Pulsations

As discussed at length, PPG sensitivity to blood volume changes yields an "AC" signal component that relates to the mechanical properties of a corresponding vessel and even the larger cardiovascular system. The venous system similarly contributes periodicity to the PPG signal, but this is often considered noise. This noise originates from the vascular network of small vessels transporting deoxygenated blood from the capillaries to the heart. However, it is a recognizable waveform (Figure 6: venous pulse within a PPG, in which the venous waveform is superimposed on the PPG waveform; adapted from Shelley et al. [116]), and has been studied previously [116]. Previous works have demonstrated that the venous system exhibits mechanical changes in accordance with cardiac, respiratory, and autonomic physiological functions [119]. Muscular contraction and relaxation are the major functions contributing to the movement of blood from the veins back to the heart, with venous valves preventing backflow of blood. The difference in compliance between the arterial and venous systems, as well as the lower pressure gradient, translates to the relative amplitude of the AC venous component of the PPG being smaller than the AC arterial component [125]. While this relationship is largely maintained over the body, PPG measurements at different body sites can result in stronger relative contributions from the venous system, such as the forearm versus the finger. The finger is arterially dominated, while the forearm has a larger venous component [120]. Thus, the venous system can potentially add noise to a PPG due to vein pulsations and the variation of the amplitude of those pulsations across the body. Shelley et al. demonstrated that diastolic variability in the PPG signal is related to peripheral venous pulsation [116]. The investigation utilized a PPG sensor on the index finger alongside a radial catheter on the same hand. Measurements were taken on three patients under general anesthesia: a 72-year-old woman with osteoarthritis and nadolol-treated hypertension, a 40-year-old woman with a ruptured ectopic pregnancy, and a 57-year-old woman undergoing a suboccipital craniotomy with nifedipine-treated hypertension. Data were presented observing the variation in arterial waveform, central venous pressure, peripheral venous pulsation, and PPG signals.
Peripheral venous pressure was monitored through an intravenous catheter. The PPG signal was monitored both pre- and post-catheterization to verify that the catheter did not significantly affect the overall recorded signal. Continuous monitoring of both venous and arterial contributions to the PPG waveform indicated a strong correlation between the diastolic peak in the plethysmograph and peaks in the venous pulse at the peripheral location of the hand [116]. In two specific cases, a qualitative test was performed by applying light pressure to a site proximal to the measurement location on the upper arm and observing changes in three different stages of the venous pulse and PPG. Pressure in the venous system is observed to be significantly lower than in the arterial system: below 20 mmHg has been reported [126]. Vascular compliance between the venous and arterial systems can differ significantly, up to 10-fold [126]. Light pressure applied upstream is used to occlude the low-pressure pulsation in the peripheral venous system. As the pressure is applied and released, the amplitude of the venous pulsation can be observed as being superimposed on the diastolic phase of the PPG curve with a slight time delay when there is no applied pressure, and no superposition when low pressure is applied proximal to the hand. It appears that changes to diastolic amplitude due to the presence of the venous waveform can increase the diastolic phase amplitude by up to 40% of the total alternating amplitude of the PPG signal. In the case reports studied by Shelley et al., the significant influence of the venous pulsation on the PPG waveform is observed at the peripheral location [116]. Although the specific changes to the waveform itself are not quantified, the observable effect of the venous system can alter conclusions made solely on the PPG signal without accounting for venous pulsation. The presence of the venous-pulsation-associated peak in the diastolic phase of the PPG waveform can introduce errors in the identification and quantification of the dicrotic notch and diastolic features. Noninvasive measurement of venous pressure can help with diagnosing clinically relevant conditions such as congestive heart failure or valvular heart disease; however, separation of these signals is desirable [116]. Venous blood can also bias pulse oximetry readings [102]. Nijland et al. also confirmed these results in fetal lambs, where reflectance pulse oximetry SpO2 readings were significantly lower than fiberoptic SaO2 values. However, when the vessel was coagulated, this difference became negligible [128]. By having deoxygenated blood in the veins, a standard pulse oximeter will interpret there to be a higher ratio of deoxygenated blood to oxygenated blood, which yields a lower SpO2. All authors were able to eliminate the problem by applying pressure or otherwise occluding veins, a common practice in commercial oximeters. Within a PPG waveform, venous pulsations can also be detrimental. They have been shown to artificially inflate the systolic peak amplitude and interfere with the rising systolic edge of the typical PPG waveform, an important parameter used in cardiovascular health determination [116]. The frequency components contributing to the perturbations in the signal are similar to the cardiac frequency. Venous pressure can also add low-frequency noise to the PPG waveform, the same effect seen with respiration, which is caused by venous anatomy [129][130][131]. Often between 0 and 0.5 Hz, this noise contributes to baseline oscillations.
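Since both the venous and respiratory contributions described above concentrate below roughly 0.5 Hz, a common first step is simply to band-limit the record around the cardiac band. The sketch below uses a zero-phase Butterworth band-pass; the cut-offs and filter order are illustrative choices, not values prescribed by the cited works, and more sophisticated adaptive methods are discussed next.

import numpy as np
from scipy.signal import butter, filtfilt

def remove_baseline(ppg: np.ndarray, fs: float,
                    low_hz: float = 0.5, high_hz: float = 4.0) -> np.ndarray:
    """Return the PPG with components below low_hz (and above high_hz) suppressed."""
    b, a = butter(4, [low_hz, high_hz], btype="bandpass", fs=fs)
    return filtfilt(b, a, ppg)   # zero-phase filtering preserves waveform timing

Applied to a record like the synthetic example sketched earlier, this would retain the ~1.2 Hz pulsatile component while stripping the ~0.25 Hz respiratory and venous baseline, at the cost of also discarding any clinically useful information carried in those low-frequency components.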
Eliminating venous contributions to PPG, as mentioned previously, is often done by applying contact pressure to the measurement site. However, it is possible that this will eliminate the arterial pulsation in hypotensive patients as well as increase the risk of pressure-induced injury at the measurement site [125]. This, along with the other effects of applied pressure, is explored in Section 4.3. Grabovskis et al. noted that the amount of pressure applied is often underreported and can manipulate the resultant PPG waveform [132]. This then affects clinical parameters similar to those reported by Takazawa [29]. They determine that the appropriate amount of pressure is such that the b/a ratio of the SDPPG is equal to 0.70, meaning that the external pressure on the arterial wall is equal to the mean arterial pressure [132]. However, this is detrimental, since manipulating the waveform to a specific shape limits the useful data that can be extracted. Beyond contact pressure, the respiratory component has been eliminated via adaptive and non-adaptive filters passing signal from approximately 0.5 to 4.0 Hz [133]. Additionally, previous works discuss variable-frequency complex demodulation, continuous wavelet transformation, and autoregressive modeling to decouple this signal from the arterial component of the PPG [22,120,127,132].

Body Site of Measurement

PPG systems have been developed for the fingers, wrist, brachia (upper arm), earlobe, ear cartilage, superior auricular region, esophageal region, and the forehead [68]. For fitness applications, the PPG system developed for the brachia is often used to monitor heart rate, but devices that can be placed on the wrist have increased in popularity due to their commercial availability, ease of use, cost, and portability. In clinical settings, the fingertip or ear lobe are more common locations due to the high vascularization found in these areas. While it is advantageous to have different devices for different locations, it is problematic when algorithms, processing techniques, and indices to assess cardiovascular risk are applied without considering the various anatomies at each of these locations. Anatomical variations in parameters such as skin thickness and basal blood perfusion will lead to changes in the AC and DC amplitude, and in the duration of a PPG waveform. While a thick epidermis will attenuate light more than a thin epidermis will, the vascularization and perfusion of a given anatomy is particularly important in governing the magnitude of the AC component of the PPG. Thus, this section discusses the impact of body site of measurement on the PPG waveform. Changing skin thickness across the body leads to changes in the amount of light attenuated before it reaches the microvasculature, such as the arterioles or an artery. This influences the signal amplitude and SNR of the resultant PPG waveform. The thickness of individual skin layers has been well characterized in a variety of ways; however, skin thickness as a function of anatomical location has been less well studied [134][135][136]. In general, skin found on tactile anatomies such as the finger will be thinner than skin found on anatomies such as the palm of the hand or the sole of the foot. In vivo high-frequency ultrasound is the current standard for measuring skin thickness, as older practices such as stained samples from biopsies or post-mortem measurements often yield values with poor inter-sample agreement [137,138].
Overall, the literature has shown that the fingertips have the thinnest skin, followed by the forearm, dorsal hand, cheek, and forehead [139]. The skin thickness for 38 anatomical locations has been ranked based on 5 separate works (Figure 7: relative total skin thicknesses across body sites for a sample of studies, from thinnest to thickest; each anatomy is marked (1), (2), or (3) to indicate the number of the 5 considered works in which that body site was measured [80,81,140,142,143]). However, there exists tremendous variation even within a given body site caused by factors such as age, gender, BMI, sun damage, and experimental methodology, leading to coefficients of variation of up to 40% [139][140][141]. Skin thickness is not the only body site-dependent factor causing variations in PPG results. In order to quantify the effect of body site on PPG, one must also analyze changes in blood supply and basal perfusion. The literature has analyzed their cumulative effect on PPG. A common difference across anatomies is basal perfusion, which directly affects PPG signal amplitude. Tur et al. utilized laser Doppler velocimetry (LDV) and PPG to assess variations in blood perfusion across the body for 10 healthy men between 20 and 30 years old. Both tools in combination provided assessment of both blood velocity and blood volume. It was found that the hand and the face had significantly higher (p < 0.01) perfusion values than the trunk and the upper and lower limbs. Additionally, the side of the trunk was found to have very low perfusion. Finally, sites within the face and hand did not yield widespread statistical difference between themselves. The back of the ear and earlobe yielded significantly higher PPG signal amplitude than the hand and postauricular region (i.e., the neck behind the ear), but LDV values and comparisons to the fingertips and the rest of the face were not significant.
These data support the hypothesis that the ear and the fingertips will yield the largest PPG signal amplitude [144]. This study did not look at the wrist, but as mentioned previously, authors have found the forearm to have lower perfusion levels than the face and hand. While these results provide information about the superficial vasculature 1-2 mm deep in the skin, they do not provide information about accompanying variations in skin thickness across sites. Overall, it has been found that locations on the head and finger provide the greatest PPG amplitude [95,145]. Locations around the ear are less uniformly reported in the literature; however, this is likely caused by inconsistencies across works, as some state "ear" and others provide more detail. Fallet et al. found via iPPG that, for all wavelengths of light, the forehead is superior to the cheeks and the whole face for determining heart rate. The forehead yielded the greatest power at a frequency matching a reference ECG, likely due to the increased perfusion of the area [146]. Beyond PPG amplitude, the aforementioned anatomical variations lead to changes in the PPG waveform. First, it is important to note bilateral symmetry. Allen and Murray have noted that PPG waveforms are largely similar between bilateral anatomies at the ear, finger, and toe at rest, according to their study featuring 116 (68 male, 48 female) healthy Caucasian subjects with a median age of 43 [90]. Additionally, PTT is also similar bilaterally at the ears and fingers, while a small difference exists at the toes [91]. This is important, as studies often do not note the bilateral location of the sensor (if relevant) or place the sensor on the dominant hand [147,148]. These studies demonstrate that a given left and right extremity will give the same signal, save for differences such as motion artifacts. Rajala et al. analyzed the waveform and pulse arrival time of PPGs taken at the wrist and the finger of 30 subjects (19 males, 11 females, average age of 42) [26]. It was found that the wrist had a significantly (p < 0.01) greater full-width half-maximum than the finger. The authors ascribed this difference to the flat shape of the wrist PPG, whereas the finger PPG more closely resembles a traditional PPG with a noticeable dicrotic notch. Additionally, the authors noted a consistent increase in signal amplitude in the wrist PPG compared to the finger PPG. They hypothesized that this effect could be due to an increase in wrist temperature caused by the fabric component of the PPG sensor. Hartmann et al. performed a similar study in 36 healthy subjects (12 male, 24 female, mean age of 33), looking at the peak point position, dicrotic notch time, and reflection index at the forehead, earlobe, arm, wrist (upper, under), and finger [95]. The finger yielded a significantly earlier peak point position, meaning a shorter systolic rising edge (p < 0.001 for all sites except under the wrist, which had p = 0.04). Dicrotic notch duration at the finger was not significantly different from measurements under the wrist and at the forehead, but was different from the earlobe, arm, and wrist. Finally, the finger was shown to have a significantly lower (p < 0.001) reflection index. These differences across anatomies are crucial to note when using PPGs to derive clinical diagnoses/prognoses. For example, Alty et al. demonstrated the use of crest time to classify CVD, where the crest time is directly related to the systolic rising edge, which is significantly shorter at the fingertip [149]. Takazawa et al.
named numerous ratiometric parameters derived from the APG, the second derivative of the PPG. Many of these parameters involve components of the reflection index, which the above study showed can differ across anatomies [29]. Thus, not only should researchers be consistent about using the same anatomy across studies, but it is also possible that different anatomies are better at providing diagnostic/prognostic-friendly waveforms for various applications. Nilsson et al. demonstrated that this is the case for respiratory rate. It was found that the forearm had a high respiratory-rate spectral power content compared to anatomies such as the finger, whose signal was primarily derived from heart rate [130]. Overall, the literature has demonstrated that different anatomies produce PPGs with varying waveforms. Additionally, the thickness of skin and blood perfusion will play a role in the strength of the signal that can be obtained. Due to the presence of a distinct dicrotic notch and low skin thickness, the finger is likely to be a successful anatomy that can be used with consistency. It remains to be seen whether diagnostic predictor parameters such as those presented by Takazawa retain their statistical and predictive power across anatomies and local variables such as body temperature [29].

Local Body Temperature

Thermoregulation is an important component of homeostatic function. Large shifts in temperature typically occur in response to external stimuli, such as exercise and contact. Subtle temperature changes can follow regular patterns associated with normal physiological functions. For example, Allen et al. studied three adjacent fingers of the hand of 15 healthy males, measuring PPG, laser Doppler flow (LDF), and skin temperature changes following a deep inspiratory gasp of air. They reported a 2.6-times increase in PPG amplitude, a 93% decrease in LDF flux, and a median temperature decrease of 0.089 degrees Celsius [150]. The body's thermoregulatory response to stimuli includes vasoconstriction/vasodilation, which could lead to a delayed response of temperature in the skin. Thus, while the pulsating components of PPG are related to arterial blood volume, the non-pulsating component is a function of the average blood volume, respiration, the sympathetic nervous system, and thermoregulation [151]. As such, typically filtered-out PPG components can provide information on thermoregulatory blood flow, one example being flow through arterio-venous anastomosis shunt vessels [151]. Furthermore, while studying the effectiveness of PPG for identifying limb ischemia among men and women with an average age of 70 years, Carter and Tate reported that PPG amplitude is significantly correlated (r = 0.550; p < 0.001) with skin temperature of the toe, as body cooling leads to reductions in PPG wave amplitude [152]. Lindberg et al. found that PPG amplitude showed a direct response to skin temperature elevation, especially in the finger skin of 10 Caucasian young adults (aged 22-30), using three different arrangements of PPG probes with different source/detector separations [153]. These correlations of PPG with body temperature should be noted among varying demographic groups. For example, Iacopi et al. found the skin of the foot of obese patients to be about 7 degrees Celsius warmer than the foot of non-obese patients, and thus caution should be exercised if PPG is measured at the foot [60]. Khan et al.
investigated the effects of finger temperature on the PPG signal in 20 healthy adult volunteers (24.5 ± 4.1 years of age), reporting a reduction in PPG root mean square values as ambient temperatures went from warm to cold [154]. The authors conclude that PPG quality is improved at warm temperatures, yet do not mention the effect of thermoregulatory responses [155]. Hahn et al. studied the effect of cold exposure on the arterial PPG of 10 healthy volunteers and 10 individuals with systemic sclerosis. They report significant reductions in PPG pulse wave amplitudes (p < 0.0001) between the groups both at rest and after cooling the finger to 16 degrees Celsius [156]. Askaraian et al. reported that while measuring PPG with a submerged finger in 20 healthy volunteers aged 18 to 80 years old, a drop of 40 degrees Celsius contributed to a 12 dB drop in PPG signal amplitude [157]. Temperature drops also seem to result in decreased accuracy of heart rate estimation. Jeong et al. found that local skin surface temperature changes affect PPG components in 16 healthy subjects aged 23-30 years (26 ± 2.1) with BMI ranging between 20 and 26 (23 ± 4.8), hence recommending that temperature be monitored in order to reliably evaluate cardiovascular parameters [158]. Zhang et al. demonstrated how PPG reproducibility is affected by cold exposure and acclimatization after middle finger submersion in 9 ± 2 °C water for 2 min, using the index finger as a reference [159]. Significant changes in the DC and AC amplitudes of the PPG pulse indicate that mild cold exposure has a substantial effect on finger blood circulation. The authors suggested that mild cold exposure may have a delayed effect on PTT due to cold-induced vasodilatation and could be a potential source of error [159]. Thus, while physiological temperature changes due to thermoregulation may impact a signal, studies detailing changes caused by external temperatures also illustrate significant PPG impact. Aside from temperature changes due to the subject's physiological state, abrupt changes in ambient temperature can affect the PPG system components, and thus the PPG signal. There are also PPG instrumentation aspects that have strong temperature dependence. For example, many photodetectors are subject to creating artifacts from sensor-tissue motion and sensor deformation [160]. For silicon photodiodes, the absorption coefficient increases with temperature, and thus the detectors will absorb less light at higher temperatures, inducing an apparent shift in PPG amplitude [161]. Higher temperatures also generate higher thermal noise in the detector. Drift current varies directly with temperature when photodiodes are used in photovoltaic mode [161]. It has been shown that the temperature of the body site where PPG data are being collected can influence the resultant waveform amplitude, and even the time between waveforms. This is summarized in Table 5. However, many of these studies suggested that external temperature has a much more significant effect on the PPG waveform. Thus, beyond variation found within individuals and changing physiologies, external factors can also significantly impact RCIM PPG.

External Factors

The previous two sections discussed sources of error pertaining to the subjects of cardiovascular monitoring. This next section explains sources of error that originate from the environment and from factors outside of the patient's characteristics.
These are motion artifacts, ambient light, and applied pressure to the measurement site. While the previous sections are relevant to discerning and evaluating the PPG waveform, this section more heavily discusses obtaining an accurate signal.

Motion Artifacts

PPG sensors are used in settings where a person is sedentary or in motion. In all settings, motion artifacts are picked up by the sensor and cause fluctuations within the collected signal. During sedentary moments, respiration rate, thermoregulation, and sympathetic nervous system activity cause the DC baseline to wander [162]. Adjusting position or tapping a finger can be considered a micro-motion, while a macro-motion is performing an exercise such as walking or jogging; both types of motion cause more significant fluctuations in the signal than sedentary motions. Macro-motions come in different grades of intensity, and each affects accurate signal acquisition. All these motions fall between 0.1 and 20 Hz, a range that overlaps the frequency band of heart rate (1-4 Hz) [163]. When motion artifacts are present in a PPG signal, results for SpO2, heart rate, and other PPG-dependent measurements can be skewed, which can create false alarms or inaccurate readings. Another issue arises when motion is cyclical or periodic: heart rate tracking devices will pick up on the cyclical motion and mistake it for the cardiac cycle, causing false readings as well [46]. With the increase in commercial heart rate monitoring devices, there is an increased need for motion artifact identification and elimination. Motion artifacts can affect the acquired signal differently based on the location of the sensor along with the wavelength of light used. Maeda et al. measured both IR and green wavelengths along the upper arm, forearm, wrist, and finger [162]. They found that the upper arm had the lowest artifact ratio, defined as the ratio of the magnitude of the PPG signal after a motion to its magnitude before the motion. The green sensor also had a lower artifact ratio at all locations compared to the IR sensor. The highest correlation coefficient between the PPG signal and a chest ECG was that of the green sensor on the upper arm. Lee et al. saw similar results regarding the wavelength used [164]. They looked at the correlation coefficient and ∆SNR for blue (470 nm), green (530 nm), and red (625 nm). With its longer wavelength, red had a significantly larger ∆SNR than green or blue. Green was determined to be the best option due to its higher correlation coefficient, but blue is an alternative because its ∆SNR was statistically similar to that of green. With their longer wavelengths, red and IR have deeper penetration depths than blue and green, making them prone to more motion artifacts, such as the factors that directly affect the DC baseline [162,164]. Aside from location and wavelength, the intensity of motion also plays a role in the accuracy of heart rate monitoring. To understand how motion artifacts affect this accuracy, researchers have tested off-the-shelf health and fitness trackers against a chest ECG during different intervals of motion. All the discussed trackers are designed to be worn on the wrist. Jo et al. compared the accuracy of the Basis Peak (BP) and Fitbit Charge HR (FB) during high-intensity events, where the mean ECG HR was >116 bpm, and low-intensity events, where the mean ECG HR was <117 bpm [165].
Overall, they discovered that the BP performed better, in relationship to the ECG, (r = 0.92, p < 0.0001) than the FB (r = 0.83, p < 0.0001). During the low-intensity events, the BP provided accurate heart rate readings (r = 0.84, p < 0.05), while the FB had a moderate correlation (r = 0.73, p < 0.05). High-intensity events caused the BP to have a weaker correlation (r = 0.77, p < 0.05), whereas the FB had the weakest correlation (r = 0.58, p < 0.05). These results align with an assessment done by Stahl et al., where they measured the mean absolute percentage error values (MAPE) of BP and the FB to be 3.6% and 6.2%, respectively [166]. Stahl et al. also looked at the Scosche Rhythm, Mio Alpha (MA), Microsoft Band, and TomTom Runner Cardio (TT). When going from a resting phase to a low-intensity walking phase, they found that the correlation coefficient went down but had slowly rebounded when moved into a more intense walking phase. They also reported that during the high intensity running stage, they saw some of the lowest MAPE from the MA (0.82%), the TT (0.97%), and the BP (3.28%). This conclusion showed that during intermediate and low intensity motions, the accuracy is dependent on the device. In another study, performed by Dooley et al., the comparison was made between the Apple Watch (AP), FB, and Garmin Forerunner 225 (GF), where the AP was considered the best in terms of MAPE (1.14-6.70%), while the FB and the GF had a MAPE of 2.38-16.99% and 7.87-24.38%, respectively [167]. The AP (p = 0.84) and GF (p = 0.35) had no significant difference in HR readings from the ECG during vigorous intensity. All had a significant difference during low-intensity events. This shows that, depending on the exercise, each tracker will have a different accuracy, which could be attributed to the different noise-canceling algorithms used on board. Bent et al. confirmed that the wearable device and activity condition are significantly correlated in their study [46]. There was a significant difference between research-grade and consumer-grade wearables. The mean absolute error (MAE) for all motion phases was better for the consumer wearables than the research wearables. This study also showed that the Apple Watch 4 had the highest accuracy with an MAE of 2.7 BPM at rest and 4.6 BPM during the walking phase, confirming the results from Dooley et al. All the studies have shown that motion artifacts cause varying accuracies depending on motion intensity and the type of heart rate monitor used. Being able to detect the motion artifact regardless of motion intensity is crucial for more accurate heart rate measurements. There are several ways of detecting noise or motion artifacts that do not rely on secondary sensors, which include using filters with cross-correlation, analyzing the morphology of the signal, and higher-order statistics in both the frequency and time domain. Karlen et al. created an algorithm with 96.21% sensitivity and 99.2% positive predictive value for good pulses using repeated Gaussian filters and cross-correlation [168]. The signal is analyzed in the time and frequency domains and then cross-correlated to determine whether the pulse has a high or low signal quality index (SQI). The correlation utilizes the rising slopes of at least three previous pulses along with applying repeated Gaussian filters to predict where the next pulse should be. If the pulse does not fall within the range of prediction, the pulse is deemed as bad (low SQI) and could be eliminated from HR calculations. 
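The core of such template-based quality checks is easy to illustrate. The sketch below scores a candidate pulse against a template averaged from preceding pulses, loosely following the cross-correlation idea just described; it is not a reimplementation of Karlen et al.'s published algorithm, and the template size, resampling length, and 0.8 acceptance threshold are illustrative assumptions.

```python
# Sketch: score a candidate pulse against a template built from preceding
# pulses. Not Karlen et al.'s algorithm; parameters are illustrative.
import numpy as np

def resample(pulse, n=50):
    """Linearly resample one pulse to a fixed length so pulses are comparable."""
    x_old = np.linspace(0, 1, len(pulse))
    x_new = np.linspace(0, 1, n)
    return np.interp(x_new, x_old, pulse)

def pulse_sqi(candidate, previous_pulses, threshold=0.8):
    """Return (correlation, is_good) for a candidate pulse."""
    template = np.mean([resample(p) for p in previous_pulses], axis=0)
    cand = resample(candidate)
    corr = np.corrcoef(cand, template)[0, 1]   # normalized correlation at zero lag
    return corr, corr >= threshold

# Toy usage: three clean pulses form the template; a corrupted pulse is scored.
base = np.sin(np.linspace(0, np.pi, 60)) ** 2
previous = [base + 0.02 * np.random.randn(60) for _ in range(3)]
corrupted = base + 0.6 * np.random.randn(60)
print(pulse_sqi(base, previous))       # high correlation -> likely kept
print(pulse_sqi(corrupted, previous))  # lower correlation -> likely rejected
```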
Using this cross-correlation algorithm, Karlen et al. were able to distinguish where motion artifacts occurred on a beat-to-beat basis. Another way of detecting motion artifacts is to dissect the signal using its morphology. Sukor et al. focused on locating the pulse amplitude, the differences in trough depth between successive pulses, and the pulse width [169]. They found their algorithm to have an accuracy of 83 ± 11%, a sensitivity of 89 ± 10%, and a specificity of 77 ± 19%. When comparing their algorithm to an ECG, they were able to lower the error in heart rate readings to 0.49 ± 0.66 bpm, whereas without the algorithm the error was 7.23 ± 5.78 bpm. Chong et al. also worked on identifying four time-domain characteristics while using a support vector machine to build a decision threshold that classifies clean and corrupt PPG signals [170]. They achieved an accuracy of 93.7% during a daily-motion experiment and saw a significant reduction in error for SpO2 and heart rate measurements. Karlen et al., on the other hand, identified time-domain characteristics by using pulse segmentation to determine the up-slopes of the PPG signal [171]. Their application was created for mobile PPG measurements instead of wearable ones. They achieved a positive predictive value of 96.68% and a sensitivity of 98.93%. Like the others, Couceiro et al. used time-domain analysis, but they also implemented period-domain analysis to identify 26 features across both domains [172]. They achieved a specificity of 92.7% and a sensitivity of 82.7%. The last detection method is the use of higher-order statistics, as done by Krishnan et al. and Selvaraj et al. [173,174]. Kurtosis and Shannon entropy were the measures used by Selvaraj et al. to determine motion artifacts, and they achieved an accuracy of 88.8% in a laboratory setting [174]. Krishnan et al. used kurtosis and skew in the time domain while also analyzing frequency-domain kurtosis and quadratic phase coupling [173]. They found that skew and kurtosis in the time domain were higher when there was corruption, while the frequency-domain kurtosis was smaller when there was corruption. Quadratic phase coupling was present in corrupted signals. All these parameters could then be used to eliminate sections of the PPG signal that were corrupted by motion artifacts. The discussed detection methods are all real-time ways to detect and eliminate sections of acquired PPG signals that contain areas corrupted by motion artifacts. This helps keep heart rate and SpO2 measurement error lower. As implemented in consumer-grade HR monitors, secondary sensors can also be integrated into PPG sensors to detect motion. These include accelerometers [175][176][177][178][179], gyroscopes [179,180], piezoelectric transducers [181], or the utilization of another wavelength of light as a motion reference [182,183]. Each of these sensors is used to detect and then reduce or eliminate noise so that a clean signal can be reconstructed. Accelerometers and gyroscopes are the standard, but when trying to measure micro-motions such as finger tapping, accelerometers and gyroscopes do not always pick up on the motion because the wrist remains still. Both the piezoelectric transducer and another wavelength of light can pick up these finer motions [181,182].
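A common way to use such a motion reference is adaptive noise cancellation, in which the part of the PPG that is predictable from the reference channel is estimated and subtracted. The following is a minimal least-mean-squares (LMS) sketch of that idea in Python; the filter length, step size, and synthetic signals are illustrative assumptions rather than any of the cited implementations.

```python
# Sketch: LMS adaptive cancellation of motion noise using an accelerometer
# channel as the reference. Parameters and signals are illustrative only.
import numpy as np

def lms_cancel(ppg, accel, n_taps=16, mu=0.01):
    """Subtract the part of `ppg` that is predictable from `accel`."""
    w = np.zeros(n_taps)
    cleaned = np.zeros_like(ppg)
    for i in range(n_taps, len(ppg)):
        x = accel[i - n_taps:i][::-1]      # most recent reference samples
        noise_est = w @ x                  # estimated motion component
        e = ppg[i] - noise_est             # error = cleaned PPG sample
        w += 2 * mu * e * x                # LMS weight update
        cleaned[i] = e
    return cleaned

# Toy usage: a 1.2 Hz "cardiac" component plus motion that also appears on the accelerometer.
fs = 100
t = np.arange(0, 30, 1 / fs)
cardiac = 0.5 * np.sin(2 * np.pi * 1.2 * t)
motion = 0.8 * np.sin(2 * np.pi * 2.5 * t + 0.4)
ppg = cardiac + motion
accel = np.sin(2 * np.pi * 2.5 * t)        # reference correlated with the motion
cleaned = lms_cancel(ppg, accel)
print("RMS deviation from cardiac, before:", np.sqrt(np.mean((ppg - cardiac) ** 2)).round(3))
print("RMS deviation from cardiac, after: ", np.sqrt(np.mean((cleaned - cardiac) ** 2)).round(3))
```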
The algorithms used to detect motion with a secondary sensor also use adaptive filters, which have been used independently of secondary sensors as noise reduction techniques but are better suited to working in conjunction with them. Table 6 lists adaptive filters used to reduce motion artifacts.

Table 6. Adaptive filters utilized to eliminate or reduce motion artifacts, independently and in conjunction with secondary sensors. This is not an all-inclusive list. (Columns: Filter, Relevant Work, Reference.)

Ambient Light

PPG signals are low-amplitude signals with a normal pulse frequency within the range of 0.5 to 4 Hz [195,196]. The interference of ambient light with PPG signals can lead to inaccurate heart rate estimation. The ambient light intensity can be at zero frequency (DC), such as sunlight, or at a defined frequency, as from room lights (e.g., 60 Hz in the US), and is generally multiple orders of magnitude larger than the pulsatile (AC) component of the PPG, which can lead to signal saturation. Thus, ambient light rejection is important to preserve the efficacy of PPG sensors [197,198]. In 1991, it was found that commercial pulse oximeters had measurement error caused by ambient light [199]. Since then, development for RCIM has trended towards PPG devices that operate with applied pressure on the body and/or at body sites that help block other light, ensuring that the detectors receive maximum transmitted/reflected light from the device alone. An approach to reducing the effect of ambient light in PPG measurements can be seen in the work of Wang et al., who proposed an ear-worn sensor operating in reflective mode with multiple light sources and detectors. The optical components in this sensor include the DLED-660/905, DLED-660/940, and PDI-E835 light sources. It is important to note that the photodetectors used in this sensor, BPW34 and BPW34FS from Siemens (Munich, Germany), come with daylight filters. In addition, the components of the sensor were optically shielded by embedding them into the base of the sensor and separating them with an opaque medium [200]. This method was found to be effective in shielding against noise due to ambient light, as the DC level of the PPG signal was considerably low (measured to be less than 2 nA) when the LEDs were switched off. Patterson et al. proposed a flexible PPG sensor with a design that can minimize the effects of ambient light and other electromagnetic interference. This system, with an API PDI-E832 light source, which is a dual LED emitting at 660 and 905 nm, and an API PDV-C173SM photodetector, measures the PPG signal from the auricular region. The opto-electronic modulation scheme in this flexible PPG sensor helps to eliminate ambient light noise through time multiplexing: the light level is sampled while the LEDs are kept off for a fixed period and is later subtracted from the desired signals. Apart from its electronic system design, the arrangement of the source and detector in this sensor also contributes to eliminating the interference of ambient light with the signal. This includes encasing the active area of the photodetector in a red plastic to filter out ambient light and adding 2 mm wide foam between the source and detector to prevent direct coupling of light from the LED to the detector [201].
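The LED-off sampling and subtraction used in such designs is simple to express in code. The sketch below pairs each LED-on reading with an LED-off (ambient-only) reading and subtracts the two; the sample rate, signal levels, and the perfectly paired sampling are assumptions made for demonstration, not the cited hardware's actual timing.

```python
# Sketch: time-multiplexed ambient subtraction. Each LED-on sample is paired
# with an LED-off sample of ambient light, which is then subtracted.
import numpy as np

fs_pairs = 100                        # LED-on/LED-off sample pairs per second
t = np.arange(0, 10, 1 / fs_pairs)
pulse = 1.0 + 0.02 * np.sin(2 * np.pi * 1.2 * t)      # PPG: small AC on a large DC
ambient = 0.5 + 0.3 * np.sin(2 * np.pi * 0.2 * t)     # slowly varying ambient light

led_on = pulse + ambient              # detector reading with LED on
led_off = ambient                     # reading taken immediately after, LED off
corrected = led_on - led_off          # ambient-subtracted PPG estimate

print("Peak-to-peak, uncorrected:", round(np.ptp(led_on), 3))
print("Peak-to-peak, corrected:  ", round(np.ptp(corrected), 3))
print("Peak-to-peak, true pulse: ", round(np.ptp(pulse), 3))
```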
Selective filtering of unwanted signals (outside the 0.5 to 4 Hz frequency range) is another method used to eliminate noise due to ambient light. With a low bandwidth of around 5 Hz, PPG signals can be separated from high-frequency noise using a low-pass filter and from low-frequency noise using a high-pass filter (e.g., to reject 60 Hz room lights and DC sunlight). Various attempts have also been made at the level of the PPG sensing circuitry to effectively reject DC photocurrent. A transimpedance amplifier (TIA) associated with PPG sensors converts and amplifies the weak photocurrent from the PD into a differential voltage. One example is the effective DC photocurrent rejection circuit proposed by Wong et al., an integrated TIA with a bandpass response in a NIR wearable sensor. One loop of the dual-loop TIA in this design acts as a high-pass filter with a cutoff frequency as low as 0.5 Hz, while the other loop adaptively adjusts the DC photocurrent and prevents sensor saturation [202]. The accuracy of PPG measurements can thus be limited by the interference of environmental noise such as ambient light. This can be effectively compensated for by various methods, such as optical shielding of the transducer, selective filtering of noise outside the PPG bandwidth, and correlated double sampling. Modifications to the signal processing at the amplifier can also successfully reject DC photocurrent in PPG sensors.

Applied Pressure to Measurement Site

Discrepancies in PPG signals can arise where variations in applied pressure influence the resulting waveform. This can result in an increase or decrease in amplitude along with a shift in offset. With a low external pressure applied by the sensor, the waveform exhibits a lower SNR with a low AC amplitude, primarily because of a longer optical path length through the tissue and a lower reflectance of photons due to high tissue absorption [203]. With increased applied pressure, the AC amplitude begins to increase due to a decrease in optical path length through the tissue and the approach of the transmural pressure to zero. The transmural pressure is defined as the pressure difference between the mean intra-arterial pressure on the vessel (e.g., artery or arteriole) wall and the contact pressure. When the transmural pressure reaches zero, the AC amplitude reaches its maximum [204,205]. As the external pressure increases past the point where the transmural pressure is zero, the vessel begins to occlude, and the AC amplitude decreases until there is no longer a signal once the vessel is occluded. Different vasculature will occlude at different amounts of pressure, also contributing to perturbations in the signal. With the change in transmural pressure, there is also a related compliance change in the affected vessel; the vessel reaches maximum compliance when the transmural pressure is zero [205]. Several characteristics of the PPG waveform are affected by changes in contact force, including the AC amplitude, the DC offset amplitude, the ratio of AC/DC amplitudes, and the normalized pulse area.
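These waveform characteristics are straightforward to compute once a single beat has been segmented, as in the sketch below. The exact definitions vary between papers; here, AC amplitude is taken as peak-to-trough, DC offset as the segment mean, and the "normalized pulse area" as the area under the beat divided by (AC amplitude × beat duration). These interpretations are assumptions for illustration, not the definitions used in the cited studies.

```python
# Sketch: contact-force-sensitive features of one PPG beat. Definitions are
# common interpretations, assumed for illustration.
import numpy as np

def beat_features(beat, fs):
    beat = np.asarray(beat, dtype=float)
    dc = beat.mean()                       # DC offset
    ac = beat.max() - beat.min()           # AC (peak-to-trough) amplitude
    duration = len(beat) / fs              # beat duration in seconds
    area = np.trapz(beat - beat.min(), dx=1 / fs)
    return {
        "AC amplitude": ac,
        "DC offset": dc,
        "AC/DC ratio": ac / dc,
        "normalized pulse area": area / (ac * duration),
    }

# Toy usage with a synthetic beat sampled at 100 Hz.
fs = 100
t = np.linspace(0, 0.8, int(0.8 * fs), endpoint=False)
beat = 1.0 + 0.3 * np.sin(np.pi * t / 0.8) ** 2
print(beat_features(beat, fs))
```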
As reported by Teng and Zhang, with an applied force at the finger of 0.2-1.8 N, there was an increase in AC amplitude and AC/DC ratio until the parameters reached a peak and then decreased, while the pulse area decreased and the DC offset increased [206]. The change in these fundamental features can affect other measurements, such as the b/a ratio, derived from the a and b peaks of the second derivative of a PPG signal and used to characterize arterial stiffness [132]; the relationship between the frequency response of a PPG signal and a blood pressure signal [207]; or the pulse transit time (PTT), which is used as an indicator of fluctuating stiffness or elasticity in arteries and of blood pressure [208]. Grabovskis et al. reported that, over an applied pressure range of 0-15 kPa, there was a variation of over 300 percent in the b/a ratio [209]. When an optimal pressure was reached, the variation coefficient dropped to less than 5 percent. Using the frequency response of the PPG signal taken at the finger, Hsiu et al. worked to understand how the relationship between the first five harmonics of the PPG signal and the corresponding blood pressure signal varied with an applied pressure of 0-200 mmHg [207]. At 60 mmHg, the R² values were greatest for four out of five harmonics, and across the different applied pressures the R² values ranged from 0.13 to 0.77 for the first harmonic. Regarding PTT, Teng and Zhang discovered that, with a contact force of 0.1-0.8 N applied at the finger, the PTT measured from the R peak of an ECG to the peak of the second derivative of the PPG increased significantly (p = 0.014) until the estimated transmural pressure approached −0.1 N, after which the PTT remained roughly constant [208]. For the PTT measured at 50% of the pulse amplitude, there was a significant increase in PTT (p = 0.038) before reaching a constant at −0.1 N transmural pressure. Lastly, for the PTT calculated at 90% of the pulse amplitude, there was a similar increasing trend but without a significant difference (p = 0.107); the PTT leveled out to a constant value at a transmural pressure of around 0.1 N. Within all these studies, infrared light-emitting sources were used to study the effects of contact pressure. Spigulis et al. worked to understand at what pressure the signal amplitude would be at a maximum and when the vasculature would occlude at wavelengths of 405, 532, 645, 807, and 1064 nm. Because longer wavelengths, going from the visible into the NIR, penetrate deeper, larger pressures were required before occlusion occurred, as the deeper arteries with higher mean arterial pressure (MAP) were the ones probed by the longer wavelengths [210]. For shorter wavelengths, the occlusion pressure was lower, as more superficial arterioles with low MAP were in the light path at these wavelengths. Overall, the above studies show that inaccurate PPG signal acquisition due to varying contact pressures can lead to inaccurate secondary measurements. Therefore, integrating a solution that produces a constant contact pressure, or being able to measure the pressure in PPG devices, will produce more consistent and repeatable measurements. Integrating a sensor that can measure the applied force could help to standardize PPG measurements such that the applied pressure does not skew the signal. For example, Nogami et al. designed a PPG sensor with an optical displacement sensor composed of a vertical-cavity surface-emitting laser, a mirror, and a photodiode [211]. Specifically, the mirror is mounted on a compressible frame, and as a force is applied, the intensity of light reflected from the mirror onto the photodiode changes in accordance with the applied pressure.
Another solution is to integrate a flexible thin-film force transducer (FlexiForce A201, Tekscan Inc., Boston, MA, USA) between the PPG sensor and the designed fastener, as done by Grabovskis et al. and Sim et al. [209,212]. Grabovskis et al. reported a coefficient of variation of less than 3% within a subject at a single measurement site when determining the optimal pressure needed to unload the artery. Sim et al. utilized the force transducer as a feedback mechanism within their system. A force regulator consisting of a thermo-pneumatic actuator would heat a layer of expandable fluid that pushed down upon the force transducer and PPG sensor. Without force regulation, the coefficient of variation between five posture stages was calculated at 50.9%; the addition of the force regulator dropped the coefficient of variation to 1.8%. Rhee et al. created a finger-ring system that utilized a polyester braided elastic band to mount their sensor [213]. This band applies a skin pressure of about 75 mmHg and, due to its compliance, holds the applied pressure steady while the finger is in different positions. Another force-measuring modality to integrate with a PPG sensor is a Force Sensing Resistor (FSR, Interlink Electronics, Camarillo, CA, USA), as done by Santos et al. [214]. Liu et al. implemented a fiber Bragg grating to measure applied pressure [215]. The Bragg wavelength shifts measurably with the pressure applied to the sensor. Utilizing this modality, they demonstrated that, when pressures are kept within the range of 5-15 kPa, there is less than 2% error in the SpO2 measurements.

Summary

This review focused on the impediments to remote and continuous PPG devices. PPG is a common tool used to monitor cardiovascular health in controlled environments; however, only a limited number of RCIM PPG devices have FDA approval. A review of the literature shows that the difficulty in creating such a device partially originates from the many sources of noise that researchers encounter. Here, we compiled and evaluated sources of noise found in published works and used an understanding of photoplethysmography and light/tissue interactions to summarize the findings and provide guidance for future PPG-based devices. As shown in Table 7, we found that sources of noise can be divided primarily into three categories: individual variation, physiological processes, and external perturbation. While many sources of noise had documented potential solutions, few had a comprehensive solution, and even our understanding of how conditions such as obesity affect the skin and cardiovascular monitoring is still developing. Future research towards an RCIM PPG device for true regulated health monitoring should incorporate larger studies that are inclusive of the noise sources and diverse populations discussed herein.
Assessment of the social attitude of primary school students The implementation of Curriculum 2013 at primary school level brings about its own problems to teachers. A serious problem emerges in the assessment, especially the assessment of core competence for the social attitude aspect. This problem arises because social attitude has many dimensions and requires judgments in diverse forms. In addition, the assessment of social attitude is focused on the affective sphere. The objective of this research is to assess the social attitude of grade IV and/or V students of primary school using three integrated instrument models: self-assessment (SA), peer assessment (PA), and observational assessment (OA). This research employed qualitative approach. The respondents were 58 students chosen by using cluster random sampling and purposive sampling techniques. The data were collected through direct disclosure questionnaire and observation, and analyzed descriptive quantitatively. The results of the research are as follows: (1) the component of honesty attitude is in category A (entrusted); (2) the component of discipline is in category A (entrusted); (3) the responsibility component is in category B (developing); (4) the politeness component is in category B (developing); (5) caring component is in category B (developing); (6) confidence component is category A (entrusted); and (7) students' social attitude is mainly in category B (good) which indicates that most students have good social attitude. Introduction There are three domains of learning outcomes that a student achieves in a learning process, namely: cognitive, affective, and psychomotor domains (Krathwohl, Bloom, & Masia, 1973, pp. 6-7).Cognitive domain is the result of learning that has something to do with memory, ability to think, or intelligence.In addition, affective domain refers to learning outcomes in the form of sensitivity and emotion that deals with attitude, values, and interests, meanwhile, psychomotor domain is related to a certain skill or ability of motion (Kurniawan, 2014, pp. 10-12).As a result of learning, these three domains require assessment, including integrated thematic approach model.A successful learning is defined by behavior (affective) as well as environment (Retnawati, 2016). One aspect that requires assessment is affective domain.The characteristics of the affective domain are attitude, values and interests (McCoach, Gable, & Madura, 2013, pp. 7-24).The attitude referred to in this study is the social attitude of elementary school students.Social attitude is an affective domain that needs to be assessed using an appropriate instrument. Social attitude can be seen as something associated to the attitude which is related to social conditions.It is an acquired tendency to evaluate social things in a specific way.It is characterized by positive or negative beliefs in, feelings of, and behaviors on a particular entity.It has three main components: emo-ISSN 2460-6995 Assessment of the social attitude… -13 Ari Setiawan & Siti Partini Suardiman tional, cognitive, and behavioral components.The emotional component is the feeling experienced in evaluating a particular entity.The cognitive component implies thoughts and beliefs adopted towards the subject, while the behavioral component is the action that results from a social attitude (Bernann, 2015, p. 13). LaPierre in Azwar (2015, p. 
5) contends his idea that social situation is an anticipatory pattern of behavior, tendency or readiness, predisposition to conformity in social situations, or simply social attitude is a response to conditioned social stimuli.In other words, social attitude is a pattern of behavior regarding conditioned social situations.Ahmadi (2002, p. 163) writes that social attitude is the consciousness of an individual who determines the real, repetitive actions of the social object.Thus, social attitude represents a person's response to social objects.In line with this idea, Gerungan (2004, p. 161) proposes that social attitude is the same and repeated ways of responding to social objects.It leads to the repeated ways of behaving toward a social object.As stated by Soekanto (Supardan, 2011), social objects relate to interpersonal behavior or social processes.It involves relationships between people or groups in social situations. Social attitude is a tendency to evaluate social things in a certain way.It plays an important role in children's development, because it shapes children's perceptions of the social environment and has a significant effect on behavior (Crano & Prislin, 2011, p. 19).Children who start interacting with the social environment will begin to have social attitude, and this also occurs in primary school-aged children. Considering the various understandings above, the writer concludes that social attitude is the awareness of a person in acting repetitively in real life to determine the response to social objects in his or her relation with others.Social attitude encourages a person to do things in a certain way as a form of his or her reaction to social objects. The evidence of children's behaviors these days is quite concerning.Primary school students are now generally less disciplined than they used to, and they have low care and responsibility.It is not in accordance with the ideal affective development of primary students.Ekowarni (2009) contends that there are some values related to social condition that should be instilled in primary school students, including: politeness, caring, cooperativeness, discipline, humility, even-temperedness, tolerance, independence, honesty, confidence, toughness, positivity, fairness, peacefulness, perseverance, creativity, citizenship, responsibility, and sincerity. In today's education practice, where social attitude actually becomes the core of education, the assessment has not yet been conducted.This is due to the teachers' limitations, especially in the assessment process.Teachers are more likely to spend their time on teaching regardless of the importance of making appropriate assessment.Stiggins's study shows that teachers should spend a third to a half of their available time to engage in assessment activities (Stiggins, 1999, p. 3).They are constantly making decisions about how to interact with their students, and decisions that are based on part of information that they collect about their students through classroom appraisals.In fact, they do not spend much time on assessment. The results of a study conducted by Zuchdi, Prasetyo, and Masruri (2012, p. 
68) show that the practice of assessing the learning outcomes especially in elementary schools, so far, is mainly focused on the cognitive assessment.The students' appreciation is shown by the rank and score in their examination.Although all educators know that the realm of education is cognitive, affective, and psychomotor (behavioral) aspects, in practice, the affective and psychomotor aspects are not given adequate attention, especially in assessing students (Khilmiyah, Sumarno, & Zuchdi, 2015).Teachers are not accustomed to assessing changes in the social attitude (affective spheres) of students of primary schools.This happens not because of the unwillingness of the educator, but because of the lack of educators' ability to describe the affective field of achievement indicators.As a consequence, the assessment does not reflect the students' overall abilities.REiD (Research andEvaluation in Education), 4(1), 2018 ISSN 2460-6995 14 -Assessment of the social attitude... Ari Setiawan & Siti Partini Suardiman It is clear that the assessment of social attitude cannot be done in the same way as that of the cognitive domain (such as by giving questions).Assessment of social attitude is more directed to recording physical activities related to social interaction, not merely the ability to answer a number of questions given. In the primary school education system which applies thematic approach, the social attitude aspect that is part of the affective domain must be assessed.This refers to the content standards in elementary schools that contain competence in social attitude reflected by the students showing honesty, discipline, responsibility, politeness, care, and confidence in interacting with family, friends, teachers, and neighbors and showing love to their own nation. The existing assessment system is simple without sufficient indicators.The teachers have put more focus on the assessment of the cognitive aspect which has clearer construct and criteria, while the affective aspect has more complicated construct and the teachers have insufficient competence in designing the instruments of the assessment.Another obstacle is the fact that designing learning objectives in terms of affective aspects is more difficult than designing the cognitive and psychomotor aspects (Mardapi, 2012).In other words, the affective domain is difficult to define and assess because it is latent. Based on the data collected by the researchers related to the assessment employed to assess the existing social attitudes, the models include observation methods (Syamsudin, 2015, p. 109;Waryadi, 2013, pp. 1-5), selfassessment of social attitude at the end of learning, and assessment developed by the teacher by referring to the technical guidance.These three assessments focus only on one method and tend to assess the apparent aspect of the student based on one point of view (teacher or student).This assessment also does not cover all of the aspects suggested in the core competencies of the social attitudes that the curriculum suggests.In addition, assessment which uses only one method will produce inaccurate conclusions on the social attitudes assessed. 
Assessment of social attitude is often done at the end of an instruction, regardless of the process.This is done by the teacher as a routine and an attempt to execute the obligation.This kind of assessment produces only a visible social attitude at the end of learning.This will result in insufficient information, in which the results obtained are only viewed from one section of the lesson.Assessment should be done during the teaching-learning process, from the start to the end based on real or authentic condition. In addition, an assessment applying three assessment methods (integrated) has not been conducted.Thus, this research is very important to do because by doing the assessment integrating self-assessment, peer assessment, and observational assessment, the results will be more adequate. Method This research is explorative descriptive research that describes the social attitude of elementary school students using three forms of self-assessment (SA), peer assessment (PA), and observational assessment (OA) instruments.The instrument validity was done using the confirmatory factor analysis (CFA), seen from the estimated loading factor per item.The result of the grain loading factor is between 0.31-0.99(> 0.30) which means that the item in social attitude instrument (SA, PA, and OA) is valid.The use of validity criteria was seen at the loading factor of at least 0.30 as the consideration referring to Azwar (2015, p. 143).The Alpha Cronbach approach was used to estimate the reliability of the instrument, obtaining the reliability value between 0.788 and 0.886 (> 0.70).This requirement refers to Nunally (1981), Sunyoto (2012), and Mardapi (2017) who state that an instrument is said to be reliable when the combined coefficient of grains (alpha reliability) is 0.70 or more. The population in this research was the students of elementary schools in Yogyakarta which have been implementing Curriculum 2013 for two years.A sample of 58 students of Kaliagung Elementary School in Sentolo, Kulonprogo Regency and Pakel Elementary School was established using the cluster The data were collected using questionnaires for SA and PA, and observation sheets for OA.The questionnaire and observation data were complementary and integrated.The data obtained were analyzed to describe the students' achievement in social attitude.The achievement in social attitude was divided into two parts: (1) achievement based on honesty, discipline, responsibility, politeness, care, and confidence components, and (2) the achievement of social attitude as the combination of all social attitude components, referring to the results of the social attitude of elementary students.There is also a categorization of social attitude as a whole by combining all of the three forms of assessment used in this research. The data analysis was done through categorization of assessment results using score, average, and standard deviation.The data were derived from overall scores obtained by the respondents.The data obtained were analyzed using the categorization suggested by Mardapi (2012) as stated in Table 1. 
This categorization was used to assess the social attitude in detail based on the honesty, discipline, responsibility, politeness, care, and confidence components.This categorization also helps the teacher in monitoring the students' ability to absorb thematic learning outcomes especially in the affective aspect.The assessment result of each component was then continued with the assessment of the students' social attitude, which was the integration of all components. To understand and interpret the assessment results of the social attitude using the three models in this research, the researcher made a description to get the understanding of the social attitude components performed by the students.The description helped the teacher to reveal the achievement of social attitude, as stated in Table 2. Students are always honest during the learning process and social interaction, disciplined in daily life, show responsibility for the tasks and duties.The students are also polite to the teachers and friends, show care to others and environment, and also show high confidence in the class.All of those aspects are entrusted. B (good) Students are often honest during the learning process and social interaction, disciplined in daily life, show responsibility for the tasks and duties.The students often show polite behavior to the teachers and friends, show care to others and environment, and also show high confidence in the class.All of those aspects are developed. CB (fair) Students sometimes show honesty during the learning process and social interaction, discipline in daily life, and responsibility for the tasks and duties.The students are sometimes polite to the teachers and friends, show care to others and environment, and also show high confidence in the class.All of those aspects start to emerge and be seen. KB (poor) Students have not shown honesty during the learning process and social interaction, have not been disciplined in daily life, and have not shown responsibility for the tasks and duties.The students are less polite to the teachers and friends.They also have not given care to others and environment or performed high confidence in the class.All of those aspects are not yet seen or observed. Students' social attitude (honesty, discipline, responsibility, politeness, care, and confidence) was derived from the categorization presented in Table 3.To figure out the meaning of the results of the social attitude assessment, Table 4 presents the description of each achievement. The next assessment was a test to know the effectiveness of the assessment done.The effectiveness is based on the criteria suggested by four experts at psychometrics, assessment, thematic learning of primary education, and psychological counselor.The consultation also involved three primary teachers.The data obtained were categorized and presented in Table 5 (Mardapi, 2012). Findings and Discussion The assessments were conducted in two qualified primary schools; they were Pakel Elementary School and Kaliagung Elementary School, involving 58 students.The data obtained were analyzed using the descriptive method and categorization.The assessment of these values was done using SA, PA, and OA instrument models.The results were analyzed to know the description of the assessment. Figure 1. 
Social attitude viewed from six components The results of the assessment were analyzed in two phases.The first phase presents each component.The honesty component or value of the primary school students is presented in Table 6.Table 8 and Figure 4 indicate that generally from the sample students, it can be seen that there are: 32 students (55.17%) in category B (responsibility is developing), 14 students (24.14%) in category A which means that responsibility is entrusted, 10 students (17.25%) in category C where responsibility starts to emerge, and two students (3.44%) in category D. 6 show that the results of the assessment of students' care are as follows: 32 students (55.17%) are in category B, 17 students (29.31%) are in category A, eight students (13.79%) are in category C, and one student (1.73%) is in category D. In addition, Table 11 and Figure The second phase of analysis in this research dealt with the description of the assessment results of students' social attitude in the thematic learning.The results are the integration of the three assessment models employed in this research (SA, PA and OA).The results are presented in Table 12.From Table 12 and Figure 8, it can be seen that the students' social attitude in thematic learning is as follows.Eleven students (18.96%) are categorized as SB or very good.There are 38 students (65.52%) included in category B or good.There are nine students (15.52%) considered as CB or fair in terms of their social attitude.There is no student categorized in category D or poor.An example of SB (very good) category is when the students are able to show honesty during a teachinglearning process and social interaction, they are disciplined in daily activities at school, they show responsibility for their tasks and duties, they show polite behavior to their teachers and peers, they care about others and environment, and they show confidence in class.All those aspects have already entrusted and instilled in students' daily life. As previously mentioned, the results of this research are divided into two parts.The first result is the assessment based on the social attitude components, covering honesty, discipline, responsibility, politeness, care, and confidence.The second result deals with the social attitude value along with the description which can be used to fill out the report of the learning outcome.Based on the components of assessment results, it can be generally said that confidence is included in category A or entrust (46 out of 58 students or 79.31%).In addition, 35 students show discipline as how it is described in category A, while honesty is reflected by 23 students and is considered as being instilled.There are 32 students showing responsibility, 30 students showing care, and 32 students reflecting politeness.These three values are in category B (developing). Another interesting result is that there are seven students (12.06%) who are categorized in category D. They have not shown honesty in their daily life and social interaction at school.The dishonesty is shown when they copied other students' work.It is in line with the idea of Koellhoffer (2009, p. 27) that honesty deals with avoiding plagiarism, including taking others' idea or answers without permission during the learning process, test, etc. 
The results also present that the social attitude assessment is integrated components developing the attitudes such as honesty, discipline, responsibility, politeness, care, and also confidence.From the sample of 58 students, 11 (18.96%) are included in SB, or, in other words, their social attitude is very good.In addition, 38 students (65.52%) are considered to be good.The social attitude is the result of responses to the social stimuli contained in thematic learning.This is supported by LaPierre in Azwar (2015, p. 5) who proposes that social situation is a pattern of behavior, anticipative tendency or readiness, predisposition to adapt to social situation, or, simply social attitude is a response towards conditioned social stimulus. From the assessment results of the students' social attitude, it can also be inferred that their social attitude turns out to be varied.There are 36 (65.52%) students in SB (very good) category and 11 students (18.96%) in B (good) category.From that result, SB (very good) category has deep meaning. The results can also be used in the report of the learning outcomes of core competence in social attitude aspect or Kompetensi Inti (KI)-2 (Core-Competence 2) and become the evaluation material for thematic learning.The assessment results obtained are also used by teachers to fill out the report of the learning outcomes in the mid semester and the end of the semester. This research also yields effectiveness from the assessment conducted.There are 79% of the teachers who claim that the assessment involving three different models in this research is effective.This indicates that more varied and integrated methods can result in more accurate assessment results.This shows that this instrument is useful in helping teachers to assess social attitude as an affective component of integrated thematic learning outcomes in primary school. Conclusion The results of this research are divided into two parts.The first result is the assessment based on the components of social attitude covering honesty, discipline, responsebility, politeness, care, and confidence.The second result deals with the social attitude value along with the description which can be used to fill out the report of the learning outcome. For teachers, this assessment can be used to fill in the report of students' learning outcomes in the affective domain or KI 2 (Core-Competence 2).For parents and students, the assessment results are helpful in finding out the description of social attitude that has been achieved by students.This description can be used as an introspection and improvement of students' social attitude. Suggestion The comprehensive results of this research may become a guidance for the teachers to assess students' social attitude.The existing assessment can also become an evaluation towards the learning practice.The future research should reveal other components of social attitude as the results of learning process. the average score of all students in a class SBx: standard deviation of the overall score of students in one class X: score achieved by students Figure 4 . Figure 4. Histogram of results of the students' responsibility assessment 7 indicate that from the sample students involved, the results of the confidence assessment are as follows: 46 students (79.31%) are in category A or instilled, nine students (53%) are in category B or developing, one student (1.72%) is in category C, and two students (3.44%) are in category D or not showing self-confidence. Figure 7 . Figure 7. 
Histogram of the results of the students' confidence assessment

Figure 8. Histogram of the results of the students' social attitude assessment

Table 1. Categorization of components of students' social attitude

Table 2. Description of students' social attitude achievement. Students have not yet shown social attitude (honesty, discipline, responsibility, politeness, care, and confidence*) in daily life and interaction at school. *Choose one based on the component being assessed.

Table 3. Categorization of students' social attitude

Table 4. Description of the achievement of students' social attitude

Table 5. Categorization of the instruments' effectiveness

Table 6. Social attitude value: Honesty

Figure 2. Histogram of results of the students' honesty assessment

Table 6 and Figure 2 show that, generally, the value of honesty in thematic learning from the sample of 58 students is as follows: 23 students (39.66%) are in category A (entrusted), 16 students (27.58%) are in category B (honesty is developing), 12 students (20.68%) are in category C (honesty starts to be observed), and seven students (12.07%) are in category D, which means that their honesty has not been shown. The next value is discipline. The detailed results can be seen in Table 7.

Table 7. Social attitude value: Discipline

Figure 3. Histogram of results of the students' discipline assessment

Table 7 and Figure 3 indicate that the discipline of the sample students in thematic learning is categorized as follows: 35 students (60.34%) are in category A (entrusted), 19 students (32.76%) are in category B (developing), four students (6.90%) are in category C, and no student is in category D.

Table 9 and Figure 5 indicate that, of the sample students involved, 30 students (51.73%) are in category B (developing) and 16 students (27.58%) are in category A, which means that politeness is already instilled. In addition, 10 students (17.25%) are in category C, and two students (3.44%) are in category D, which means that these students have not shown polite behavior in thematic learning.

Table 10. Social attitude value: Care

Table 12. Description of the students' social attitude assessment
Effectiveness of point-of-use (POU) filter system for removal of contaminants from water Groundwater supplies most of the drinking water in the small settlements in the Danubian plain region of Bulgaria. The region is mostly agricultural, leading to chemical pollution, mostly with nitrogen, of the shallow groundwater. Direct health concerns, linked to nitrates contamination of water, determine the increased popularity of drinking water point-of-use (POU) filtration pitchers. The objective of this study was to evaluate the effectiveness of a commercial pitcher filtration system in removing select pollutants, such as metal ions, namely, copper and iron, nitrate nitrogen, and suspended solids (algal culture of Monoraphidium contortum) from augmented tap water. In addition, we have studied the effects on electrical conductivity and pH of water. The POU system of choice was a widely used brand name in Bulgaria, providing cartridges that fit most of the European pitcher filter brands. We have evaluated the performance of the filters at three different exploitation intervals – pre-washed new filter (0%), 50%, and 100% of the exploitation capacity set according to manufacturers’ claims of recommended volume of water, as well as at different contact times between the water and the filter media. The results indicate that the efficiency of the filters diminishes with aging and increases with increasing contact time – multiple filtrations. The efficiency in terms of electrical conductivity and amount of iron decreases proportionally with filter age, and in terms of phosphates, the maximum effect is observed in filters at 50% capacity. Water filtration reduced water conductivity by 12% in a single filtration and stabilized the pH values towards the neutral range. The effect on pH values is inversely proportional to the buffering capacity of the water. The filters removed 85–98% of copper ions and 20% of iron ions. Nitrate removal efficiency averages 40% and doubling the contact time increases the efficiency to 70%. The efficiency of removing suspended solids is on average 17%, mainly due to the small size of the particles. Overall, the POU systems are an effective way to purify water at home. The filter cartridges effectively reduce contaminants, such as metals and nitrates, which are particularly problematic in areas supplied by groundwater sources. Introduction Apart from the availability of water, its safety is a top priority and a challenge for the existence of modern society [1].Due to its chemical properties, water is often considered the "universal solvent", so it can easily get enriched with a variety of substances, of natural or anthropogenic origin.Water naturally cleans itself via self-purification, filtration through the ground, and evaporation via the water cycle, but the continuous disposal of wastes contaminates the waterways and may compromise its quality.Providing settlements with drinking water that meets health requirements according to modern regulatory documents is becoming an increasingly difficult task.This is due to the pollution of natural waters used for drinking water supply [2]. 
The term potable water refers to water for drinking and domestic purposes, regardless of its origin and whether it is supplied through a centralized water distribution network or by any other means such as tankers, bottles, boxes, etc. [3]. Potable water is usually obtained from surface (springs, rivers, lakes, and dams) or underground sources and is considered safe when it does not contain microorganisms, parasites, or chemical, radioactive, and other substances in numbers or concentrations that represent a potential danger to human health [3]. According to Bulgarian legislation, the water supply operators are responsible for the quality of drinking water supplied to the population, as well as for the maintenance and operation of water supply systems and the provision of water supply services in the designated territories [4,5]. In recent decades, due to increased migration, the population of the smaller settlements in parts of the country has been decreasing, which slows down investments in infrastructure [2]. Thus, water supply operators are facing very serious problems and challenges: use of outdated and depreciated water supply facilities; water sources located in unsuitable places in arable agricultural land; increased content of nitrates in groundwater used for drinking water supply; and a lack of established sanitary protection zones around the water sources. According to Arlosoroff [6], shallow groundwater sources, such as wells, are the most susceptible to contamination from the adjacent territories. Due to periodically occurring or permanently existing problems in the drinking water supply in the country, interest in devices and systems for decentralized additional purification of drinking water (point-of-use filters) has increased in recent years [7]. They are used to improve tap water: some people are in search of better-tasting water, while others look for a way to avoid health concerns related to tap water consumption [8]. Addressing the quality of drinking water directly at the point of use is considered economically beneficial, given that only 3% of the total daily water consumption per person is used for drinking. The main element of point-of-use jugs is the filter cartridge containing a mixture of sorption materials, which can absorb certain impurities from the water. Point-of-use systems usually combine several principles of purification, such as mechanical filtration for removing physical impurities such as fine sand, detached pipe scale, silt, and plankton found in the water; activated carbon filters; ion exchange resins; filters using the reverse osmosis process; and ultraviolet light [9]. Although most point-of-use filters have a similar composition and arrangement, the selection of the proper filter should be based on the water quality and the specific pollutants in the particular water supply network [10]. Despite Maletsky's conclusion that point-of-use filter pitchers represent the most cost-effective system for domestic drinking water treatment [11], previous studies indicate that the majority of these pitchers may fall short of achieving water quality standards and that advertising information provided by manufacturers is not consistently accurate [12,13,14].
The purpose of this study is to determine the effect of the exploitation time of the point-of-use filters, and the contact time of the water and the filter loading (media) on the efficiency of water purification.To achieve this goal the following tasks were performed: 1. Study the influence of exploitation intervals (filter age) on the efficiency of water purification from suspended solids, metal ions and nitrates/phosphates. 2. Investigate the effect of the contact time between the water and the filter media on the removal efficiency of major impurities from the model water through sequential filtration. Materials and methods During the experiment, we tested single-brand sorption-type filters for their efficiency in the purification of water from impurities.While some general information was available from the filter manufacturers, the internal composition of the pitcher filter media was further investigated after the dismantlement of the devices.The cartridges contained a mixture of exchange resins, which visually appeared to be comprised of two components (differing in size and colour), and coconut shell activated carbon in a ratio of approximately 1:1.5 (w/w), evenly distributed in a fibrous material for prevention of compaction and maintenance of a uniform water flow through the filter.As a means of preventing the removal of filter material, the cartridges contain polypropylene mesh screens, at their top and bottom, with an eye opening of 80 μm.The water flow rate of the water through the cartridges was on average 220 ml/min (200-230 ml/min) and, according to the manufacturer, the service life of the cartridges is 200 litres. The experiments were based on filtering of the model water samples through filter pitchers at three different exploitation intervals (filter age)pre-washed new filter (0%), 50% (100 litres), and 100% (200 litres) of the exploitation capacity set according to manufacturers' claims of recommended volume of water.Additionally, we investigated the effect of the contact time between the water and the filter media.To address this issue, we have made 10 successive filtrations of the same model water sample through the filters.Water (50ml) was taken for analysis after 1 st , 2 nd , 3 rd , 5 th , 7 th , and 10 th filtration, corresponding to up to ten times increase in the contact time between the water and filter media.The sequential filtration was performed with each set of model water on different filters.Thus, we used a set of filters for the metal retention analysis, and other sets for the analysis of the nutrients and suspended solids retention efficiency.Simultaneously to the retention analyses, we have measured the pH and electrical conductivity of the water samples. 
In the present work we tested the purification efficiency by the following indicators:

- Suspended solids (SS). Assessed by the retention of the alga Monoraphidium contortum (Thuret) Komárková-Legnerová 1969 grown in laboratory conditions. The retention efficiency was determined by the change in the amount of Chlorophyll-a in the phytoplankton, according to the international standard ISO 10260.

- Metal ions: copper and iron. The amount of copper and iron in the water was determined spectrophotometrically on a WTW Photo Lab 7100 Vis, by the Spectroquant methods 14767 and 14761 of Merck. Copper was selected as a representative of contamination of drinking water by heavy metals. Since the concentration of copper in tap water is below the detection level of the method (0.01 mg/l), a model water was created with a copper concentration of 4.5 mg/l. As a source of copper, we used copper sulphate. The model water was prepared immediately before the experiment from a stock solution with a concentration of 1000 mg/l. Dechlorinated tap water (aged 48 hours) was used as dilution water. In the iron experiment, we used naturally enriched tap water after a 2-day stagnation period in the old pipes of the distribution system. Subsequently, the measurements were done in the first-draw sample (1.5 litres) of the stagnant tap water. The concentration of iron in the first-draw tap water sample was on average 0.79 mg/l (0.45 to 1.12 mg/l). Copper was chosen to test the manufacturers' claims for reduction of heavy metals, and iron because it significantly affects the organoleptic characteristics of drinking water.

- Nitrate nitrogen (NO3-N) and phosphate phosphorus (PO4-P). The retention efficiency of the anions was assessed in aged aquarium water naturally enriched with nutrients, with average concentrations as follows: NO3-N, 9.5 mg/l and PO4-P, 1.3 mg/l. The amount of the nutrients in the water was determined spectrophotometrically on a WTW Photo Lab 7100 Vis, by the Spectroquant methods 14773 and 14848 of Merck.

- Additionally, to determine the effect of filtration/sequential filtration on the pH and the electrical conductivity of the water, their values were determined before and after each filtration. A pre-calibrated pH meter Combo check HI98129 (Hanna Instruments) was used for simultaneous reading of pH, electrical conductivity and water temperature.

The efficiency of water purification from pollutants (α) was determined by the formula

α = (Cs - C) / Cs × 100 %,

where Cs is the concentration of a given pollutant in the source water and C is the concentration of the pollutant in the filtrate.
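To make the calculation concrete, the short Python sketch below applies this efficiency formula to a few illustrative source/filtrate concentration pairs. The filtrate values are hypothetical placeholders chosen only to mirror the approximate single-run efficiencies reported in the Results; they are not measurements from this study.

```python
def removal_efficiency(c_source, c_filtrate):
    """Purification efficiency alpha in percent: (Cs - C) / Cs * 100."""
    if c_source <= 0:
        raise ValueError("source concentration must be positive")
    return (c_source - c_filtrate) / c_source * 100.0

# Hypothetical example readings (mg/l) for a new filter; not data from this study.
readings = {
    "Cu":    (4.50, 0.09),   # ~98% removal
    "Fe":    (0.79, 0.59),   # ~25% removal
    "NO3-N": (9.50, 5.70),   # ~40% removal
}
for pollutant, (cs, c) in readings.items():
    print(f"{pollutant}: alpha = {removal_efficiency(cs, c):.1f}%")
```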
Results and discussion

The water samples were analysed for selected water quality parameters before and after passing through the water filters. The effects of water filter age and contact time were evaluated separately, based on the parameters of electrical conductivity and pH, the concentration of metal ions, and the anions nitrate and phosphate. To assess the effect of filter aging on the physicochemical indicators of the water and the efficiency of removal of pollutants (purification efficiency), we used filters at three distinct ages: new filters (0% lifespan); filters at 50% lifespan (100 litres); and exhausted filters, at 100% of their projected lifespan, according to the exploitation time recommended by the manufacturer.

Influence of the pitcher filters on the electrical conductivity (EC) and pH of water

The EC of the model waters ranged between 97 and 299 μS/cm. The differences in the EC values were due to the different compositions of the model waters, as some of them were prepared by augmentation of tap water and others were aged aquarium water. The EC of the filtered water was reduced to a different degree, depending on the age and contact time (Figure 1), due to the removal of a part of the conductive ions by the filter. A similar effect is reported by Skoczko & Szatyłowicz [15], who describe a decrease in mineralization due to the precipitation of mineral substances in the pores of the filter media. The effect on the EC of the filtrates ranged between a 10% and 15% reduction for a single run through the filters. The new filters had a stronger effect on the EC of the water, with a subsequent reduction of efficiency with filter age. In the experiment with successive filtration of the model waters, the results show increased retention of ions with each passing of the water through the filters. The lowest efficiency after the tenth filtration is observed in the test with the filter at 50% exploitation capacity: a 25.8% reduction of the initial EC. Although the new filter and the one at 100% exploitation capacity had more than a 30% difference in their single-run efficiencies, they showed approximately the same effect on the EC after ten runs of the model waters through the filters (Figure 1).

The pH of the model waters ranged between 6.4 and 7.09 (average 6.76). The slightly acidic nature of the model waters was due to the addition of copper sulphate. An increase in pH was observed after each passing of the model waters through the filters, up to the 7th consecutive filtration. We observed a stronger increase in the pH values in model waters with lower initial EC values, supposedly due to the higher buffering capacity of the model waters with higher initial EC values. Thus, in the experiment with aquarium water, which had the highest EC, the effect on pH was negligible (0.06 pH units). In the rest of the experiments, the effect was an increase by an average of 0.4 pH units, from an average of 6.76 to 7.16 (maximum 7.35). There were no significant differences in the influence of filter age on the pH values. Much stronger effects on pH are observed by Doré et al. [16], who reported both increases and decreases in the pH values of the treated water, depending on the studied type of faucet-mounted point-of-use filter or pitcher filter jug. According to the authors, the different changes in pH values are due to the different filtering media. Krolag et al. [17] also report a decrease in the pH of filtrates from pitcher filters designed to decrease water hardness. Apart from the difference in pH values, they also observed a stronger decrease in conductivity (35%) for groundwater and well samples compared to our results.
Removal efficiency of copper and iron via pitcher filters

Figure 2 shows the removal efficiency of copper and iron ions by filters of different ages. The pitcher filters have a very high elimination efficiency for copper ions in water, with both the new filters and those exploited to 50% of their capacity averaging 98% removal after the first filtration. At the end of the exploitation period (after 200 litres of filtered water) the efficiency of copper removal was just 55%. Our results are consistent with the reported range of efficiency (65% to 99.8%) for the elimination of copper ions with pitcher filters [18]. The removal efficiency of iron is much lower: even with new filters the reduction of iron from the water is ≈25% in a single filtration. This effect gradually decreases with filter age, so that at 50% of their capacity the reduction is 16%, and for the exhausted filters (100%) it is just 6%. The decrease in the removal efficiency of copper and iron over time is a consequence of the depletion of the sorption and ion exchange capacity of the media in the filters [19]. Although the model water in the experiment has concentrations of copper that are unlikely to be encountered in tap waters, the approximately 50% loss of efficiency of the filters by the end of their exploitation lifespan, as well as the more prominent decrease in the case of iron, shows that people should avoid using exhausted filters to prevent potential health problems.

In the experiment with sequential filtration of the model waters, the efficiency of the new filter and the one at 50% lifespan reached 100% elimination of copper from the water after three filtrations. The lowest efficiency is observed for the exhausted filters, but even they exceeded a 95% reduction of copper ions after the 5th filtration and reached 99.7% after the 10th filtration. Different results were observed in the experiments with the sequential removal of iron in comparison to copper. The strongest loss of efficiency was observed in the 1st and 2nd filtrations: a 55% loss in comparison to the new filters. With sequential filtration, the old filters gain efficiency, and by the 10th filtration the difference with the new filters is reduced to 33%. Doré et al. [16] report that all tested pitcher filters increased the iron, magnesium, sulphur, and zinc concentrations in the filtrate water, due to their presence in, and subsequent release from, the filter media. The results from the sequential filtration experiment are in accordance with the previous research of Ndé-Tchoupé et al. [19] and Barkouch et al. [20], who showed that, apart from the material filling the cartridge, the efficiency of metal removal can be related to factors such as an increased length of the filtration bed, i.e. an increased contact time between the water and the filter media. In the report of Doré et al. [16], where pitcher filters were used to remove lead from drinking water, the removal efficiency was in the range of 10.9% to 92.9%.
Removal efficiency of phosphate phosphorus and nitrate nitrogen via pitcher filters

Figure 3 shows the removal efficiency of PO4-P and NO3-N by filters of different ages. In our experiments, we observed an increase of 2.9% in the concentration of PO4-P after filtration through pre-conditioned new filters. This may be due to the release of phosphorus from the filter media in the initial period of its activation. The maximum amount of phosphorus is released by the 3rd sequential filtration, after which it is sorbed again by the filter media, and by the 10th filtration a net removal of phosphates is observed. This shows that the prescribed initial soaking and two or three rinsing/washing cycles may not be enough for the preconditioning of the filters. To achieve the latter, it may be necessary either to prolong the soaking time or to increase the number of washing cycles of the filters.

Due to the above-mentioned peculiarities of the new filters, the highest removal efficiency during single filtration was observed in the experiments with filters at 50% of their lifespan: 22%. The efficiency decreases over time, and for exhausted filters it is just 8.2%. The efficiencies of nitrate removal after a single filtration through a new filter and through exhausted filters are similar, approximately 40%. For this reason, we did not run an experiment with filters at 50% of their lifespan. In the sequential filtration experiments, the filter at 50% lifespan reached an efficiency of 42% phosphate reduction by the 10th filtration. The efficiency of the exhausted filters (21%) by the 10th filtration barely reaches the reduction of phosphates achieved in the single filtration experiment with filters at 50% lifespan. In the sequential filtration experiment with nitrates, the new filter reached an efficiency of over 75% after just two filtrations, while the exhausted filter exceeded 70% efficiency only after the 7th filtration. Contrary to our results, Al-Haddad et al. [21] observed a reduction in the concentrations of major cations and anions, including NO3, only in the case of reverse osmosis, but not in the case of pitcher filters. Krolag et al. [17] report more than an 80% reduction in the amount of nitrates in the treated waters after filtration with pitcher filters. Their results are close to the value we obtained after two consecutive filtrations of the model water. The authors did mention that the filters they used contain, as in our experiment, a mix of activated carbon and ion exchange resin, but judging by the results, they might have used filters with a higher contact time or a higher volume of the filter media.

A common problem in the drinking water supply is the occurrence of NO3 in the shallow ground waters situated above the impermeable rocky formations in regions with intense agriculture. Most vulnerable are the flat terrain areas with impaired drainage, where the applied artificial fertilizers are predominantly transferred downwards, eventually reaching the first groundwater layer. Our results show that the pitcher filters can be used to improve the drinking water quality in villages with increased concentrations of nitrates in the supplied drinking water, or in the case of drinking water supply from home wells.

Removal efficiency of suspended solids (SS) via pitcher filters

Figure 4 shows the removal efficiency of SS during filtration through filters of different ages.
A laboratory culture of M. contortum was used as model water in the experiment. After a single filtration of the model water, the new filters had the highest removal efficiency, reducing the amount of SS by ≈25%. The efficiency gradually decreases: for filters at 50% lifespan it is ≈20%, and for the exhausted filters it is just 6.5%. This corresponds to an overall reduction in efficiency of about 19% by the end of the prescribed lifespan of the filters.

In the experiment with successive filtration of the waters, the efficiency of the filters differed significantly and abnormally from the single-filtration effects on the SS values in the water. Strong fluctuations in the retention efficiencies of SS were observed, even for successive filtrations through a single filter. As we used an algae culture of M. contortum as suspended sediments in our experiment, and because the mesh screens at the top and bottom of the filters have an eye opening of 80 μm, it is not surprising that the efficiency is variable and inconsistent. Factors such as the different sizes of the algae, the packing density, and the loading with suspended solids from previous filtrations affect the filter's suspended solids retention efficiency. Thus, the retention efficiency of the new filter was just above 25% and increased until the 5th sequential filtration, with a consequent sharp decline in the efficiency of Chl-a retention, which by the 10th filtration fell below 10%. Similar results were observed with the exhausted filters (100%), with a decrease to approximately 10% by the end of the sequential filtration. The 10% retention of the algae seems to correspond to the size fraction of the population of M. contortum which is permanently retained by the filters, while the smaller cells pass through the filters, with some being retained by the fibrous material in the filter. Only in the experiment with the filters at 50% lifespan did we observe a steady increase in the retention efficiency after the second filtration. This may be due to the different packing densities of the filter media. Al-Haddad et al. [21] report that measurable values of turbidity and TSS were observed in the outlet samples of all types of water filters except for the filter cartridge with 5 μm mesh screens. This shows that the efficiency of the pitcher filters in retaining suspended solids depends on the mesh screen size and not so much on the filter media.
Conclusions

This study demonstrates that the use of pitcher filters results in a notable decrease in conductivity values and in the concentrations of various pollutants; more specifically, the concentrations of metals such as copper decreased by 98% and iron by 25%. Additionally, the major anions exhibit reductions, with NO3-N decreasing by 40% and PO4-P by 9%. The application of pitcher filters also shows a moderate impact on suspended solids, with a 17% decrease in SS. Furthermore, the filtration process contributes to the stabilization and elevation of pH values, and this effect is inversely proportional to the water's buffering capacity. The removal efficiencies/observed effects increase with prolonged contact time between the water and the filter media, while they decrease with the aging of the filters. The decline in removal efficiency with filter age is lowest for nitrates, while for the other investigated indicators it ranges from 30% to 60%, especially in the second half of the filters' exploitation lifespan. In summary, the utilization of pitcher filters proves to be an effective method for enhancing water purification at home. These filters have the potential to reduce pollutants, notably metals and nitrates, which pose particular challenges in regions relying on water from underground sources.

Figure 1. Effect of filtration (percent reduction) on the electrical conductivity and pH of water. Left panel: relationship between the effect on EC and the number of sequential filtrations. Right panel: average values of water pH after different numbers of sequential filtrations.

Figure 2. Removal efficiency of metals. Left panel: removal efficiency of copper with filters of different ages and after sequential filtrations. Right panel: average efficiency of iron removal, with standard deviations, after sequential filtrations.

Figure 3. Removal efficiency of dissolved ions. Left panel: removal efficiency of phosphates with filters of different ages and after sequential filtrations. Right panel: average efficiency of nitrate removal with filters of different ages and after sequential filtrations.

Figure 4. Removal efficiency of suspended solids with filters of different ages and after sequential filtrations.
Olfactory Ensheathing Cells for Spinal Cord Injury Olfactory ensheathing cells (OECs) are glia reported to sustain the continuous axon extension and successful topographic targeting of the olfactory receptor neurons responsible for the sense of smell (olfaction). Due to this distinctive property, OECs have been trialed in human cell transplant therapies to assist in the repair of central nervous system injuries, particularly those of the spinal cord. Though many studies have reported neurological improvement, the therapy remains inconsistent and requires further improvement. Much of this variability stems from differing olfactory cell populations prior to transplantation into the injury site. While some studies have used purified cells, others have used unpurified transplants. Although both preparations have merits and faults, the latter increases the variability between transplants received by recipients. Without a robust purification procedure in OEC transplantation therapies, the full potential of OECs for spinal cord injury may not be realised. Active lifelong neurogenesis is a remarkable feature of the mammalian olfactory system. Primary olfactory neurons are continually replenished by neural stem cells lining the basal layer of the olfactory epithelium [1][2][3][4][5] . This neural regeneration, particularly the guidance of axons from their origin in the peripheral nervous system to their targets in the central nervous system (CNS), has been accredited, at least in part, to a unique type of glia called olfactory ensheathing cells (OECs) 3,6,7 . These cells are present in the lamina propria ( Figure 1) of the olfactory mucosa (OM) [8][9][10][11] , as well as the outer layers of the olfactory bulbs, the inner and outer nerve fibre layers 3,9,12,13 . OECs ensheathe multiple nonmyelinated primary olfactory axons, in bundles known as fascicles, as they exit the peripherally-located olfactory epithelium ( Figure 1). Spinal Cord Injury In contrast to the olfactory system, the spinal cord is limited in its regenerative capacity. Spinal cord injuries not only result in a loss of sensation and movement control, but also frequently in loss of bladder, bowel, and sexual function, as well as thermal regulation and blood pressure control. In high-level injuries (e.g. cervical 3-5), breathing may not be possible without an external aid. Injuries of this nature confine its victims to wheelchairs with the need for carers to assist them. However, with advances in research and OEC transplantation emerging strongly as a potential treatment, a cure for spinal cord injury is possible. OECs in Spinal Cord Repair Over the years, OEC transplantation has advanced to the forefront of therapeutic innovation for spinal cord repair 36,37 . Although they may be appropriate for the treatment of spinal cord injury, transplantation studies have reported variable findings. While many studies have reported improved neuroanatomical and functional outcomes 22,38,39 , their findings have also identified limitations in the cell survivability and functionality of transplanted OECs within damaged nervous tissue [40][41][42] . While some have likened OECs to meningeal fibroblasts and bone marrow stromal cells in their capacity for neural repair 43 , others have observed OECs to exhibit similar myelinating abilities to Schwann cells 44 . Conversely, a few authors have also stated that OECs from adult rats do not form myelin nor exhibit a Schwann cell-like relationship with axons 45 . 
These variable outcomes may be due to a number of reasons, one of which pertains to cellular purity: the proportion of OECs within a cell culture preparation prior to transplantation.

Cell Types in OM and Bulb Biopsies

When biopsies are derived from the OM or olfactory bulb, other cell types residing in the anatomical niche of OECs appear in subsequent cultures. In order to separate these heterogeneous cells from OECs, an in vitro method for OEC identification is required. However, this can only be accomplished with a clear understanding of the OM and the olfactory bulb, and their respective cellular constituents. In the OM, various cell types can be found in its two layers, the olfactory epithelium and lamina propria. The olfactory epithelium includes olfactory receptor neurons, globose and horizontal basal cells (neural stem cells), sustentacular cells (non-neuronal supporting cells), and Bowman's gland and duct cells. The lamina propria includes olfactory nerve fibroblasts, mesenchymal stem cells [46][47][48], OECs, and Schwann cells of the trigeminal nerve [49][50][51][52]. Resident macrophages may also be present within both the olfactory epithelium and lamina propria. In contrast, cultures derived from the olfactory bulb typically contain fewer cell types. Although OECs are most dominant, meningeal fibroblasts and astrocytes are also present 53, along with branches of the trigeminal nerve with its Schwann cells passing adjacent to the nerve fibre layer 54 (Figure 2).

OECs from the OM Versus Olfactory Bulb

The differences in cellular populations have given proponents of olfactory bulb biopsies reason to support their preference, since the alternative can strain the OEC purification process. However, harvesting biopsies from the bulb requires major intracranial surgery and presents a risk of partial to total anosmia post-operation. Even a small reduction in odorant sensitivity results in a substantial loss of function 55. As such, most researchers find this approach unacceptable 33,56, and prefer the less invasive procedure of intranasal endoscopy, which is used routinely to obtain mucosal biopsies [57][58][59][60][61][62]. Not only is the use of OM-OECs advantageous from a surgical and patient olfactory health perspective, there is evidence that these cells may be more beneficial for cellular therapeutic application than their olfactory bulb counterpart. OM-OECs have demonstrated longer proliferation duration in vitro 63,64, higher secretion levels of neurotrophic factors (e.g. brain-derived neurotrophic factor, nerve growth factor (NGF), and neurotrophin-3 (NT-3)) in vivo 65, as well as increased capacity for migration, cavity prevention, and axonal growth in spinal cord injury rat models 25. Moreover, cadaveric OM was shown to be a more reliable source of human OECs than the olfactory bulb, with the efficacy of culturing OM-OECs being similar to that of living patients, even when procured 180 minutes following cardiac arrest 66. Unfortunately, despite these positive characteristics, OECs remain difficult to identify in mixed culture populations due to the potential presence of other cell types, particularly when derived from the mucosa 67.

Purity of OEC Preparations

To date, a number of methods have been developed to identify and purify heterogeneous cultures to obtain highly purified OEC cultures.
Such methods include, but are not limited to: immunopanning, fluorescence-activated cell sorting (FACS), differential adhesion, differential trypsinization, and selective media 68 . However, these processes often rely on immunocytochemistry to identify OECs after purification of any given olfactory cell culture or transplant preparation, a technique where specific cell populations are identified by unique markers expressed at distinct levels and/ or patterns. Thus, for this method to be successful, at least one, if not more, markers unique to OECs are necessary to assess their degree of purity in any olfactory cell culture. At present, three markers are considered to be the benchmark for OEC identification in vitro: glial fibrillary acidic protein (GFAP), S100b, and p75 neurotrophin receptor (p75NTR) [69][70][71] . Among them, p75NTR is the most widely used, whether it be for mouse 26,64 , rat 72,73 , canine 74,75 , porcine 76 , primates 77 , or human OECs 78,79 . Unfortunately, several problems exist with such a reliance on this neurotrophin receptor, the most concerning of which is that olfactory fibroblasts 58,69 , astrocytes [80][81][82] , lamina propria mesenchymal stem cells 46,48 , and Schwann cells have all been reported to express p75NTR in situ and/or in vitro under certain conditions 60,[83][84][85][86] . Aside from the fact that p75NTR is not expressed by OECs of the inner nerve fibre layer of the mouse or rat olfactory bulb in situ 8,85 , a number of research groups have found that the majority of freshly dissociated OECs do not appear to express p75NTR, whether it be from the olfactory bulb or OM 87 Although the expression of p75NTR in OECs appears rather inconsistent, other cell types, particularly Schwann cells, seem to have little to no problem. In fact, some purification protocols have gone so far as to implement p75NTR specifically for Schwann cell selection 89 . Therefore, markers that are commonly used to identify OECs may not be as specific as once thought, since the two remaining OEC phenotypic markers, GFAP and S100b, also appear to immunolabel Schwann cells [90][91][92] . Therefore, there appears to be a paucity of defined markers that can unequivocally and consistently distinguish OECs from other cells in vitro. Of course, there are always two sides to an argument. In the case of Lakatos et al. (2000) 93 , purified olfactory cells maintained the ability to intermingle with astrocytes using purification protocols involving either the O4 antibody, p75NTR antisera by FACS, or magnetic nanoparticles conjugated to anti-p75NTR. This result may seem to support the argument that current OEC identification and purification techniques are indeed sufficient. However, from a clinical perspective, a sufficient method may not necessarily be an effective method. If a more effective and reliable identification and purification method of OECs could be developed, cells of high purity can be consistently produced to increase patient safety and perhaps reproducibility of clinical outcomes. Will OECs alone suffice? There are many questions that cannot be answered until an effective OEC identification and purification method is developed. One question of paramount importance is whether or not OECs are the optimal cellular composition for transplantation. If not, then can the addition of other cell types be used to enhance their biological performance? 
With a number of different cell types existing alongside OECs in situ, it is possible that the repair capacity of OECs may be influenced by the presence of other cells. Geoffrey Raisman and colleagues 94 , as well as others 95 have argued that olfactory nerve fibroblasts should not be perceived as contaminants targeted for removal. Instead, they claim that the cells are actually of great importance due to their critical roles in assisting the growth-promoting abilities of OEC transplants in rats 63,96,97 . The fibroblasts are thought to provide structural support by producing a semi-solid gel-like matrix in which the transplant cells become embedded 94 , and associate with the OECs in a manner similar to a perineural-like outer sheath 98 . Interestingly, the findings of an OEC transplantation study in dogs suggested that the extent of recovery did not appear to depend on the proportion of p75NTR-positive cells (OECs) 74 . From this, they postulated that the effects of OM cell transplants may not solely be elicited by the OEC component of the transplant, or that only a threshold number of OECs, which may be quite low, is required in the transplantation suspension for a therapeutic effect to be observed. However, whether or not olfactory nerve fibroblasts, or other olfactory cells, assist human OECs in their reparative endeavours remains uncertain. Nevertheless, due to the perceived necessity of olfactory nerve fibroblasts, purification procedures were waived in a recent human clinical trial, resulting in the co-transplantation of other cell types, mainly fibroblasts, alongside the OECs 78 . Thus, the degree of recovery that can be attributed solely to OECs cannot be ascertained. To resolve the question of which cells are required for therapeutic efficacy, purified cultures of OECs and fibroblasts must first be attained before the question of cellular composition can be addressed. This will allow the contribution of each cell type to be systematically tested. Only then can the potential of the various olfactory cells to induce functional recovery be realised. Inconsistencies Within and Between OEC Studies To complicate matters further, variations in cell preparations make results of comparative analyses difficult to interpret. Some studies have attempted to directly compare the genetic expression profiles of OECs and Schwann cells when each were cultured under different conditions 99 , while others have attempted to compare their efficacy in lesion paradigms using cell preparations containing differing purities 100,101 . Others still, endeavoured to find differences by comparing OECs and Schwann cells isolated at different developmental stages 102 . Although each respective approach may address questions important to their relevant study, without a uniform set of parameters, any observed differences may, in fact, be attributed to differing conditions, rather than to cell type-specific characteristics. Perhaps these inconsistencies may have also contributed to the findings of other studies that report contrariety, or lack thereof, between OECs and other cell types in vitro 93,103,104 . Despite the variable findings of OEC studies to date, a recent systematic meta-analysis of 62 transplantation studies in rodent spinal cord injury models demonstrated that OEC transplants elicit a mean locomotor recovery of 19.2% 105 . 
Thus, by adjusting for publication bias and missing data, this study has provided evidence to further support the clinical development of OEC transplantation for spinal cord injury.

The Need for Reproducibility in Human OEC Transplantation Studies

OEC research has already advanced into human investigations worldwide, including pilot surgical studies and clinical trials (Table 1) 78,79,[106][107][108][109][110][111][112][113]. Such efforts have gleaned vital data points on the safety and efficacy of the surgeries and cellular components involved. Although some participants have experienced modest functional recovery, the therapy still necessitates improvement. As mentioned previously, researchers have developed and tested various OEC purification methods in non-human species. However, only the selective media approach, which uses media supplemented with NT-3, has been used in the field of human OEC transplantation. This approach was developed 60 and used in the first human OEC transplantation clinical trial 79, where OEC purities of >95% and 76-88% were achieved 7 to 14 days prior to transplantation. Each respective purity was defined by GFAP and p75NTR immunoreactivity, and the resulting purified cultures were then injected into the participants. Unlike the initial trial, subsequent human studies have omitted the purification steps entirely. Instead, mixed suspensions of olfactory cells containing OECs and olfactory nerve fibroblasts 78,114, or in some cases whole, undissociated pieces of mucosal tissue 106,107,112, have been grafted into spinally injured patients without any description of purification or cellular composition analysis (Table 1). Some authors argue that OECs may be more likely to survive in the transplant site when they are supported by other cells, like olfactory nerve fibroblasts, or substances like the extracellular matrix (ECM), which would normally exist alongside them in their natural milieu. Although these conditions may be ideal, where minimal in vitro intervention is involved, results from such studies become difficult to replicate due to unknown cellular compositions and their respective proportions in the transplanted graft. Without this knowledge, study outcomes may be irreproducible, and may also lead to unexpected consequences. Such was the case of a transplant recipient who developed a tumor-like growth 8 years after receiving an OM autograft in an attempt to treat her paralysis 115. The mass was found to contain large amounts of thick mucous-like material. Upon histological examination, multiple cysts lined with respiratory epithelium and submucosal glands with goblet cells, interspersed with nerve twigs, were detected. This case highlights the importance of cell identification and purification, without which the identity and purity of transplanted cells remain ambiguous. This not only may expose individuals to unknown risks, but also makes the standardization of transplants across multiple subjects difficult. For example, in the 2013 phase I clinical trial conducted by Raisman and colleagues 78, the percentage of S100b-positive cells, deemed to be OECs, varied from 10%, to 12%, to 25.7% between the three treated patients.
The authors even stated that the total cell numbers between patients, as well as OEC to olfactory nerve fibroblast (ONF) ratios in each case, was very difficult to control owing to the absence of a purification step. Without a purification step, the cellular composition of transplantation cultures will likely differ each time, leading to large variability within and between different studies. Consequently, results from such studies become difficult to reproduce, let alone be improved upon by others in the field. A robust OEC identification and purification method is therefore the key to advance the development of the therapy. Perspective A clinically viable OEC transplantation therapy needs an identification and purification method for two main reasons: safety and consistency. Although OEC transplants in human studies has witnessed relative procedural safety in the past 79,106 , reports like Dlouhy et al., 2014 demonstrate the consequences that may arise when undesirable cell types are involved in the transplantation process 115 . Yet, despite the perceivable benefits to patient safety, most human studies to date have not exercised enough control over their cell purities 78,112,114 . This makes the development of a cell purification step imperative for clinical application, where treatments must be standardized to account for the inherent variability between patients. By establishing such a protocol, treatments will not only have higher safety metrics, but also see an improvement in outcome interpretation with the transplantation purity of each cell type clearly defined. Together, these improvements will help prepare OEC transplantation for clinical application as a more reliable therapy for spinal cord injury. Conclusion The translation of human OEC grafts into human subjects requires a judgement on whether or not OECs alone possess sufficient neuroregenerative capacity. Without a reliable OEC-specific marker, or a robust method of identifying OECs from a heterogeneous population, OEC proportions within cell cultures remain difficult to accurately estimate. As it stands, there appears to be no effective means of differentiating between OECs and other cell types in human olfactory cultures. This is one of the major obstacles that ought to be addressed before the full potential of OECs can be understood. It is therefore imperative that a reliable method of purification and identification be developed to yield highly enriched populations of human OECs in culture. However, what if this idealistic OEC purification and identification method cannot be ascertained? Then a method that can, at the very least, achieve OEC cultures with consistent purity and viability should be attained; one with a rapid execution speed so that cells do not deviate substantially from their original phenotype due to culture conditions. Without one or the other, the clinical future of OEC transplantation remains uncertain and may advance no further in becoming a potential therapy for spinal cord injury. Declaration of Conflicting Interests The authors declared no potential conflicts of interest with respect to the research, authorship, and/or publication of this article. 
Funding The authors disclosed receipt of the following financial support for the research, authorship, and/or publication of this article: This work was supported by an Australian Postgraduate Award to RY, a grant from the Motor Accident Insurance Commission to JS and JE, a grant from the Clem Jones Foundation to JS, and a grant from the Perry Cross Spinal Research Foundation to JS and JE.
Tunable spin-polaron state in a singly clamped semiconducting carbon nanotube

We consider a semiconducting carbon nanotube (CNT) lying on a ferromagnetic insulating substrate, with one end extending beyond the substrate and suspended over a metallic gate. We assume that the polarised substrate induces an exchange interaction acting as a local magnetic field for the electrons in the non-suspended CNT side. Generalizing the approach of I. Snyman and Yu.V. Nazarov [Phys. Rev. Lett. 108, 076805 (2012)], we show that one can generate electrostatically a tunable spin-polarized polaronic state localized at the bending end of the CNT. We argue that at low temperatures manipulation and detection of the localised quantum spin state is possible.

Nanoelectromechanics with suspended carbon nanotubes has evolved rapidly in the last few years [1][2][3][4][5][6][7]. Recently, I. Snyman and Yu.V. Nazarov [8] considered a semiconducting CNT lying on an insulating substrate with one end of it suspended. A metallic gate below both the insulating substrate and the suspended part of the CNT generates a homogeneous electric field (cf. Fig. 1 of Ref. [8]). The mechanical bending of the suspended part of the nanotube then induces a spatial inhomogeneity of the electrostatic potential along the CNT, forming a minimum at the deformable end of the wire. The competition of such electrostatic bending with both the elastic potential of the CNT and the quantum rigidity of the electronic wave function causes the mechanical bending, as well as the formation of the localized polaronic state at the movable end of the CNT, to occur as a first-order phase transition as a function of the electric field. The critical field for a realistic experimental setup was estimated in Ref. [8] to be 0.01 V/µm.

An impressive effort of the nanoelectronics community is currently being deployed to manipulate and exploit the electronic spin degrees of freedom in transport devices (spintronics) [9]. In this context the possibility of magnetic gating, i.e. the use of ferromagnetic leads inducing magnetic exchange fields E_ex/µ_B (with µ_B the Bohr magneton) on the electronic spin, is currently actively investigated [10][11][12][13]. More surprisingly, such exchange fields can have remarkable consequences also on the dynamics of a nanomechanical system, for which dynamical (shuttle) instabilities, strong spin-polarized currents, and cooling have been predicted [14][15][16].

In this paper we show that the system discussed by Snyman and Nazarov [8], in the presence of a magnetic dielectric substrate, allows the formation of a localized fully polarized polaronic state. The experimentally observed exchange energy E_ex (see Refs. [13,14]) turns out to be as large as tens of Kelvin, thus being of the same order of magnitude as the localization energy for an electron in a CNT on the scale of the micrometer.
This allows for a high tunability of the polaronic state by means of two electric gates, below the suspended and non-suspended parts of the CNT (see Fig. 1). As a result, a continuous electrostatic tuning of the localization length and the bound-state energy can be achieved, forming a stability diagram of spin-up and spin-down polaronic states. Detection of the state of the system can be envisaged by use of a nearby single-electron transistor, for which the CNT tip acts as a gate [17]. Fully electric manipulation of the mechanical and electronic spin state of the CNT is thus possible in this system.

FIG. 1. Schematic of the system considered: a CNT lying on a magnetic substrate and protruding out over a length L. Two independently adjustable gates (V_G1 and V_G2) are shown, as well as a contact (C) on the substrate side. The potential for spin up and down (U_+ and U_-) is also sketched above.

The system. Following Ref. [8], let us consider a CNT lying on a substrate with a suspended part protruding out over a length L (see Fig. 1). In Ref. [8] it has been shown that the wavefunction ψ(x) of the electrons in the valence band of the CNT can be described by a standard one-dimensional Schroedinger equation with an effective mass m* = 0.6 m_e a_0/r, where m_e is the electronic mass, a_0 the Bohr radius, and r the radius of the CNT. The variable x parametrizes the position along the CNT; its value is 0 at the edge of the substrate and L at the tip of the suspended part. The length of the CNT lying on the substrate is taken to be infinite for simplicity. Vanishing boundary conditions then apply at x = L and x = -∞. As in Ref. [8], the CNT can bend with a displacement y(x) (for 0 < x < L) in the direction orthogonal to the substrate. The elastic energy cost reads (YI/2) ∫ dx [y''(x)]², where I = 6.4π a_0 r² is the second moment of area of the tube cross section and Y is the CNT Young modulus (of the order of 1.2 TPa). Single clamping implies that y(0) = y'(0) = 0 and y''(L) = y'''(L) = 0. In this paper we will restrict ourselves to a classical description of the deflection. The end of the CNT on the substrate side is in tunneling contact with a metal whose chemical potential can be tuned close to the valence band of the CNT by adjusting the electric potential.

Up to now the description has followed closely Ref. [8]. We now introduce the main difference: we assume that the substrate is a magnetic insulator that induces an exchange interaction term -E_ex ∫_{-∞}^{0} dx σ|ψ_σ(x)|² for the electrons in the part of the CNT over the substrate (x < 0). The variable σ indicates the spin projection along the z direction. This creates a spin-dependent potential, so that the spin-up electrons (σ = +) are attracted towards the x < 0 region. In order to tune the potential we assume that two different gates are present, one below the magnetic substrate and another one under the suspended part. By changing the potentials on the two gates independently, it is possible to modify the electrostatic potential V and the electric field E acting on the electrons of the suspended part (taking the non-suspended region as a reference for the potential, cf. Fig. 1). We can then write the full Hamiltonian for the problem, Eq. (1), as the sum of these contributions (θ_x is the Heaviside function). The first term in Eq. (1) gives the quantum kinetic energy, the second the elastic energy, and the third is a sum of three parts: the exchange energy, the electrostatic potential, and its variation induced by the deflection y(x) of the CNT.
In Ref. [8], for V = 0 and E_ex = 0, it has been shown that there exists a critical value of the electric field E_c above which the ground state is a localized electronic state on the suspended part of the CNT. The formation of the bound state is a first-order transition: the CNT starts to bend only for E > E_c, and a metastable bound state exists for E_c1 < E < E_c. At E = E_c the localization length is thus finite and typically much shorter than L. In order to have a tunable bound state it is necessary to have a smooth transition from the delocalized to the localized state. This is actually the typical case in quantum mechanics: by decreasing the depth of a potential well that supports a bound state, one can progressively delocalize the wave function. The bound-state radius then diverges at the threshold for its appearance. We will thus see that the presence of V and E_ex allows one to create a spin-dependent tunable bound state, which is associated with a displacement of the CNT tip.

Electronic problem. Let us begin with the purely electronic problem [y(x) ≡ 0 for all x]. The ground state can be found by solving the Schroedinger equation, Eq. (2), for each spin projection. The presence of a bound state is signaled by the existence of a solution of Eq. (2) with energy ε_σ below -σE_ex, the bottom of the relevant band. Taking -σE_ex as the reference energy, the problem for each spin species reduces to that described by Eq. (2) with E_ex → 0 and eV → eV - σE_ex = U. The solution can then be found by matching the wave function ψ(x) = A e^{κx} for x < 0 with ψ(x) = B e^{ikx} + C e^{-ikx} for x > 0 at x = 0, requiring continuity of the wave function and of its derivative. The boundary conditions lead to the eigenvalue equation for the bound state; the localization length κ^{-1} diverges at the threshold for the appearance of the bound state, as anticipated. By changing U it is then possible to adjust the spread of the wave function over the magnetic substrate. Since the two spin species feel a different potential only on the substrate, this allows one to change continuously the energy difference between the up and down bound states. The bound-state energy for each spin state is given by Eq. (3), with the threshold value for V given by eV_σ = U_t + σE_ex. A typical picture of the eV dependence of the two bound states for E = 0 is shown dashed in Fig. 2. For V_- < V < V_+ a unique bound state exists, for spin down. Let us define V_c as the value for which the down-spin energy crosses the bottom of the up-spin band: ε_-(V = V_c) = -E_ex. For V_+ < V < V_c two bound states exist, but only the lowest one (spin up) is stable, since the spin-down state lies above the bottom of the spin-up band, and any spin-flip perturbation allows its decay into the spin-up continuum. Finally, for V > V_c two stable bound states exist. Their energy splitting has a maximum at V_c and then monotonically decreases as a function of V. This is due to the reduction of the localization length, which reduces the effect of the exchange interaction that acts only for x < 0. Although both spin-up and spin-down polaronic states are stable for V > V_c, only one of them can be occupied due to the Coulomb blockade, whose repulsion energy turns out to be much larger than the polaronic bound-state energy (∼ E_K) for L ≫ 1 nm. This fact allows the formation of a controllable, single-electron, fully spin-polarized state at the protruding part of the CNT.

Nanomechanical effects. We now consider how the system behaves when we let the CNT bend. It is no longer possible to find the ground-state energy analytically; we will thus follow closely the variational method used in Ref. [8], to which we refer for more details.
We introduce the dimensionless variables z = x/L, h = H/E_K, f = y YI/(eEL³), φ_σ = ψ_σ √L, and the coupling parameter α = (eE)²L³/(YI E_K). The problem can then be completely determined by giving only three independent coupling parameters: α, µ = E_ex/E_K, and ν = eV/E_K. The functional to be minimized is given in Eq. (4). By writing φ(z) = Σ_{n=1}^{M} a_n (1 - z)^n for z > 0, φ(z) = Σ_{n=1}^{M} a_n z^n e^{κz} for z < 0, and f(z) = Σ_{n=1}^{M} b_n z^{n+1}, one can enforce the boundary conditions and minimize the functional numerically in order to find the parameters {a_n, b_n, κ}, and thus the ground-state energy ε_σ together with explicit expressions for the bending and the wavefunction.

The charge accumulated on the suspended part of the CNT in the presence of an electric field induces a force that bends the tip of the CNT. The effective electronic potential deepens, and bending lowers the bound-state energy. In particular, it favors a stronger localization of the wave function at the tip (measured by κ^{-1}). For E_ex = eV = 0 it is shown in Ref. [8] that the bound state forms through a first-order phase transition for α > α_c = 312.03. In order to keep a smooth transition we will consider the case α < α_c and investigate the dependence of the bound-state energy and wavefunction on eV/E_K for given values of α and µ.

Before considering the results of the numerical calculation, it is useful to estimate analytically the typical range of the displacement of the CNT tip induced by the localization of the charge. Let us assume that a fraction n of the electronic charge is localized on the suspended part of the CNT and parametrize the deflection by a single coefficient a; the resulting simplified functional, Eq. (5), has a minimum at a = -n/12 = f(1). This gives a rough estimate of the dimensionless displacement of the tip, taking into account only the competition between the electric field and the elastic stiffness. The effect of the other two parameters is hidden in the value of n, which cannot be larger than 1.

We present in Fig. 3 the numerical results for α = 175 and µ = 1. One can see that the energy splitting of the two spin states is of the order of E_K = E_ex (top-left panel). Defining n_σ = ∫_0^1 dz φ_σ² as the fraction of charge (and spin) localized on the suspended part of the CNT, one finds that for V = V_c both bound states present a finite value of n_σ, with n_- - n_+ ≈ 0.17. The difference is slowly reduced for larger values of the gate voltage. The same can be said for the deflection of the tip of the CNT (f_σ = f(1) for each spin state, bottom-left panel). Finally, the bottom-right panel shows that the ratio f_±/n_± is actually close to the rough estimate 1/12.

The plots of Fig. 3 show that a particularly important quantity is the value of the physical parameters (ε_σ, n_σ and f_σ) at the threshold V_c. The dependence on V is always monotonic, and the maximum or minimum values are observed at V_c. In view of manipulating the spin state, the value at V_c thus gives a very good indication of the range in which the state is accessible. We thus show in Fig. 4, as a function of α and for different values of µ, the threshold V_c, the energy splitting ε_- - ε_+, the difference in occupation n_- - n_+, and the difference in deflection f_- - f_+. As expected, the critical voltage V_c decreases as a function of α and, in particular, for sufficiently small µ, it vanishes when α approaches the critical value α_c. The bound-state energy splitting is monotonic in α, since the electric field increases the localization of the bound state and thus reduces the difference between the two states. Its α-dependence is rather weak. Even for large µ the energy splitting remains of the order of E_K, which thus sets the main energy scale of the problem. Quite surprisingly, the difference in the fraction of localized charge (n_- - n_+) is not monotonic for small µ as a function of the electric field. This is due to the fact that the transition region is approached at different values of α for each spin state. A similar behavior is observed in f_- - f_+. One can conclude that the optimal value of α to observe a well-defined bound state is between 100 and 200.

Estimates. In order to assess the possibility of observing the two bound states, we now discuss the typical scales of the problem. Expressing the radius in nm and the length in µm, E_K ≈ 13.9 (r/L²) mK. The typical value of L ranges between 0.1 and 1 µm, leading to a range for E_K between a few K and tens of mK, thus always accessible with standard cryogenics. The thermal and quantum fluctuations of the displacement of the tip also play an important role, since they determine the distinguishability of the displacements of the two bound states. From Eq. (5) one can write an approximate potential for the tip displacement δf = f(1) - f_0 (with f_0 = n/12 the equilibrium value): h_α = 2α(δf)². The equipartition theorem then gives for the thermal fluctuations δf_T = [k_B T/(4αE_K)]^{1/2}. The quantum fluctuations δf_Q have the same expression with k_B T → ℏω_m. Since ℏω_m/E_K = 0.0332 independently of L or r [8,18], δf_Q = 0.09/√α. Expressing as above T in mK, L in µm, and r in nm, δf_T = 0.13 L [T/(rα)]^{1/2}. These values have to be compared with f_- - f_+, which is at best 0.04. δf_Q is thus about 5 times smaller than this value already for α = 100, while in order to keep δf_T small one needs T ≲ 0.09 rα/L². This is realizable, for instance, by choosing L = 0.5 µm, r = 2 nm, α = 200 and working at temperatures T ≈ 20 mK (E_K is 111 mK in this case).
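To make these orders of magnitude easy to reproduce, the short Python sketch below simply evaluates the formulas quoted above: E_K ≈ 13.9 r/L² (in mK, with r in nm and L in µm), the displacement fluctuations δf_Q = 0.09/√α and δf_T = 0.13 L [T/(rα)]^{1/2}, and the temperature below which δf_T stays under the spin contrast f_- - f_+ ≈ 0.04, for the example parameters given in the text. It is only a numerical restatement of those estimates, not an independent derivation.

```python
from math import sqrt

def E_K_mK(r_nm, L_um):
    """Localization energy scale, E_K ~ 13.9 r/L^2 in mK (r in nm, L in um)."""
    return 13.9 * r_nm / L_um**2

def df_quantum(alpha):
    """Quantum fluctuation of the dimensionless tip displacement, 0.09/sqrt(alpha)."""
    return 0.09 / sqrt(alpha)

def df_thermal(T_mK, r_nm, L_um, alpha):
    """Thermal fluctuation of the dimensionless tip displacement, 0.13 L sqrt(T/(r alpha))."""
    return 0.13 * L_um * sqrt(T_mK / (r_nm * alpha))

def T_bound_mK(r_nm, L_um, alpha, contrast=0.04):
    """Temperature below which df_thermal stays under the spin contrast f_- - f_+ (~0.09 r alpha / L^2)."""
    return (contrast / (0.13 * L_um))**2 * r_nm * alpha

# Example parameters from the text: L = 0.5 um, r = 2 nm, alpha = 200, T = 20 mK.
r, L, alpha, T = 2.0, 0.5, 200, 20.0
print(f"E_K        = {E_K_mK(r, L):.0f} mK")                 # ~111 mK
print(f"delta f_Q  = {df_quantum(alpha):.4f}")               # ~0.006
print(f"delta f_T  = {df_thermal(T, r, L, alpha):.4f}")      # ~0.015
print(f"T bound    = {T_bound_mK(r, L, alpha):.0f} mK")      # ~140-150 mK
```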
Conclusions. We have shown that, by combining electrostatic and magnetic gating, the formation of a spin-polaronic state in a singly clamped CNT becomes possible. Electric, magnetic, and mechanical tuning provides an effective manipulation of such spin-polaron states, offering a controllable magneto-electro-mechanical transduction with single electronic charge and spin sensitivity, involving sub-nanometer mechanical displacements.
Uterus preserving surgery versus hysterectomy in the treatment of refractory postpartum haemorrhage in two tertiary maternity units in Cameroon: a cohort analysis of perioperative outcomes Background Little evidence exists on the efficacy and safety of the different surgical techniques used in the treatment of postpartum haemorrhage (PPH). We aimed to compare uterus preserving surgery (UPS) versus hysterectomy for refractory PPH in terms of perioperative outcomes in a sub-Saharan African country with a known high maternal mortality ratio due to PPH. Methods This was a retrospective cohort study comparing the perioperative outcomes of all women managed by UPS (defined as surgical interventions geared at achieving haemostasis while conserving the uterus) versus hysterectomy (defined as surgical resection of the uterus to achieve haemostasis) for PPH refractory to standard medical management in two tertiary hospitals in Cameroon from January 2004 to December 2014. We excluded patients who underwent hysterectomy after failure of UPS. Comparison was done using the Chi-square test or Fisher exact test where appropriate. Bonferroni adjustment of the p-value was performed in order to reduce the chance of obtaining false-positive results. Results We included 24 cases of UPS against 36 cases of hysterectomy. The indications of surgery were dominated by uterine rupture and uterine atony in both groups. Types of UPS performed were seven bilateral hypogastric artery ligations, seven hysterorraphies, six bilateral uterine artery ligations, three B-Lynch sutures and one Tsirulnikov triple ligation with an overall uterine salvage rate of 83.3%. Types of hysterectomies were 26 subtotal hysterectomies and 10 total hysterectomies. UPS was associated with maternal deaths (RR: 2.3; 95% CI: 1.38–3.93.; p: 0.0015) and postoperative infections (RR: 1.96; 95% CI: 1.1–3.49; p: 0.0215). The association of UPS with maternal death was not attenuated after Bonferroni correction. Hysterectomy had no statistically significant adverse outcome. Conclusion Hysterectomy is safer than UPS in the management of intractable PPH in our setting. The choice of UPS as first-line surgical management of PPH in resource-limited settings should entail diligent anticipation of these adverse maternal outcomes in order to lessen the perioperative burden of PPH. Background Evidence from a recent systematic review suggests that postpartum haemorrhage (PPH) is the leading cause of maternal mortality worldwide, claiming 480,000 global maternal deaths between 2003 to 2009, of which 41.6% of these PPH-related deaths occurred in sub-Saharan Africa (SSA) [1]. Likewise, in Cameroon, a SSA country, much efforts still need to be done to reduce the current maternal mortality ratio (MMR) from 596 per 100, 000 live births to the targeted global MMR of less than 70 per 100,000 live births by the year 2030 [2]. The way forward partly entails tackling PPH which has been reported as the primary cause of maternal deaths in several recent hospital-based audit reports of the country [3,4]. In this resource-challenged setting, a composite of factors further contribute to the burden of maternal mortality and include: inadequate attendance of antenatal care [3], poverty [4] and late hospital referral [4]. Efforts to curb PPH-related maternal mortality have targeted various medical measures, non-medical measures, uterus preserving surgical interventions or hysterectomy [5]. 
Historically, peripartum hysterectomy has been the ultimate surgical management reserved for intractable PPH associated with haemodynamic instability. However, this radical surgery is associated with an inability to carry a future pregnancy and thus, considerable psychological trauma [6,7]. In order to preserve the uterus for subsequent pregnancies, various uterus preserving surgeries were proposed and consist of either selective ligation of pelvic arteries [8][9][10] or uterine compression suturing [11]. There are indisputable valid ethical issues hindering the conduct of a randomized controlled trial comparing the efficacy and safety of uterus preserving surgery (UPS) to hysterectomy as first-line surgical management of refractory PPH. Consequently, the highest level of evidence stems from pooled case series and case reports without control groups, carried out in highincome countries suggesting 62 to 100% success rates for various uterus preserving surgical procedures in averting hysterectomy [5]. Although this pooled evidence is low, WHO guidelines recommend UPS as the first-line surgical option in view of its "preserved" reproductive capacity [5]. Meanwhile, other publications mainly in the form of case reports have discussed the cons of UPS for PPH, namely postoperative pyometrium [12], uterine necrosis warranting hysterectomy [13,14], uterine rupture during subsequent pregnancies [15] and secondary infertility due to postoperative uterine synechia and pelvic adhesions [16,17]. Hence, we proposed this study to compare the perioperative outcomes of UPS versus hysterectomy for PPH in a selected sub-Saharan African population with a known very high MMR due to PPH. Study design, setting and participants This was a cohort study which retrospectively enrolled all women with a minimum gestational age of 28 weeks who underwent first-line surgical management by either UPS or hysterectomy for refractory postpartum haemorrhage following vaginal or caesarean delivery between January 1, 2004 to December 31, 2014 in two university teaching hospitals of Cameroon; the Yaounde Gynaeco-Obstetric and Paediatric Hospital, and the University Hospital Centre of Yaounde. We excluded patients with persistent bleeding managed by another UPS or hysterectomy after initial failure of a UPS. Patients with incomplete medical records were also excluded. Definition of terms PPH was defined as an estimated blood loss greater than 500 ml within 24 h after vaginal delivery and greater than 1000 ml following caesarean section [18]. Refractory or intractable PPH was defined as persistent PPH despite standard medical management (oxytocin, methyl-ergometrine, misoprostol), non-medical management (uterine massage, bimanual uterine compression and then compression of the abdominal aorta), repair of vaginal or cervical lacerations and manual uterine revision where appropriate [5]. Uterus preserving surgery was defined as any surgical intervention consisting of ligation of pelvic arteries or application of uterine compression sutures to achieve haemostasis while concomitantly conserving the uterus e.g. Bilateral hypogastric artery ligation, Uterine artery ligation, B-lynch uterine compression suture, Tsirulnikov triples ligation and Hysterorraphy. Tsirulnikov Triple ligation entailed bilateral ligation of the round ligaments, uteroovarian ligaments and uterine arteries [19]. 
B-lynch uterine compression suture consisted of making a lower segment transverse hysterotomy or removing the sutures of a recent caesarean section to apply lateral uterine brace sutures to envelop and compress the bleeding uterus in order to achieve haemostasis [11]. Hysterectomy was defined as a surgical procedure geared at achieving haemostasis through resection of the uterus e.g. subtotal abdominal hysterectomy or total abdominal hysterectomy. Total abdominal hysterectomy consisted of complete resection of the uterus and cervix, while the cervical stump was left insitu in subtotal abdominal hysterectomy. Management of postpartum haemorrhage Uniform and standard operating protocols for the management of PPH were in use in both study settings [5]. Noteworthy, sulprostone was not incorporated in the medical management of PPH due to its non-availability in Cameroon at the time of the study. The decision on whether to perform UPS or hysterectomy as the first surgical management was guided by each patient's clinical condition. Generally, when the patient presented with haemorrhagic shock, especially if she had four living children, hysterectomy was preferred over UPS. All surgeries were performed by four Consultant Obstetricians (JSD, PF, EN and EM) with more than 10 years of clinical experience after qualifying. Antibiotic prophylaxis was administered intravenously before surgery. Also, all surgical procedures were performed under general anaesthesia and via laparotomy. Data collection, variables and measurements The following study variables were retrieved from patients' medical records and postoperative notes; (i) Socio-demographic data: age, employment status, marital status and level of education. (ii) Pre-operative characteristics: gestational age, gravid formula, estimated blood loss, and haemodynamic parameters. (iii) Surgical management: type of UPS or hysterectomy and their indication. (iv) Intra-operative complications: intraoperative blood lost, ureteral injuries and maternal death. (v) Postoperative course till discharge: recurrence of haemorrhage, further blood transfusion, surgical site infection, length of hospital stay and maternal death. Data management and statistical analysis Data analysis was performed with Epi Info 3.5.1 software. Distribution of perioperative characteristics were compared between women managed by UPS and those managed by hysterectomy using the Chi-square test or Fisher exact test where appropriate. Relative risk (RR) and their corresponding 95% confidence intervals (95% CI) were calculated in order to measure associations. The original alpha-value was set at 0.05. In order to reduce the chance of obtaining false-positive results from the multiple analyses performed on the same dependent variable, the Bonferroni adjusted p-value was calculated by dividing the alpha-value by the number of comparisons. Hence, any comparison was statistically significant if it was inferior to the Bonferroni adjusted p-value. Variables with too much missing data precluding meaningful analyses were excluded. Ethics consideration The study was approved by the Institutional Review Board of the Faculty of Medicine and Biomedical Sciences, University of Yaounde I, Yaounde, Cameroon under the ethical clearance No 168/CIERSH/DM/2015. Administrative authorizations were equally obtained from the directorate of both hospitals involved prior to the beginning of the study. 
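To make the analysis plan above concrete, the following minimal Python sketch shows how a relative risk with its 95% confidence interval (using the standard log-transformation method) and a Bonferroni-adjusted significance threshold can be computed from a 2x2 table. The counts in the example are illustrative placeholders, not the study data, and the function name and layout are assumptions made for illustration.

```python
import math

def relative_risk(a, b, c, d, alpha=0.05, n_comparisons=1):
    """Relative risk of an outcome in an exposed versus an unexposed group,
    with a 95% confidence interval from the log-RR (Katz) method and a
    Bonferroni-adjusted alpha. Layout of the 2x2 table:
    a = exposed with outcome, b = exposed without outcome,
    c = unexposed with outcome, d = unexposed without outcome."""
    risk_exposed = a / (a + b)
    risk_unexposed = c / (c + d)
    rr = risk_exposed / risk_unexposed
    se_log_rr = math.sqrt(1 / a - 1 / (a + b) + 1 / c - 1 / (c + d))
    z = 1.96  # two-sided 95% interval
    ci = (math.exp(math.log(rr) - z * se_log_rr),
          math.exp(math.log(rr) + z * se_log_rr))
    return rr, ci, alpha / n_comparisons

# Placeholder counts only; e.g. with 8 outcome comparisons the Bonferroni
# threshold becomes 0.05 / 8 = 0.00625.
print(relative_risk(9, 21, 4, 36, n_comparisons=8))
```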
Results From January 2004 to December 2014, there were 42,944 deliveries for 1457 cases of PPH, corresponding to an incidence of PPH of 34 per 1000 deliveries. Likewise, 74 cases of PPH managed surgically were recorded, corresponding to an incidence of surgical management of PPH of 1.7 per 1000 deliveries. Of the 74 PPH managed surgically, eight were excluded because they were managed by hysterectomy after failed UPS. Two hospital files were incomplete while four were missing due to an inadequate recordkeeping system. Thus we retained 60 eligible cases of surgical management of PPH; 24 uterus preserving surgeries and 36 hysterectomies. Their mean maternal age was 32.6 ± 5.7 years. In both groups, majority were married (71.6%), unemployed (53.3%), multigravidae (90%), multiparous (78.3%), had a level of higher education (56.6%), were referred from another health facility (61.6%) and were at a term pregnancy (80%). The average number of antenatal care consultations of the entire study population was 2.1 ± 1.8. Indications for surgical management of PPH The main indications for both UPS and hysterectomy were uterine rupture, uterine atony and coagulopathy as depicted in Table 1. Types of surgeries The types of uterus preserving surgical interventions and their corresponding success rates in preventing hysterectomy are shown in Table 2. On the other hand, the types of hysterectomies performed were 26 (72.2%) subtotal hysterectomies and 10 (27.8%) total hysterectomies. Perioperative complications UPS was statistically significantly associated with maternal deaths (RR: 2.3; 95% CI: 1.38-3.93.; p: 0.0015) and postoperative infections, mainly endometritis (RR: 1.96; 95% CI: 1.11-3.49; p: 0.0215). The association of UPS with maternal death was not attenuated after Bonferroni adjustment. All cases of maternal death were related to late referral of patients in a state of haemorrhagic shock. Hysterectomy was not significantly associated with an adverse perioperative outcome (Table 3). Discussion This study aimed at comparing UPS versus hysterectomy for refractory PPH in terms perioperative outcomes in Cameroon, a sub-Saharan African country with a high maternal mortality ratio due to PPH. We found that uterus preserving surgical management of PPH doubled the risk of maternal mortality and post-operative infections. Furthermore, the association of UPS with maternal death was not attenuated once adjustment for potential false-positive results was made. Over the 11-year review period, we found an incidence of 1.7 per 1000 deliveries for surgical management of PPH close to the 2.3 per 1000 deliveries reported in France in 2011 [20]. The overall incidence of peripartum hysterectomy in our series was 0.8 per 1000 deliveries, less than the 4 per 1000 deliveries observed by in Ghana [21] and the 3.78 per 1000 deliveries in neighbouring Nigeria [22]. However, this incidence was higher than the 0.48-0.53 per 1000 deliveries reported in highincome countries [23,24]. The incidence of peripartum hysterectomy varies across the world and it is influenced by the socioeconomic status, standard of obstetric care, cultural values and the acceptability of family planning [25]. In our cohort, the low incidence may be explained by the prevailing local culture which favours fertility and resents hysterectomy. 
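The incidence figures quoted above reduce to simple ratios per 1000 deliveries; the short sketch below reproduces that arithmetic from the counts reported in the Results.

```python
deliveries = 42944       # total deliveries, January 2004 to December 2014
pph_cases = 1457         # all cases of postpartum haemorrhage
surgical_cases = 74      # PPH cases managed surgically

pph_incidence = 1000 * pph_cases / deliveries          # ~33.9, reported as 34 per 1000
surgical_incidence = 1000 * surgical_cases / deliveries  # ~1.7 per 1000
print(round(pph_incidence, 1), round(surgical_incidence, 1))
```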
The incidence of conservative surgery was 0.6 per 1000 deliveries, in contrast to the 1.96 per 1000 deliveries reported in France [20], probably due to the fact that UPS for PPH is not yet an integral firstline surgical management of PPH in our setting. Like prior reports on either UPS [26,27] or hysterectomy [21,22] for PPH, we found uterine rupture and uterine atony to be the main indications for surgical management of PPH. In modern obstetrics, abnormal placentation (placenta accreta or placenta praevia) has replaced surgical indications like uterine atony or uterine rupture, because of the increased caesarean section rates, the improved medical management of uterine atony, a reduced incidence of uterine rupture owing to the preference of the lower uterine segment incision over the upper uterine segment incision for caesarean section [24,28]. Hence, this finding may imply inadequate obstetrical care in our setting with resultant higher complications of uterine rupture or atony. Contributing risk factors to the high frequency of uterine rupture and uterine atony in our cohort include the high proportion of multiparous women (78.3%), a known risk factor of PPH [29] and peripartum hysterectomy [23]. This highlights the need for an adequate family planning policy in these two hospitals. Noteworthy, previously reported risk factors for the high incidence of PPH in Cameroon [3,4] were prevalent in our study population as follows; poor antenatal care attendance (evident by a mean number antenatal consultations of two) and unemployment (53.3%). These findings pinpoint the deleterious effects of financial constraints on the accessibility to healthcare during pregnancy in this country. Hence, the subsequent risk of unanticipated PPH. The formulation of health policies targeting these risk factors for PPH cannot be overemphasized. The types of UPS performed were seven bilateral hypogastric artery ligation, seven hysterorraphies, six bilateral uterine artery ligations, three B-Lynch uterine compressive suturing and one Tsirulnikov triple ligation. The uterine salvage rate for bilateral hypogastric artery ligation was 86%, higher than previously reported rates of 42-75% [30,31]. We observed a uterine salvage rate of 66.7% for uterine artery ligation which was inferior to prior reports of 80-96.2% [32,33]. Meanwhile the success rates for uterine compressive suturing and Tsirulnikov triple ligation (100%) were similar to previously published data [11,19,34]. Bilateral hypogastric artery ligation is the oldest known UPS, first described in 1960 by Sagarra et al. [8]. This may explain the reason why there was a greater resort to this procedure by obstetricians compared to the B-Lynch compressive suturing which is the most recent in the armamentarium of UPS for PPH. With regards to hysterectomy, subtotal hysterectomy and total abdominal hysterectomies were performed in 26 and 10 cases, respectively. There is often a therapeutic debate on the benefits of subtotal versus total abdominal hysterectomy [25,35]. Our predilection for the former was due to its technical ease with shorter operative time, less blood loss and reduced risk of urologic injuries in an emergency situation with haemorrhagic shock. However, some authors observed similar outcomes for both types of hysterectomies [25,28]. 
UPS was associated with postoperative endometritis (RR: 1.96; 95% CI: 1.11-3.49; p: 0.0215), notably B-Lynch compressive sutures explained by the fact that this surgical technique entails a hysterotomy and endo-uterine manipulations which are known risk factors for endometritis. Contrary to past concepts [36,37], in our study, maternal deaths were more frequent in UPS compared to hysterectomy (29.1% vs. 5.6%; RR: 2.3; 95% CI: 1.38-3.93.; p: 0.0015). We attribute this to delay in deciding to perform UPS, the lack of a decisional clinical algorithm, hypovolemic shock and the irregular supply of blood products and oxygen for appropriate resuscitation in our resource-limited setting. Moreover, the absence of a national health insurance policy was a contributing factor to maternal mortality in the uterus preserving surgical group because management was delayed in three cases of maternal deaths where the patient's family could not immediately afford to pay for the cost of healthcare. With extensive literature search, to our knowledge, this study is one of the first comparative studies on UPS versus hysterectomy for refractory PPH in sub-Saharan Africa. Its strength lies in its cohort design over a wide review period of 11 years to assess this comparison. Its findings may guide obstetricians in making informed decisions on the various types of surgical techniques for the management of intractable PPH in resource-limited settings. Other similarly large studies were case series on the outcomes of UPS alone [10,26,27,32,36] or hysterectomy alone [21,22]. Its main limitation is the inability to assess the surgical expertise of the operating obstetricians, which is a paramount determinant of the success rates of the type of surgical intervention undertaken. However, all the obstetricians who performed either the UPS or hysterectomy in this cohort had a minimum of 10 years of clinical experience after qualifying and were familiar with all the surgical procedures performed. Also this study was not designed to identify long-term complications of UPS such as secondary infertility or uterine rupture during subsequent pregnancies, highlighting the need for further research in this domain. Conclusions This is one of the largest and first series comparing the perioperative outcomes of uterus preserving surgery versus hysterectomy for refractory PPH in Cameroon and perhaps sub-Saharan Africa at large. Its results suggest higher perioperative mortality for uterus preserving surgery than hysterectomy in this resource-limited environment, which persist even after adjusting for potential false-positive results. Hence, the choice of uterus preserving surgery over hysterectomy as first-line surgical management of intractable PPH in resource-limited settings should entail diligent anticipation of the aforementioned adverse maternal outcomes and vigorous scrutiny of the available health infrastructure in order to lessen the perioperative burden of PPH.
2018-04-03T00:11:01.792Z
2017-05-30T00:00:00.000
{ "year": 2017, "sha1": "689d800c729cc5fcff4da0ab068ed47ebe006931", "oa_license": "CCBY", "oa_url": "https://bmcpregnancychildbirth.biomedcentral.com/track/pdf/10.1186/s12884-017-1346-0", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "689d800c729cc5fcff4da0ab068ed47ebe006931", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
138451407
pes2o/s2orc
v3-fos-license
A numerical investigation of mesh sensitivity for a new three-dimensional fracture model within the combined finite-discrete element method Liwei Guo, Jiansheng Xiang, John-Paul Latham, Bassam Izzuddin Introduction In the field of numerical modelling of fractures in quasi-brittle materials, linear and non-linear elastic fracture mechanics based methods [1][2][3], the extended finite element method (XFEM) [4][5][6] and meshless methods, such as the element free Galerkin method (EFGM) [7,8], have traditionally been dominant. Due to the discrete nature of fracture and fragmentation behaviour, discontinuum-based numerical methods originally used for granular materials, such as the smoothed particle hydrodynamics (SPH) method [9][10][11] and the discrete element method (DEM) [12][13][14], have also become increasingly popular. In actual numerical simulations of engineering applications, the choice of modelling approach should be based on the likely failure mechanism of the material, i.e. whether it is a failure of material, discontinuity or a combination of both [15]. To fully explore and extend the potential of different numerical methods, there is an increasing interest in combining FEM-based and DEM-based methods to converge to a formulation that has the advantage of using the DEM to capture the discrete behaviour during fracture and fragmentation processes while retaining the accurate characterisation of deformation and stress fields using the FEM. It should be noted that the literature mentioned in this section is not meant to be a comprehensive review of numerical methods in fracture modelling, but a tailored one with the focus on using combined FEM and DEM formulations. In this category, different research groups have come up with various strategies in the development of such combined formulations. The three-dimensional fracture model investigated in this paper [22] is a new development in the context of the combined finite-discrete element method (FEMDEM) [23,32]. A simple example of modelling an impact between a fragile sphere (breakable) and a rigid base (unbreakable) is shown in Fig. 1. It can be seen that the sphere is only slightly damaged and still retains enough kinetic energy to bounce off the base when the impact velocity is low (10 m/s, Fig. 1a), but breaks into many fragments of different sizes when the impact velocity is high (100 m/s, Fig. 1b). In FEMDEM fracture modelling, the entire domain is treated as a multi-body system (e.g. the sphere and the base in Fig. 1) and each discrete body is further discretised by finite element meshes. The FEM formulation is used to simulate continuum behaviour, i.e. the calculation of strains and stresses in finite element domains, and the DEM formulation is used to simulate discontinuum behaviour, i.e. the calculation of contact forces across discontinuities. Comprehensive descriptions of the FEMDEM method can be found elsewhere [23,33].
Regarding three-dimensional fracture modelling using the FEMDEM method, there are three main benefits. Firstly, the interaction between discrete fracture walls can be modelled more realistically and accurately by contact mechanics in DEM algorithms; moreover, other media, e.g. fluid, can be directly introduced between fracture walls for fluid-structure interaction simulations [25]. Secondly, the FEMDEM fracture model can initiate new fractures and furthermore, fracture mechanics energy concepts are used to limit fracture propagation; it has the advantage of not requiring the specification of initial flaws or any predefined fracture patterns (e.g. no initial flaws in the sphere in Fig. 1), which are normally prerequisites for fracture growth models based on linear and non-linear elastic fracture mechanics. Thirdly, due to the addition of the contact detection and interaction algorithms in the DEM formulation, this fracture model is particularly useful when a large number of fragments are generated after impact (e.g. Fig. 1b), and for modelling fracture and fragmentation in multi-body systems [24]. In terms of specific formulations and algorithms, the new three-dimensional fracture model has the following three main features. 1. A new space discretisation scheme featuring three-dimensional interface elements has been developed. Using this scheme, any arbitrarily shaped three-dimensional domain can be discretised by 4-node tetrahedral elements and 6node interface elements, which are inserted between tetrahedral elements. The material failure criteria are applied to the interface elements whose failure would physically separate tetrahedral elements and generate discrete fracture surfaces. 2. A Mohr-Coulomb failure criterion with a tension cut-off is used to determine the failure state of interface elements. The shear strength is defined as a function of the normal stress acting perpendicular to the shear direction. Therefore, fracturing behaviour in complicated stress fields can be realistically captured. (b) Impact velocity 100 m/s. 3. The finite element formulation and the discrete element formulation are separated both in the space domain and in the time domain. In the space domain, the FEM formulation is used for continua, while the DEM formulation is used for the interaction across discontinuities, i.e. boundaries of discrete bodies and fracture surfaces. In the time domain, the continuity between tetrahedral elements is only constrained by interface elements before fracture initiation, then after fracture formation the interaction between tetrahedral elements on both sides of the fractures is purely simulated by contact detection and interaction algorithms. The complete governing equation is solved by an explicit time integration scheme. The detailed algorithms of this three-dimensional fracture model are quite involved and can be found in Guo [22] and Guo et al. [24]. Here only some key concepts related to mesh sensitivity are briefly introduced. In order to separate tetrahedral elements according to certain failure criteria, a special type of elements -6-node interface elements are inserted between 4-node tetrahedral elements. The deformation in the continuum domain will generate stresses both in 4-node tetrahedral elements and 6-node interface elements. In the interface elements, the stress is calculated from the relative displacement between two triangular faces of adjacent tetrahedral elements. 
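The idea of deriving interface stresses from the relative displacement between the two coincident triangular faces can be illustrated with a small geometric sketch. The snippet below is a schematic decomposition into normal (opening) and shear (sliding) components and is not the FEMDEM implementation itself; the function name, the averaging over the three node pairs and the example values are all assumptions made for illustration.

```python
import numpy as np

def interface_opening(face_a, face_b, normal):
    """Split the relative displacement between the two triangular faces of a
    6-node interface element into a normal (opening) and a shear (sliding)
    component. face_a, face_b are (3, 3) arrays of node positions on each
    side of the interface (initially coincident); normal is the unit normal.
    Schematic illustration only."""
    rel = np.mean(face_b - face_a, axis=0)           # average relative displacement
    d_n = float(np.dot(rel, normal))                 # opening component
    d_s = float(np.linalg.norm(rel - d_n * normal))  # sliding component
    return d_n, d_s

# Illustrative: faces separated by 1e-5 m along the normal and 2e-6 m in shear
normal = np.array([0.0, 0.0, 1.0])
face_a = np.array([[0, 0, 0], [1, 0, 0], [0, 1, 0]], dtype=float)
face_b = face_a + np.array([2e-6, 0.0, 1e-5])
print(interface_opening(face_a, face_b, normal))
```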
The displacement d and stress r of an interface element are defined as where r is the normal stress component, corresponding to the normal displacement d n , and s is the shear stress component, corresponding to the shear displacement d s . The three-dimensional fracture model investigated in this paper is similar to the concept proposed by Hillerborg et al. [34], who assumed there exists a plastic zone corresponding to a micro-fractured zone with some remaining ligaments for stress transfer in front of the actual fracture tip. For a single mode I tensile fracture, for example, the transition from the elastic zone to the discrete fracture via the plastic zone is illustrated by Fig. 2. The white area represents the continuum domain that is intact without any fractures, while the discrete fracture is represented by the light yellow area. The orange area is defined as the plastic zone, which corresponds to the displacement range d np $ d nc in the interface elements. At d n ¼ d np , the normal stress r in the interface element reaches its peak value, which is the tensile strength f t in this case. Ahead of this position (to the right-hand side), the domain is at a strain softening stage (orange area), so the normal stress r decreases from the tensile strength f t to zero at the actual fracture tip, where the normal displacement d n in the interface element reaches its critical value d nc . Considering the whole transition from the elastic zone to the discrete fracture via the plastic zone, a stress-displacement relation including strain softening effect is used for the interface elements (Fig. 3), which is similar to the combined single and smeared crack model proposed by Munjiza et al. [35]. It should be noted that the normal stress r and shear stress s are calculated following stress-displacement curves of the same shape but different definitions of f, d p and d c on the curves. It is also worth mentioning that the shape of the curve after the peak stress has a very generalised form in Fig. 3, and specific data sets can be used to define the post-peak curve for any quasi-brittle materials. For example, Rougier et al. [21] adopted a different form for their modelling of granite. The peak stress f in Fig. 3 represents the material strength, so it means the tensile strength f t when calculating the normal stress component r, and shear strength f s for the shear stress component s. In this model, the tensile strength f t is assumed to be a constant, while the shear strength f s is defined by the Mohr-Coulomb criterion with a tension cut-off, where c is the cohesion, / is the internal friction angle, and r n is the normal stress acting perpendicular to the shear direction. Here the engineering mechanics sign convention is used, so tensile stress is positive and compressive stress is negative. It should be noted that because the normal stress r n cannot exceed the tensile strength f t , the tension cut-off that happens when r n P f t is automatically guaranteed in Eq. (3). The interface elements will fail when the displacement reaches its critical value d c , which is defined based on the Griffith theory [36]. It assumes that a certain amount of energy is absorbed by the formation of a unit area of the fracture surface in a brittle medium, which is called the fracture energy G f and can be calculated as Therefore, the normal stress r can be calculated following the stress-displacement relation in Fig. 
3, where d np is the maximum elastic displacement in the normal direction, d nc is the critical displacement at failure in the normal direction, z is a heuristic softening parameter obtained by curve fitting using experiment data from direct tension tests of concrete [35,37], and can be calculated by Eq. (7). In actual simulation using this fracture model, the parameters in Eq. (7) are usually chosen as a = 0.63, b = 1.8 and c = 6.0, which are material properties derived from experiment data [38]. D is a parameter defined to quantify the deformation in interface elements, and is given by Eq. (8). where d sp is the maximum elastic displacement in the shear direction, d sc is the critical displacement at failure in the shear direction. In a similar way, the shear stress s can be calculated by substituting normal displacement d n with shear displacement d s , and other parameters in the normal direction (with subscript n) with the corresponding parameters in the shear direction (with subscript s). For example, different values can be used for the fracture energy G f for tensile and shear modes, but for the numerical tests in this paper (Sections 3 and 4), the shear failure mode is of second order influence on the Brazilian test (indirect tension) and no influence on the pure tension example, so only the fracture energy G f for the tensile mode is used in the simulations. After interface elements fail, discrete fractures will form between tetrahedral elements, using the faces of adjacent tetrahedral elements as fracture surfaces. At this stage, the stress-displacement relation defined in Fig. 3 the failed interface elements. Instead, the contact detection and interaction algorithms in the DEM formulation will be used to simulate the interaction, e.g. normal compression and sliding friction, between fracture surfaces. It should be noted that the mesh size sensitivity of the contact algorithms after fracture formation has been studied elsewhere [39], so in this paper the mesh size sensitivity (Section 3) is investigated by modelling opening-mode fractures and is mainly associated with the FEM formulation. However, in the study of mesh orientation sensitivity (Section 4) the contact algorithms are automatically activated in the modelling of mechanical contact between platens and the disc specimen, and between fracture surfaces in the compressive crushing zones near platens. Mesh size sensitivity Previous analytical and numerical studies [31,35] have shown that the size of finite elements close to the fracture tip needs to be much smaller than the length of the plastic zone to achieve accurate results in two-dimensional fracture simulations using the FEMDEM method. In this section, a similar methodology of simulating a series of models with the same geometry but different element sizes is used to investigate mesh size sensitivity for the new three-dimensional fracture model within the FEMDEM method. In the analysis of numerical results of this section, the length of the plastic zone in front of the actual fracture tip will be used as a main approach to quantify the fracture propagation process. This topic of fracture tip plastic zone in quasi-brittle materials has been extensively studied by analytical and experimental methods [40][41][42][43][44]. In the previous work [31], the theoretical value of the plastic zone length was estimated from analytical solutions [45]. 
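The two ingredients of the interface law described above can be restated in their simplest commonly used forms: a Mohr-Coulomb shear strength with a tension cut-off (tension taken positive, as in the text) and the critical opening displacement implied by the fracture energy for a purely linear softening branch. Both expressions, the parameter values and the function names are assumptions for illustration; the paper's actual softening curve uses the heuristic parameter z and may differ in detail.

```python
import math

def interface_strength(sigma_n, cohesion, friction_angle_deg, f_t):
    """Mohr-Coulomb shear strength with a tension cut-off (tension positive,
    compression negative). Assumed form, not necessarily the paper's Eq. (3)."""
    if sigma_n >= f_t:
        return 0.0                       # tension cut-off: tensile failure governs
    phi = math.radians(friction_angle_deg)
    return cohesion - sigma_n * math.tan(phi)

def critical_opening_linear(G_f, f_t):
    """Critical opening displacement for a *linear* softening branch, where the
    fracture energy equals the area under the softening curve:
    G_f = 0.5 * f_t * d_c. Energy-balance illustration only."""
    return 2.0 * G_f / f_t

# Illustrative values only (the Table 1 properties are not reproduced here)
print(interface_strength(sigma_n=-2.0e6, cohesion=5.0e6,
                         friction_angle_deg=30.0, f_t=3.0e6))
print(critical_opening_linear(G_f=50.0, f_t=3.0e6))
```

The analytical estimates of the plastic zone length used for the mesh size study are detailed next.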
More specifically, the lower value of the plastic zone length D for a short tensile fracture is obtained from Muskhelishvili's solution as where E is the Young's modulus of the continuum, f t is the tensile strength, and d c is the critical opening displacement when bonding stress in the plastic zone equals zero, which can be obtained from Eq. (5), therefore For a long tensile fracture, the lower value of the plastic zone length D is obtained from Westergaard's solution as It should be noted that although Eq. (12) is derived from two-dimensional analysis, the three-dimensional cases simulated in this section can be simplified into two-dimensional problems in theoretical analysis, so Eq. (12) will be used in Section 3.1 as the theoretical estimation for comparison with numerical results. Test setup The first problem simulated here is similar to the cases studied in the two-dimensional simulations [31], which is the propagation of a single fracture at the centre of a square domain. The model is shown in Fig. 4. The size of the square domain is 120 mm  120 mm in the xy-plane, and 20 mm in the z-direction. A pre-existing horizontal fracture is inserted at the centre of the square. A linearly increasing pressure P is applied at the fracture surfaces and the loading rate is 1:0  10 10 Pa/s. As the pressure increases, the fracture will start to propagate until it breaks the model into two equal parts. As both the geometry and loading condition are symmetric with respect to the central yz-plane, only the right half of the model is simulated (Fig. 4) and a roller boundary condition, which means the translational displacement in the x-direction is constrained, is added to the left boundary of the right-half model. The material used in the tests (Table 1) is assumed to represent typical rock [46][47][48] or fine concrete mortar properties [49]. The friction coefficient between fracture surfaces is set to be 0.6. Five models with the same geometry and loading condition but different element sizes are tested. In order to eliminate the influence of mesh orientation, the domain is meshed using structured 4-node tetrahedral elements. Unstructured meshes will be used in Section 4 to investigate the mesh orientation sensitivity. The five meshes are shown in Fig. 5 and the element sizes h and corresponding element numbers N are listed in Table 2. Numerical results The numerical results of five models with consecutively refined meshes are compared below. Note that the stress con- Before showing the numerical results, first the theoretical estimations of the plastic zone length are given using the material properties in Table 1. From Eq. (11) the estimation of the lower value of the plastic zone length D lower can be obtained as The upper value of the plastic zone length D upper can be estimated from Eq. (10) Based on the theoretical estimation, for element size of 20 mm (Model 1) and 10 mm (Model 2), the plastic zone can only be discretised by 1-2 finite elements. It can be seen from Figs. 6 and 7 that the numerical results match the theoretical predictions. In these two cases the length of the plastic zone is governed by the element size, which spreads only one element in Model 1 (h = 20 mm) and two elements in Model 2 (h = 10 mm). The stress gradient in front of the fracture tip cannot be accurately captured because there are not enough elements inside the plastic zone. Refined meshes with element size h = 5 mm (Fig. 8), h = 2.5 mm (Fig. 9) and h = 1.25 mm (Fig. 10) are tested. 
The results show the length of the plastic zone is independent of the element size. In Model 3 (h = 5 mm, Fig. 8), the plastic zone spans approximately three elements, which is equivalent to 15 mm. The same length of the plastic zone can also be seen in Model 4 (h = 2.5 mm, Fig. 9) and Model 5 (h = 1.25 mm, Fig. 10), which have 6 and 12 elements in the plastic zone, respectively. Especially from the stress contours of element size 2.5 mm (Fig. 9) and 1.25 mm (Fig. 10), the gradient of stress distribution, which decreases from the tensile strength to zero inside the plastic zone, is clearly characterised. To further compare the plastic zone length obtained from different mesh sizes, the measured values from numerical modelling are plotted in Fig. 11. The normalised element size is the original value divided by 20 mm, i.e. the largest element size used in this series of tests (the material properties for the single fracture propagation tests are listed in Table 1). It can be seen that longer plastic zones are generated from larger element sizes and the plastic zone length converges to 15 mm for element sizes equal to or smaller than 5 mm. Compared with the theoretical estimations given in Eqs. (13) and (14), it can be deduced that the plastic zone should be discretised by at least three finite elements to give a correct numerical representation of the plastic zone ahead of the fracture tip. Test setup The second problem is a series of three-point bending tests, i.e. a beam supported at its two ends is compressed in the middle and finally breaks due to flexural deformation. The test setup is shown in Fig. 12. The dimensions of the beam specimen are 500 mm × 50 mm × 20 mm, which are the length, height and thickness in the x-, y- and z-directions, respectively (Fig. 12a). Loading velocities V = 0.1 m/s are applied in the vertical y-direction to the three platens in order to generate a three-point bending condition. The upper platen moves downwards and the two lower platens move upwards at the same velocity. It should be noted that there is in effect a twofold higher velocity with this setup than in a conventional laboratory test where only the central platen moves. To reduce the impact effect, the velocities first increase linearly from zero to a constant value V = 0.1 m/s in 0.1 ms, and then remain constant. The material of the beam specimen in the simulations is assumed to be homogeneous and isotropic, and there are no pre-existing flaws inside it (Fig. 12b). The material properties of the beam specimen are the same as the values listed in Table 1. It should be noted that the three-dimensional fracture model is only applied to the beam specimen, and the three steel platens are assumed to be rigid, so material properties are not needed for the platens. The friction coefficient is set to be 0.6 between fracture surfaces, and 0.1 between the beam specimen and the platens. To investigate the influence of mesh sizes on the mechanical behaviour in three-point bending conditions, the beam specimen is discretised by four different mesh sizes (Fig. 13). The element sizes h and corresponding element numbers N are listed in Table 3. All three platens are meshed in the same manner in the four tests, with 137 elements each. Numerical results Fig. 14 shows contours of horizontal stress before fracture initiation and the final fracture pattern in three-point bending tests.
From the horizontal stress contours it can be seen that before fracture initiation, the same pattern of stress fields is achieved in all four tests, where the upper part of the beam specimen is in compression and the lower part is in tension, with a neutral surface in the middle of the vertical y-direction. The highest tensile stress happens at the middle of the outer extending arc of the modelled beam, which corresponds to the location of the final fracture (Fig. 14e). It should be noted because all the four tests obtain exactly the same fracture pattern, which is a single fracture breaking the beam specimen into two equal parts, only the final fracture pattern of Beam 4 (h = 2.5 mm) is shown in Fig. 14. The load F (contact force in the loading y-direction between the upper platen and the beam specimen) is plotted against the maximum deflection d y (vertical displacement at the centre of the beam specimen) for the four mesh sizes in Fig. 15. It should be noted that the contact force is calculated by an integration of nodal contact forces on the platen. It can be seen from the F-d y curves that the peak loads of larger mesh sizes (25 mm and 10 mm) are higher than the peak loads for smaller mesh sizes (5 mm and 2.5 mm), and specimens with coarse meshes fail at smaller deformations. The load-deflection relation and the value of peak load converge to a stable state when mesh size h is equal to or smaller than 5 mm. This observation is in agreement with the results obtained from the same material in single fracture propagation tests (Section 3.1), which showed that the plastic zone length ahead of the fracture tip converges when element size equal to or smaller than 5 mm. It is also in agreement with the conclusion from a two-dimensional numerical simulation of an impact test on a concrete beam with different meshes [35]. It is worth mentioning that despite the over-estimation of peak loads by relatively coarse meshes, the errors are less than 8%, which indicates they might be employed when higher accuracy is not necessary and computational resources are limited. It can also be seen that the brittle failure behaviour of the beam specimen is correctly captured by the fracture modelling. Here the brittle failure is defined as the significant loss of strength with fracture formation [50]. After reaching the peak value, the load on the beam specimen immediately drops to zero, which means the beam loses its strength to sustain any load so the structure can be regarded as collapsed. It is worth mentioning that there are fluctuations on the F-d y (load-maximum deflection) curves before reaching peak loads; this is because the slim shape of the beam specimen causes certain vibration modes, which affect the recording of the contact force between the beam and the upper platen. Mesh orientation sensitivity A complete mesh sensitivity analysis includes two parts: mesh size sensitivity and mesh orientation sensitivity. In the previous section, only mesh size sensitivity is investigated using structured meshes. Once the mesh size satisfies the requirement, the next aspect to consider is mesh orientation. The three-dimensional fracture model used in this paper is based on fixed meshes, so at the element level fractures can only propagate along tetrahedral element boundaries. Tijssens et al. [26] have shown that cohesive zone models show clear mesh dependency of fracture patterns in structured meshes, which means the fractures tend to propagate along dominant directions of element alignment. 
Therefore, in the three-dimensional fracture modelling using the FEMDEM method, unstructured meshes are recommended in order to reduce the mesh dependency of fracture patterns at the global scale. It should be noted that, even though from a global point of view mesh dependency can be reduced by using unstructured meshes, fracture paths are still dependent on local mesh orientation, and it is necessary to prove the global fracture pattern and critical load are not affected by local mesh orientation when unstructured meshes are used. In this section, specially designed Brazilian tests with unstructured meshes are simulated to examine the mesh orientation sensitivity. It should be noted that the mesh orientation sensitivity studied here does not mean the sensitivity to certain mesh alignment patterns, but refers to the repeatability of numerical results (e.g. fracture path and peak load) using different unstructured meshes with the same mean element size. Test setup The setup for the Brazilian tests is shown in Fig. 16. A vertically placed disc specimen perpendicular to the z-direction is compressed diametrically between two platens. The diameter of the disc specimen is 40 mm and the thickness in the zdirection is 15 mm (Fig. 16a). Loading velocities V are applied to both platens to generate an indirect tensile stress field inside the disc. The two loading velocities have the same value but opposite directions. To reduce the impact effect when the loading starts, the velocities first increase linearly from zero to a constant value V = 0.05 m/s in 0.2 ms, and then remain constant in the simulations. The time-step used in the simulations is Dt ¼ 2  10 À9 s. The domain is meshed using unstructured 4-node tetrahedral elements and the mean mesh size is $1.2 mm. According to the conclusion drawn from Section 3, this mesh size is small enough to generate accurate results so the mesh size effect is not considered in the tests. A total number of 51,690 elements are generated for the disc specimen and 2854 elements for the platens. The two loading platens are originally placed horizontally so the compressive loading is in the vertical y-direction. Then they are rotated with respect to the z-axis to a certain angle h but the disc specimen is kept in its original position (Fig. 16b). The angle between the loading axis and the vertical y-direction is defined as the loading angle h. Because the elements along the loading axis (blue dashed lines in Fig. 16b) are arranged in different patterns when the loading direction changes, the effect of local mesh orientation can be investigated by comparing the fracture patterns and peak loads of four loading angles 0°, 30°, 60°and 90°under identical loading conditions, without the need to actually construct different meshes. The material properties used for the disc specimen are the same as in the mesh size sensitivity test (see Table 1) except the fracture energy G f is increased to 50 J m À2 . The fracture energy G f was intentionally given a low value in Section 3 because larger fracture energy results in a longer plastic zone (Eqs. (10) and (11)), which is difficult to measure due to the limited size of the domain. It should be noted that the steel platens in the Brazilian tests are assumed to be rigid, so material properties are not needed for them. The friction coefficient is set to be 0.6 between fracture surfaces, and 0.1 between the disc specimen and platens. Numerical results The numerical results of loading angles 0°, 30°, 60°and 90°are presented in Fig. 
17. It can be seen that although elements are irregularly arranged along the loading axis for different loading angles, the simulations all obtain correct global fracture patterns that match theoretical predictions [51] and the range of experimental observations for homogeneous isotropic rock [52]. Due to the high contact forces, shear fractures first initiate at the two ends of the disc specimen that are in contact with the loading platens. Then the central fracture propagates through the whole disc and splits it into two halves. Final fracture patterns have both major tensile splitting fractures along the loading axis and minor crushing zones (shear fractures) near the loading platens. The fracture path differs somewhat in character in each case and departs more from the diametral loading plane in the h = 90°case (Fig. 17d). However, the failure modes and global fracture patterns are very similar in all the cases, e.g. there are no branches from the middle of the major tensile splitting fracture, which can break the disc into more than two pieces. The relations between the load F and the axial strain e obtained from numerical simulations are shown in Fig. 18. The load F is calculated as where F 1 and F 2 are the contact forces between the two platens and the disc specimen, respectively. Because both platens move at the same velocity but in opposite directions, the values of F 1 and F 2 are almost equal. The axial strain e is defined to measure the deformation in the disc along the loading axis and is calculated as: where d is the diameter of the disc specimen. From Fig. 18 it can be seen that all the four simulations have the same response at the initial elastic deformation stage. When the axial strain e exceeds 1.0%, the four curves start to separate and then reach different values of peak loads. The range of peak loads is 2130.0-2266.4 N with a mean value of 2220.0 N, so the variation coefficient (i.e. standard deviation over mean value) is 2.4%, which is comparable for indirect tensile strength of isotropic rock [53]. This shows that if an unstructured mesh is used and the mean mesh size is small enough, the numerical results of three-dimensional fracture modelling are acceptable regardless of the local mesh orientation. Computational efficiency To further test the computational performance of the three-dimensional fracture model, a computational efficiency analysis is conducted using the data recorded from the single fracture propagation tests (see Section 3.1). More specifically, the CPU time is recorded for each of the simulations reported in Section 3.1 and they are compared with respect to the total element number. The current numerical code of the three-dimensional fracture model within the FEMDEM method is a serial code written in C and C++ programming languages. All of the simulations are run on a workstation with Intel Xeon CPU E5-2680 (2.70 GHz). The CPU time needed for one time-step during the fracture propagation stage is plotted in Fig. 19 for five different mesh sizes. It can be seen that the CPU time per time-step increases linearly with increasing element number, which proves that the numerical code works efficiently for different scales. It should be noted that the CPU time needed for FEMDEM modelling also depends on the contact algorithms in the DEM formulation, which might dominate the overall computational performance if there are a large number of discontinuities (e.g. fractures and discrete bodies) in the domain. 
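Returning to the Brazilian test results above, two quantities of interest can be recovered from the peak loads: the scatter between loading angles and an indirect tensile strength via the standard Brazilian formula sigma_t = 2P/(pi D t). The sketch below assumes this standard conversion (the paper itself reports loads rather than strengths); only the two extreme peak loads are taken from the text, and the two intermediate values are placeholders.

```python
import math
import statistics

def brazilian_tensile_strength(P, D, t):
    """Indirect tensile strength of a Brazilian disc from the peak load P,
    disc diameter D and thickness t: sigma_t = 2P / (pi * D * t)."""
    return 2.0 * P / (math.pi * D * t)

# Peak loads for the four loading angles; only the extremes (2130.0 N and
# 2266.4 N) are quoted in the text, the middle two values are placeholders.
peak_loads = [2130.0, 2210.0, 2250.0, 2266.4]
mean_load = statistics.mean(peak_loads)
cv = statistics.stdev(peak_loads) / mean_load      # coefficient of variation
sigma_t = brazilian_tensile_strength(mean_load, D=0.040, t=0.015)
print(f"mean = {mean_load:.1f} N, CV = {100 * cv:.1f}%, "
      f"sigma_t = {sigma_t / 1e6:.2f} MPa")
```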
The research on computational efficiency of the DEM part in the FEMDEM method can be found elsewhere. For example, Munjiza and Andrews [54] studied the contact detection algorithm and reported that the total detection time is proportional to the total number of discrete bodies. (Fig. 19 plots the CPU time per time-step versus the total element number; the first three data points are also shown in an enlarged inset.) Discussion and conclusions The mesh sensitivity of a new three-dimensional fracture model within the combined finite-discrete element method was investigated by specially designed numerical tests. Both mesh size and mesh orientation were considered. The sensitivity to mesh size was examined by modelling a single tensile fracture propagation problem and three-point bending tests using a series of models with the same geometry but different structured mesh sizes. The mesh orientation sensitivity was investigated by diametrically compressing a disc specimen from different angles. A very fine and unstructured mesh was used in this test so that only local mesh orientation affected the numerical results when the loading angle changed. Moreover, the computational efficiency of the three-dimensional fracture model was studied using the CPU time data recorded from the mesh size sensitivity test. From the numerical investigation of mesh size sensitivity it can be demonstrated that the accuracy of three-dimensional fracture modelling depends on the element size around the fracture tips. If the element size is of the same order of magnitude as or larger than the theoretical length of the plastic zone, the stress field around a fracture tip resembles a uniform distribution, so the far-field stress has a more significant effect on the fracture propagation than the local stress field. In contrast, for a fine mesh, which can be defined for our purpose as one whose element size is only a certain fraction (e.g. one third) of the length of the plastic zone, the gradient of the local stress distribution inside the plastic zone can be correctly captured. The three-dimensional fracture model investigated in this paper is based on fixed meshes and fractures can only propagate along finite element boundaries, so the fracture patterns are mesh-dependent. However, the results of the mesh orientation sensitivity test prove that if the element size in an unstructured mesh is smaller than one third of the plastic zone length, although at the element level the fracture path may deviate from the theoretical path to accommodate the element boundaries, from a global point of view an acceptable solution of the mechanical response of the whole system can still be obtained. Furthermore, if the mesh size is small enough to represent the microstructures (e.g. mineral grains and grain boundaries) in quasi-brittle materials, the roughness of fracture surfaces, rather than being caused purely by mesh dependency, can actually represent the realistic microscopic roughness observed in fractured materials. In general, it can be suggested that unstructured meshes are preferable in fracture simulations for a homogeneous isotropic quasi-brittle material of certain strength properties using the three-dimensional fracture model within the FEMDEM method. Before running an actual simulation, first the theoretical size of the plastic zone should be estimated by Eqs. (10) and (11).
Then based on the specific size of the simulated domain, it is essential to choose at least one third of the theoretical plastic zone length as the mean element size in mesh generation. It should be noted that in numerical discretisation of a continuum domain, stress and strain fields in the vicinity of fracture tips are only approximations. In order for the approximation to represent the stress gradient ahead of a fracture tip as accurately as possible, the strategy adopted in this paper is to use low-order (4-node tetrahedron) elements for the whole domain and limit the mesh size around fracture tips. The other approach would be to use high-order finite elements, and in the future it would be worthwhile to explore the possibility of enriching the element libraries to improve the accuracy of the current program. In addition, although the computational time only increases linearly as the total element number grows, large-scale engineering problems may still require unaffordable computational time based on estimates of computational effort from this linear relation. In this respect, parallelisation of the current code combined with the use of less complicated algorithms in non-fractured subdomains might be the most fruitful avenues to provide solutions to overcome this difficulty. Although having those limitations, the current threedimensional fracture model still has the ability to model fracturing and fragmentation behaviour in a wide range of medium-scale engineering problems, such as multi-body collision, fluid-structure interaction and fractured media modelling, in which the whole fracturing process, i.e. pre-peak hardening deformation, post-peak strain softening, transition from continuum to discontinuum, and the explicit interaction between discrete fracture surfaces can be realistically captured.
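The practical meshing guidance given in the conclusions (estimate the plastic zone length first, then keep the mean element size to at most one third of it) can be written down as a one-line rule. In the sketch below the plastic zone length is approximated by a generic cohesive-zone characteristic length E·G_f/f_t², scaled by a user-supplied factor; this proxy, the factor and the numerical values are assumptions, and the paper's own bounds from Eqs. (10) and (11) should be preferred where available.

```python
def max_element_size(E, G_f, f_t, fraction=1.0 / 3.0, zone_factor=1.0):
    """Rule-of-thumb upper bound on the mean element size: a chosen fraction
    (the paper recommends one third) of the estimated plastic zone length.
    Here the plastic zone length is taken as zone_factor * E * G_f / f_t**2,
    a generic cohesive-zone estimate used only as a stand-in."""
    plastic_zone_length = zone_factor * E * G_f / f_t ** 2
    return fraction * plastic_zone_length

# Illustrative values (not Table 1): E = 30 GPa, G_f = 50 J/m^2, f_t = 3 MPa
print(max_element_size(E=30e9, G_f=50.0, f_t=3.0e6))   # ~0.056 m
```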
2019-04-29T13:07:22.751Z
2016-01-01T00:00:00.000
{ "year": 2016, "sha1": "118aff3cc3c01c6c48288dc7b0617f6273827046", "oa_license": "CCBY", "oa_url": "https://doi.org/10.1016/j.engfracmech.2015.11.006", "oa_status": "HYBRID", "pdf_src": "MergedPDFExtraction", "pdf_hash": "1a9e277f2d55edb736e774f2eafa0511bc583101", "s2fieldsofstudy": [ "Engineering" ], "extfieldsofstudy": [ "Engineering" ] }
219631428
pes2o/s2orc
v3-fos-license
Capitellum fracture: Outcome of surgical treatment Background: Fractures of the capitellum humeri are uncommon injuries accounting for only 1% of all fractures and around 6% of fractures close to the elbow. Given the complex nature of capitellar fractures, various treatment options have evolved, and open reduction and internal fixation (ORIF) with Herbert screws, biodegradable screws or suture anchors was considered. The purpose of the study was to identify the mechanism of injury and the clinical outcome of capitellar fractures managed by ORIF. Method: A prospective follow-up study was planned on eight patients with capitellar fractures admitted at Mahatma Gandhi Medical College & Hospital, Sitapura, Jaipur. These patients were followed up for a period of twelve months and the Mayo Elbow Performance Index was assessed. Results: The study included 08 patients with an age range from 24 years to 67 years and a mean age of 42.8±14.55 years. Females (62.5%) were affected more than males (37.5%). The mechanisms of injury were road traffic accident in 05 (62.5%), fall on the outstretched arm in 02 (25%) and direct blow to the elbow in 01 (12.5%) cases. The average loss of ROM of the affected elbows was 12° of flexion-extension and 6° of supination-pronation compared with the unaffected elbows. The average MEPI score was 91.8 ± 7.88 (range, 75 to 100) with 06 excellent and 02 good results. No evidence of postoperative complications was found at the end of follow-up. Conclusion: Capitellum fractures are rare and complex articular injuries. Road traffic accidents are the most common mechanism responsible for such injuries. ORIF is recommended to achieve a good clinical outcome but may result in stiffness and decreased range of motion. Introduction Fractures of the capitellum humeri are uncommon injuries accounting for only 1% of all fractures and around 6% of fractures close to the elbow [1][2][3]. Injuries to the capitellum are usually a result of axial loading of the capitellum by forces transmitted through the radial head, the lateral trochlear ridge and the lateral half of the trochlea [4]. Due to the small number of soft tissue attachments at this site, almost all of these fractures are displaced [5]. The incidence of distal humeral coronal shear fractures is higher among women because of the higher rate of osteoporosis in women and the difference in carrying angle between men and women [6,7]. Patients with capitellar fractures frequently present with pain and swelling of the elbow after injury. Capitellum fractures are often more complex than expected upon analyzing conventional radiographs [8,9]. They are not obvious on anteroposterior radiographs because the fracture line may not be recognized against the background of the distal humerus. They are best seen on a true lateral view [4]. Computed tomography is therefore regularly recommended in these cases so as to diagnose the extent of the fracture and to plan operative treatment [7]. An untreated displaced capitellar fragment undergoes changes ranging from bony absorption to bony proliferation and obliterates the radial fossa [4]. Eventually, arthritic degeneration of the elbow joint ensues, limiting range of motion [10][11][12][13]. Hahn first described a fracture of the capitellum in 1853 [14]. Since then, several classifications have been developed for these fractures.
The classifications most commonly used for capitellum fractures are the descriptive Bryan and Morrey classification later on modified by McKee and the Dubberley classification [11,15] Another classification was proposed by Ring, generally focusing on coronal shear fractures of the distal humerus [8,16] As the complex nature of capitellar fractures has become better appreciated, treatment options have evolved from closed reduction, immobilization, and fragment excision to a preference for open reduction and internal fixation [7,17] . As regard to articular surface reconstruction, various implants including Kirschner wires, head-leass compression screws, Herbert screws, mini fragment screws and bio-absorbable implants have been adopted. Herbert screw fixation is a good option due to excellent compression at the fracture site, stable fixation with the least damage to articular surfaces and nonprominence of the implant intra-articularly. Moreover, early mobilization can be started [4,18] . Stiffness, pain, myositis ossificans, articular incongruity, arthritis, and ulno-humeral instability may fails result if reduction is non-anatomic or if fixation fail. [7] So, a retrospective study was planned to find clinical outcome of 8 cases with fractures of the capitellum considering that open reduction and Herbert screws fixation is a reliable and effective management for fractures of capitellum. Material & Method Considering capitellum fracture as a rare entity, a prospective follow-up study was planned at Mahatma Gandhi Medical College & Hospital, Sitapura, Jaipur, during January 2018 to December 2019. All age group and both the gender were considered for study. Total 08 patients of capitellum fracture operated during January 2018 to December 2018 were enrolled in study. These patients were followed up three monthly for a period of twelve months. Based on previous studies and expert opinion, a predesigned and pretested proforma was used to collect relevant information of patients. Proforma include two parts: first part for current information which include socio-demographic variables, clinical variables, investigation, mode of treatment, outcome and second part is for follow-up which include clinical assessment, complications occur with time and Mayo Elbow Performance Index. Separate sheet was used for each patient. All patients were assessed clinically and subjected to required investigations including hematological and radiological (Xray, CT scan). Fractures were classified using the radiographs according to the modified Bryan-Morrey classifications [11,15] . We classified the fractures in our patients according to above classification. Surgical technique ORIF with lateral column approach  supine positioning  lateral skin incision centered over the lateral epicondyle extending to 2cm distal to the radial head  At times, modification is needed depending upon the fracture pattern.  Headless/Herbert screw fixation  minifragment screw using posterior to anterior fixation  counter sink screw using anterior to posterior fixation  avoid disruption of the blood supply that comes from the posterolateral aspect of the elbow  supplemental fixation for concomitant pathology, LUCL/UCL repair via bone tunnels or suture anchors.  Patients were treated according to fracture pattern. Postoperative care A long arm posterior plaster splint was applied routinely with the elbow at approximately 90° of flexion, which was kept for 1 week and active range of motion was started and elbow ROM brace was given. 
Follow up
Patients were followed up every three months for a period of twelve months, and clinico-radiological evaluation was done. Patients with type 3 and 4 fractures were given tablet indomethacin 75 mg once daily for 4-6 weeks. The status of bone union and complications such as avascular necrosis on radiographs, wound-healing problems or other complications, if any, were recorded. At each follow-up, pain, range of motion, stability of the elbow joint and daily function were assessed by clinical examination, which enabled calculation of the Mayo Elbow Performance Index Score [19].

Statistical Analysis
Data were coded and entered in SPSS 24.0 trial version and presented in tables, graphs and charts. The paired t-test was used for statistical comparison of ROM between the affected and the unaffected elbow. A p value <0.05 was considered statistically significant.

Result
In total, 11 patients were admitted to our institute during the study period. Of these, one patient was excluded because of mental illness, one was lost to follow-up and one did not give consent. The study therefore includes 08 patients with an age range of 24 to 67 years and a mean age of 42.8±14.55 years. 50% of patients were in the age group of 24 to 40 years. Females (62.5%) were affected more than males (37.5%), which may be due to osteoporosis and poor nutrition. The right hand was involved in six (75%) cases, the left in two (25%). The mechanism of injury was road traffic accident in 05 (62.5%), fall on an outstretched arm in 02 (25%) and direct blow to the elbow in 01 (12.5%) cases (Figure 1; numbers in the figure are absolute percentages). According to the Modified Bryan-Morrey classification of capitellar fractures, two fractures were classified as type IV, one as type III, one as type II and four as type I. The mean interval between injury and surgery was 5 days (range 1 to 9 days). Mean operating time was 72 minutes (range 55 to 130 minutes). No intraoperative or postoperative complication was encountered. All fractures healed well in their normal anatomic position as seen on radiographs. All patients had good stability, although 2 reported mild pain during activity without restriction of movement at the final follow-up (twelve months) (Table 1).

The average loss of ROM of the affected elbows was 12° of flexion-extension and 6° of supination-pronation compared with the unaffected elbows. However, the average ROM of the affected and unaffected elbows did not differ significantly with respect to flexion-extension (132° ± 15° and 144° ± 6°, respectively; p = 0.054) or supination-pronation (172° ± 13° and 178° ± 3°, respectively; p = 0.22). The average MEPI score was 91.8 ± 7.88 (range, 75 to 100), with 06 excellent and 02 good results. No evidence of postoperative complications was found at the end of follow-up (Table 1).

Table 1: Patient-wise details and outcome (NA: value not recoverable from the source)
Patient  Age  Sex  Type  Flexion-extension (°)  Supination-pronation (°)  MEPI
1        NA   F    IV    130                    160                       85
2        67   F    IV    120                    155                       75
3        51   M    I     130                    175                       95
4        33   F    I     135                    180                       100
5        24   M    I     145                    180                       100
6        43   F    III   90                     160                       85
7        39   F    II    130                    180                       95
8        60   M    I     125                    160                       NA

Discussion
Based on fracture type and complexity, the comfort of the orthopedic surgeon and protection of the blood supply, various methods for the management of capitellum fractures have been described. These include closed reduction, excision and open reduction with or without internal fixation. Open reduction and internal fixation is a suitable method for maintaining joint congruity while allowing early mobilization. Herbert screws have been used with varying degrees of success.
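For readers who want to re-run the statistical comparison described above, the paired t-test for affected versus unaffected elbow ROM can be reproduced with a few lines of Python. This is an illustrative sketch only: the affected-side values follow Table 1, the unaffected-side values are hypothetical placeholders (the paper does not list them per patient), and the availability of scipy is assumed.

from scipy import stats

# Affected-side flexion-extension arcs from Table 1; unaffected-side values
# are hypothetical placeholders. Each index is the same patient on both sides.
affected_flexion_extension = [130, 120, 130, 135, 145, 90, 130, 125]
unaffected_flexion_extension = [145, 138, 146, 150, 150, 140, 148, 138]

# Paired (dependent-samples) t-test, as used in the study to compare the
# affected and unaffected elbows of the same patients.
t_stat, p_value = stats.ttest_rel(affected_flexion_extension,
                                  unaffected_flexion_extension)

mean_loss = sum(u - a for a, u in zip(affected_flexion_extension,
                                      unaffected_flexion_extension)) / len(affected_flexion_extension)
print(f"mean flexion-extension loss: {mean_loss:.1f} degrees")
print(f"paired t = {t_stat:.2f}, p = {p_value:.3f} (significant if p < 0.05)")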
The present study included eight patients with capitellum fractures operated on during the study period, who were followed up for twelve months to assess their clinical outcome. Half of the patients were below 40 years of age, the mean age was 42.8±14.55 years, and capitellum fracture was found predominantly in females (62.5%). The right hand was involved in 75% of cases, and the most common mechanism of injury was road traffic accident (62.5%). The mean interval between injury and surgery was 5 days (range 1 to 9 days). Amr S. Elgazzar et al. [20] conducted a study in Egypt on 10 patients with capitellum fractures and found a mean patient age of 37 years (range 20 to 48 years), which is lower than in the present study. In their study the male to female ratio was 1.5:1, showing male predominance, whereas the present study shows female predominance with a ratio of 1:1.67. A study by Valentin Rausch et al. [21] on 27 patients with capitellum fractures found a median age of 57 years (range 4 to 78 years); males were 40.75% and females 59.25%, which is similar to the present study (M 37.5%, F 62.5%). Falling on an outstretched hand was the most common cause of injury in the study of Amr S. Elgazzar et al. [20], while a direct blow to the elbow during a fall was found in 70.37% of patients in the study of Valentin Rausch et al. [21]. Similar to the present study, the mean age at operation was 47 (18-65) years in the study by Giuseppe Giannicola et al. [22], and all fractures had occurred following a fall onto the elbow or the outstretched hand or in motor vehicle accidents. The study by Tengbo Yu et al. [23] found a mean age of 42 ± 13 years (range, 19 to 64 years); ten fractures occurred after a fall and 5 in road traffic accidents. Elgazzar et al. [20] found that, out of 10 capitellar fractures, six were classified as type I, two as type II and three as type IV. The mean extension of the elbow was 7.5° (range 0-20°). Overall, six results were found to be excellent and four to be good according to the Mayo elbow performance score.

Conclusion
Capitellum fractures are rare and complex articular injuries. They are more common in females owing to underlying causes such as osteoporosis. Road traffic accidents are the most common mechanism responsible for such injuries. Open reduction and internal fixation with Herbert screws, biodegradable screws, K-wires or suture anchors, chosen according to the fracture pattern and associated injuries, is recommended because this procedure causes minimal articular damage and is able to achieve stable fixation and restoration of a functional range of motion. Some patients may be left with a decreased range of motion, which is more marked with type 3 fractures.

Limitation
Because the fracture is rare, only a small number of patients could be included in the study. The follow-up period was short, and many complications require a longer time to develop and may therefore have been missed in the present study. A larger number of patients and a longer follow-up period are necessary to determine the true incidence of complications and the outcome.
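The excellent/good labels attached to the MEPI scores above follow the conventional Mayo Elbow Performance Index outcome bands. The small helper below encodes those bands; the cut-off values (>= 90 excellent, 75-89 good, 60-74 fair, < 60 poor) are an assumption on our part, since the paper itself does not list them.

def mepi_grade(score: float) -> str:
    # Conventional MEPI outcome bands (assumed; not stated in the paper).
    if score >= 90:
        return "excellent"
    if score >= 75:
        return "good"
    if score >= 60:
        return "fair"
    return "poor"

print(mepi_grade(91.8))  # the reported mean score falls in the "excellent" band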
Study on species diversity of Akanthomyces (Cordycipitaceae, Hypocreales) in the Jinyun Mountains, Chongqing, China

Abstract Akanthomyces species have so far only been reported from Guizhou and Qinghai Province, with few reports from other regions in China. In this research, the species diversity of Akanthomyces in the Jinyun Mountains, Chongqing was investigated. Fourteen infected spider specimens were collected, and two new species (A. bashanensis and A. beibeiensis) and a known species (A. tiankengensis) were established and described according to a multi-locus phylogenetic analysis and morphological characteristics. Our results reveal abundant Akanthomyces specimens, with three species found at Jinyun Mountain. Because Akanthomyces is an important group of entomopathogenic fungi, further attention needs to be paid to the diversity of other entomopathogenic fungi in Chongqing, China.

Based mainly on phylogenetic analyses, several Akanthomyces species (A. arachnophilus (Petch) Samson & H.C. Evans, A. cinereus Hywel-Jones, A. koratensis Hywel-Jones, A. longisporus B. Huang et al., A. novoguineensis Samson & B.L. Brady, A. ovalongatus L.S. Hsieh et al. and A. websteri Hywel-Jones) were transferred to the new genus Hevansia Luangsa-ard et al. (Kepler et al. 2017). In addition, Lecanicillium species such as Lecanicillium attenuatum Zare & W. Gams and L. lecanii were transferred into Akanthomyces (Kepler et al. 2017). Akanthomyces is an important genus of entomopathogenic fungi, and its diverse bioactive substances have attracted widespread attention (Lee et al. 2005; Madla et al. 2005; Kuephadungphan et al. 2014; Putri et al. 2014; Kinoshita et al. 2017). However, Akanthomyces species have only been reported from Guizhou and Qinghai Province, and there have been few reports from other regions in China (Chen et al. 2018, 2019c, 2020b, 2020c, 2022a; Wang et al. 2023). In this research, the species diversity of Akanthomyces in the Jinyun Mountains, Chongqing was investigated. Several spider-associated specimens were found, and a few new Akanthomyces strains were isolated and purified. The goal of this research was to identify these new strains by multigene phylogeny as well as by morphological characteristics.

Specimen collection and identification
Fourteen infected spider specimens were collected from Jinyun Mountain (29°50'22.14959"N, 106°23'18.0744"E), Beibei District, Chongqing, in May 2021. The surface of each spider body was rinsed with sterile water, followed by surface sterilisation with 75% ethanol for 3-5 s and rinsing 3 times with sterilised water. After drying on sterilised filter paper, the mycelium or a part of the sclerotium was removed from the specimen and inoculated on potato dextrose agar (PDA) and improved potato dextrose agar (PDA, 1% w/v peptone) plates (Chen et al. 2019b). Fungal colonies emerging from the specimens were isolated and cultured at 25 °C for 14 days under 12 h light/12 h dark conditions following the protocols described by Zou et al. (2010). The specimens and axenic cultures were deposited at the Institute of Fungus Resources, Guizhou University (formerly the Herbarium of Guizhou Agricultural College; code, GZAC), Guiyang City, Guizhou, China. Macroscopic characterisation was determined from PDA cultures incubated at 25 °C for 14 days: the growth rate of the colony, the presence of octahedral crystals, and the colours of the colony (surface and reverse) were observed.
To investigate the microscopic characteristics, a small amount of the colony was removed and mounted in lactophenol cotton blue or 20% lactate acid solution and observed using an optical microscope (OM, DM4 B, Leica, Germany). DNA extraction, polymerase chain reaction amplification and nucleotide sequencing DNA extraction was carried out using a fungal genomic DNA extraction kit (DP2033, BioTeke Corporation) according to Liang et al. (2011). The extracted DNA was stored at −20 °C. Polymerase chain reaction (PCR) was used to amplify genetic markers using the following primer pairs: ITS4/ITS5 for the internal transcribed spacer (ITS) region (White et al. 1990), LR0R/LR5 for 28s large subunit ribosomal (LSU) (Vilgalys and Hester 1990), fRPB2-7cR/fRPB2-5F for RNA polymerase II second largest subunit (RPB2) (Liu et al. 1999) and 2218R/983F for translation elongation factor 1 alpha (TEF) (Castlebury et al. 2004). The thermal cycle of PCR amplification for these phylogenetic markers was set up following the procedure described by Chen et al. (2021). PCR products were purified and sequenced at Sangon Biotech (Shanghai) Co. The resulting sequences were submitted to GenBank (Table 1). Sequence alignment and phylogenetic analyses DNASTAR Lasergene (version 6.0) was used to edit the DNA sequences. The ITS, LSU, RPB2 and TEF sequences were downloaded from GenBank, based on Sung et al. (2007), Kepler et al. (2017), Mongkolsamrit et al. (2018), Chen et al. (2018Chen et al. ( , 2019cChen et al. ( , 2020bChen et al. ( , 2020cChen et al. ( , 2022a, Aini et al. (2020) and Wang et al. (2023) and others selected on the basis of BLAST searches in GenBank. ITS sequences and other loci were aligned and edited by MAFFT online service (Katoh et al. 2019) and MEGA6 (Tamura et al. 2013). Combined sequences of ITS, LSU, RPB2 and TEF were obtained using SequenceMatrix v.1.7.8 (Vaidya et al. 2011). The model was selected for Bayesian analysis by ModelFinder (Kalyaanamoorthy et al. 2017) in PhyloSuite software (Zhang et al. 2020). The combined loci were analysed using Bayesian inference (BI) and maximum likelihood (ML) methods. For BI, a Markov chain Monte Carlo (MCMC) algorithm was used to generate phylogenetic trees with Bayesian probabilities using MrBayes v.3.2 (Ronquist et al. 2012) for the combined sequence datasets. The Bayesian analysis resulted in 20,001 trees after 10,000,000 generations. The first 4,000 trees, representing the burn-in phase of the analysis, were discarded, while the remaining 16,001 trees were used to calculate posterior probabilities in the majority rule consensus tree. After the analysis was finished, each run was examined using the programme Tracer v.1.5 (Drummond and Rambaut 2007) to determine burn-in and confirm that both runs had converged. ML analyses were performed with IQ-TREE (Trifinopoulos et al. 2016), using an automatic selection of the model. The final alignment and the original phylogenetic tree are available from TreeBASE under submission ID 30378. Genealogical Concordance Phylogenetic Species Recognition (GCPSR) analysis The Genealogical Concordance Phylogenetic Species Recognition model was applied to analyse the related species. The pairwise homoplasy index (PHI) (Bruen et al. 2006) is a model test based on the fact that multiple gene phylogenies will be concordant between species and discordant due to recombination and mutations within a species. The test was performed in SplitsTree4 (Huson and Bryant 2006) as described by Quaedvlieg et al. 
(2014) to determine the recombination level within phylogenetically closely-related species using a four-locus concatenated dataset. The new species and their closely-related species were analysed using this model. The relationships between closely-related species were visualised by constructing a split graph, using both the LogDet transformation and splits decomposition options. The final value of the highest scoring tree was -11,790.345, which was obtained from the ML analysis of the dataset (ITS+LSU+RPB2+TEF). The parameters of the GTR model used to analyse the dataset were estimated, based on the following frequencies: A = 0.236, C = 0.283, G = 0.272, T = 0.209; substitution rates AC = 1.00000, AG = 2.12340, AT = 1.00000, CG = 1.00000, CT = 5.43884 and GT = 1.00000, as well as the gamma distribution shape parameter α = 0.557. The selected model for BI analysis was GTR+F+I+G4 (ITS+L-SU+TEF) and K2P+G4 (RPB2). The phylogenetic trees (Fig. 1) constructed using ML and BI analyses were largely congruent and strongly supported in most branches. Phylogenetic analyses demonstrated that eight new strains formed a subclade with Akanthomyces tiankengensis (KY11571 and KY11572) with high statistical support in ML analysis (92% ML). Strains CQ05171, CQ05172, CQ05811 and CQ05812 clustered with A. tiankengensis into a subclade, while the new species A. beibeiensis (CQ05921 and CQ05922) and A. bashanensis (CQ05621 and CQ05622) clustered in a subclade with high statistical support (96% ML/0.98 PP; Fig. 1). GCPSR analysis A four-locus concatenated dataset (ITS, LSU, RPB2 and TEF) was used to determine the recombination level within Akanthomyces bashanensis (CQ05621), A. beibeiensis (CQ05921) and A. tiankengensis (KY11571, CQ05171, CQ05811). Chaiwan et al. (2022) noted that, if the PHI is below the 0.05 threshold (Φw < 0.05), it indicates that there is significant recombination in the dataset, meaning that related species in a group and recombination level are not different. If the PHI is above the 0.05 threshold (Φw > 0.05), it indicates that it is not significant, which means the related species in a group level are different. The result of the pairwise homoplasy index (PHI) test of A. bashanensis, A. beibeiensis and A. tiankengensis was 0.333 and revealed that the three species were different (Fig. 2). Description. Spider host completely covered by white mycelium. Conidiophores mononematous, arising from the lateral hyphae. Colonies on PDA, attaining a diameter of 26-27 mm after 14 days at 25 °C, white, consisting of a basal felt, floccose hyphal overgrowth; reverse yellowish. Hyphae septate, hyaline, smooth-walled, 1.5-1.9 μm wide. Conidiophores mononematous, hyaline, smooth-walled, with single phialide or whorls of 2-4 phialides or verticillium-like from hyphae directly, 12.1-20.5 × 1.5-2.1 μm. Phialides consisting of a cylindrical, somewhat inflated base, 11.8-12.9 × 1.3-1.6 μm, tapering to a thin neck. Conidia hyaline, smooth-walled, fusiform to ellipsoidal, 1.7-2.6 × 1.6-1.8 μm, forming divergent and basipetal chains. Sexual state not observed. Etymology. Referring to its location in Jinyun Mountain, which was formerly known as Bashan. Remarks. Akanthomyces bashanensis was easily identified as Akanthomyces, based on the BLASTn result in NCBI and the phylogenetic analysis of combined datasets (ITS, LSU, RPB2, TEF) (Fig. 1) and it has a close relationship with another new species, A. beibeiensis. A. bashanensis was easily distinguished from A. 
beibeiensis by its longer phialides and smaller conidia. Jeewon and Hyde (2016) recommended that a minimum of > 1.5% nucleotide differences in the ITS regions and protein coding genes may be indicative of a new species. The pairwise dissimilarities of ITS, LSU, RPB2 and TEF sequences show 11 bp differences within 569 bp (1.93%), 19 bp differences within 881 bp (2.15%), 13 bp differences within 1070 bp (1.21%) and 4 bp differences within 973 bp (0.41%) between A. bashanensis and A. beibeiensis, respectively. The pairwise dissimilarities of ITS, LSU, RPB2 and TEF sequences show 10 bp differences within 569 bp (1.75%), 20 bp differences within 881 bp (2.27%), 19 bp differences within 1070 bp (1.77%) and 4 bp differences within 973 bp (0.41%) between A. bashanensis and A. tiankengensis, respectively. Furthermore, A. aranearum (Petch) Mains and A. ryukyuensis (Kobayasi & Shimizu) Mongkols., Noisrip., Thanakitp., Spatafora & Luangsa-ard were both absent from the available sequence in NCBI and having a spider host. Comparing with the typical characteristics (Table 2), A. bashanensis was easily distinguished from A. aranearum by its cylindrical phialide, smaller fusiform to ellipsoidal conidia and absence of synnemata and distinguished from A. ryukyuensis by absence of teleomorphs. Thus, the morphological characteristics and molecular phylogenetic results support A. bashanensis as a new species. Remarks. Akanthomyces beibeiensis was easily identified as Akanthomyces according to the blast result in NCBI and the phylogenetic analysis of combined datasets (ITS, LSU, RPB2, TEF) (Fig. 1) and it has a close relationship with another new species, A. bashanensis. A. beibeiensis is easily distinguished from A. bashanensis by its shorter phialide and larger conidia. The pairwise dissimilarities of ITS, LSU, RPB2 and TEF sequences show 4 bp differences within 569 bp (0.7%), 8 bp differences within 881 bp (0.9%), 20 bp differences within 1070 bp (1.86%) and 2 bp differences within 973 bp (0.2%) between A. beibeiensis and A. tiankengensis, respectively. Furthermore, A. aranearum and A. ryukyuensis were both absent from the available sequences in NCBI and had spider hosts. Comparing with the typical characteristics (Table 2), A. beibeiensis was easily distinguished from A. aranearum by its cylindrical phialide, smaller fusiform to ellipsoidal conidia and absence of synnemata, and distinguished from A. ryukyuensis by absence of teleomorphs. Thus, the morphological characteristics and molecular phylogenetic results support A. beibeiensis as a new species. Description. Spider host completely covered by white mycelium. Colonies on PDA, attaining a diameter of 27-28 mm after 14 days at 25 °C, white, consisting of a basal felt, floccose hyphal overgrowth; reverse yellowish. Hyphae septate, hyaline, smooth-walled, 2.4-2.6 μm wide. Conidiophores mononematous, hyaline, smooth-walled, with single phialide or whorls of 2 phialides. Phialides consisting of a cylindrical, somewhat inflated base, 16.2-25.3 × 2.1-2.9 μm, tapering to a thin neck. Conidia hyaline, smooth-walled, subglobose to ellipsoidal, 2.4-3.8 × 2.1-3.0 μm, forming divergent and basipetal chains. Sexual state not observed. Discussion Akanthomyces species are widely distributed and commonly isolated from soil, insects and spiders (Chen et al. 2018(Chen et al. , 2019cShrestha et al. 
2019 In the present study, the new strains differed from other spider-pathogenic species and had a close relationship with Akanthomyces tiankengensis, based on the phylogenetic analysis. Two new species were established by combining phylogenetic analysis and morphological characteristics. Interestingly, A. tiankengensis was located at Monkey-Ear Tiankeng and found in November, indicating that it had adapted to the cold environment. Whether these new species can adapt to their environment and have special metabolic processes is worthy of further research. The hosts of Akanthomyces species cover Hemiptera, Coleoptera, Lepidoptera, Orthoptera and Araneae (Hodge 2003;Mongkolsamrit et al. 2018;Chen et al. 2020bChen et al. , 2022c. Chen et al. (2022b) noted that a host jump may be common in Simplicillium species, the spider-associated species may have originated from insects and then jumped to a spider host. An abundant diversity in insects and spiders has been discovered at Jinyun Mountain (Huang and Zhang 1991;Li et al. 2009aLi et al. , 2009bWang et al. 2012;Huang et al. 2021;Yan et al. 2021). Whether the new species originally came from an insect host or other substrates and then jumped to a spider host, is also worthy of further research. Mains (1950) and Vincent et al. (1988) surmised that cylindrical synnemata covered by a hymenium-like layer of phialides producing one-celled catenulate conidia were the typical characteristics of Akanthomyces. However, Chen et al. (2020bChen et al. ( , 2020c reported two new Akanthomyces species with mononematous conidiophores. In the present study, the two new species had mononematous conidiophores. Akanthomyces species with mononematous conidiophores are often present on the surface of moss or in open places, from which their conidia can easily be spread by airflow diffusion or other methods. Those Akanthomyces species with synnematous conidiophores often appear in the shrubbery of the original forest, litter layer or shallow soil (Hywel-Jones 1996), where air flow under the forest canopy is slow and humidity is high and where dispersal of conidia through airflow diffusion is difficult. Therefore, the presence of synnematous conidiophores may be the result of convergent evolution, which could help them to fit in their niche (Abbott 2002). Thus, the Akanthomyces species may change their type of conidiophores to increase their adaptability to different environmental conditions. The taxonomic delimitation of Akanthomyces was originally based on morphological characteristics. Kepler et al. (2017) proposed the rejection of Torrubiella Boud. and Lecanicillium W. Gams & Zare in favour of Akanthomyces and transferred Torrubiella and Lecanicillium species into Akanthomyces, which has resulted in a combined analysis of morphological characteristics and phylogenetic analysis for the taxonomy of Akanthomyces. In this research, a PHI test and base difference rate were added, which could solve the taxonomic delimitation of cryptic species. Amongst the four loci (ITS, LSU, RPB2 and TEF), the locus TEF could not be used to distinguish A. bashanensis, A. beibeiensis and A. tiankengensis. However, any two of the three loci could easily be used to distinguish these three species. Thus, we recommend that at least two loci should be provided for the cryptic Akanthomyces species and analysis of the cryptic species with its related species should be done using the multiple methods. 
Furthermore, genomic data, phylogenetic networks and haplotype analysis should be applied to cryptic species so that the taxonomy of Akanthomyces comes closer to a natural taxonomic system. Surveys of the diversity of entomopathogenic fungi in Natural Reserves and Forest Parks in different regions of China have shown that an abundant diversity of entomopathogenic fungi is present in the study areas, with high species diversity in specific areas (Chen et al. 2019a, 2020a; Fan et al. 2020; Zhao et al. 2020; Zhang et al. 2021). Jinyun Mountain is located in Chongqing City and has varied altitudes and abundant plant and animal resources, which have nurtured abundant fungal resources (Zhou et al. 2012a, b). In this research, abundant Akanthomyces specimens were found at Jinyun Mountain, and further attention needs to be paid to the diversity of other entomopathogenic fungi in Chongqing, China.
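The species boundaries discussed above rest on pairwise dissimilarities between aligned loci (for example, 11 bp differences within 569 bp of ITS, i.e. 1.93%) judged against the > 1.5% guideline of Jeewon and Hyde (2016). The sketch below shows one way to compute such a value, assuming the two sequences come from the same alignment and skipping gap or ambiguous positions; the function name and the toy sequences are illustrative and are not taken from the GenBank records.

def pairwise_dissimilarity(seq_a, seq_b):
    """Return (differences, compared positions, percent dissimilarity) for two
    aligned sequences of equal length; '-' and 'N' positions are skipped."""
    if len(seq_a) != len(seq_b):
        raise ValueError("sequences must come from the same alignment")
    skip = {"-", "N"}
    diffs = compared = 0
    for a, b in zip(seq_a.upper(), seq_b.upper()):
        if a in skip or b in skip:
            continue
        compared += 1
        if a != b:
            diffs += 1
    percent = 100.0 * diffs / compared if compared else 0.0
    return diffs, compared, percent

# Toy example: one substitution in a 14 bp alignment.
d, n, pct = pairwise_dissimilarity("ATGCTACGATCGAT", "ATGTTACGATCGAT")
print(f"{d} bp differences within {n} bp ({pct:.2f}%); > 1.5% suggests a distinct species")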
High-mobility group box 1 protein (HMGB1) from Cherry Valley duck mediates signaling pathways and antiviral activity High-mobility group box 1 protein (HMGB1) shows endogenous damage-associated molecular patterns (DAMPs) and is also an early warning protein that activates the body’s innate immune system. Here, the full-length coding sequence of HMGB1 was cloned from the spleen of Cherry Valley duck and analyzed. We find that duck HMGB1(duHMGB1) is mostly located in the nucleus of duck embryo fibroblast (DEF) cells under normal conditions but released into the cytoplasm after lipopolysaccharide (LPS) stimulation. Knocking-down or overexpressing duHMGB1 had no effect on the baseline apoptosis rate of DEF cells. However, overexpression increased weakly apoptosis after LPS activation. In addition, overexpression strongly activated the IFN-I/IRF7 signaling pathway in DEF cells and significantly increased the transcriptional level of numerous pattern recognition receptors (PRRs), pro-inflammatory cytokines (IL-6, TNF-α), IFNs and antiviral molecules (OAS, PKR, Mx) starting from 48 h post-transfection. Overexpression of duHMGB1 strongly impacted duck virus replication, either by inhibiting it from the first stage of infection for novel duck reovirus (NDRV) and at late stage for duck Tembusu virus (DTMUV) or duck plague virus (DPV), or promoting replication at early stage for DTMUV and DPV infection. Importantly, data from duHMGB1 overexpression and knockdown experiments, time-dependent DEF cells transcriptional immune responses suggest that duHMGB1 and RIG-I receptor might cooperate to promote the expression of antiviral proteins after NDRV infection, as a potential mechanism of duHMGB1-mediated antiviral activity. Introduction High-mobility group box 1 protein (HMGB1) belongs to a family of nonhistone chromosomal proteins, which are widely conserved in the nucleus of eukaryotic cells. HMGB1 was discovered in the 1960s and was named for its high migration ability in polyacrylamide gel electrophoresis [1]. HMGB1 has two N-terminal DNAbinding domains-HMG box A and box B-as well as an acidic C-terminal domain [2]. In mammals, HMGB1 has two nuclear localization sequences and no endoplasmic reticulum localization sequence; thus, HMGB1 is normally located in the nucleus. Proper signal stimulation leads to high acetylation of HMGB1 resulting in cytosolic relocation [3]. HMGB1 has different redox states due to the different extracellular redox environment. HMGB1 in the all-thiol state acts primarily on the RAGE receptor resulting in the production and release of pro-inflammatory cytokines and chemokines [4]. When presented in the oxidative environment, cysteines 23 form of HMGB1. This disulfide HMGB1 can act on the TLR4 receptor and modulate the production of inflammatory cytokines [5,6]. Studies have confirmed that the HMGB1 from humans can be involved in inflammatory responses as a proinflammatory cytokine [7]. HMGB1 is an endogenous damage-associated molecular patterns (DAMPs) biomolecule. At the onset of inflammation, HMGB1 can be passively released from necrotic cells or actively secreted by stimulated monocytes/macrophages. The release of HMGB1 is observed later than the pro-inflammatory mediators, such as IL-1β and TNF-α, but is rather sustained. Thus, HMGB1 is considered to belong to the late inflammatory mediators in rats [8,9]. 
HMGB1 has cytokine-related characteristics and can be actively secreted by activated immune cells (such as monocytes/macrophages, natural killer cells, and dendritic cells). It acts on the surface receptors of immune cells and endothelial cells. Extracellular HMGB1 induces the expression of inflammatory factors and further release of HMGB1 which leads to exacerbation of inflammation. HMGB1 stimulates the release of chemokines and cytokines, and increases the expression of adhesion molecules involved in immune responses, thus inducing the chemotaxis and activation of inflammatory cells and favouring the disruption of the epithelial barrier [10,11]. Recent studies have shown that the DAMPs are released after cell damage or death, which become new hotspots in the initiation and persistence of innate immune responses [12]. HMGB1 is involved in the pathogenesis of a variety of viral diseases. Cellular HMGB1 and "replication transcriptional activator" (Rta) synergistically up-regulate the ORF 50 promoter to promote Kaposi's sarcoma-associated virus replication [13]. Extracellular HMGB1 is a late inflammatory mediator released after infection with West Nile virus [14], atypical pneumonia virus [15], porcine reproductive and respiratory syndrome virus [16], grass carp reovirus [17]. However, while the involvement of HMGB1 in a variety of viral diseases has been confirmed, the presence or absence of HMGB1 in ducks and the best approaches to regulate the host's antiviral innate immune mechanisms are currently unclear. Animals, cells, virus, and ligands Cherry Valley ducks were purchased from a farm near Taian, China. Duck embryo fibroblast (DEF) cells derived from 11-day-old duck embryos were cultured in Dulbecco's modified Eagle medium (DMEM) (Gibco, Grand Island, NY, USA) with 10% fetal bovine serum (Transgen, Beijing, China). These samples were cultured at 37 °C, 5% (v/v) CO 2 . Duck Tembusu virus (DTMUV)-FX2010 strain, novel duck reovirus (NDRV), and duck plague virus (DPV)-GM strain were used in this study, as described [18][19][20][21]. DEF cells were first seeded into 96-well plates and used when the cells reached 80% confluency. Ten-fold dilutions of the virus stock solution were prepared in DMEM medium and 100 μL of each dilution was added to a 96-well cell culture plate; eight replicates were set for each dilution. A blank cell culture control was also set up. The cells were cultured in a 37 °C, 5% (v/v) CO 2 incubator. Finally, the cells were observed each day until the CPE produced by the virus no longer progressed. The virus titers were determined to be 10 4.9 (DTMUV), 10 4.2 (NDRV), and 10 6.1 (DPV) TCID 50 (50% tissue culture infective dose)/mL in DEF cells by the Reed and Muench method [22]. Lipopolysaccharide (LPS) from Escherichia coli O111:B4 and purified by phenol extraction was purchased from Sigma (Sigma-Aldrich Corp., St. Louis, MO, USA). Molecular cloning of the HMGB1 Total RNA was extracted from duck spleen via TransZol up (Transgen). Reverse transcription of RNA into cDNA used a HiScriptRII One Step RT-PCR kit (Vazyme, Nanjing, China). To clone the duck HMGB1 (duHMGB1), primers (Additional file 1) were designed based on the predicated gene in the GenBank (Accession Number, XM_027469875.1) (Additional file 2). All PCR products were analyzed using electrophoresis on a 1% agarose (Biowest, Hong Kong) gel in 1 × TAE at 120 V for 20 min. The PCR products were then cloned into a pMD19-T (TaKaRa) vector and transformed into E. coli DH5α (Vazyme, Nanjing, China). 
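Returning to the virus-titration step described earlier in this section: the TCID50 titres quoted above (for example 10^4.9 TCID50/mL for DTMUV) follow the Reed and Muench method applied to the eight-replicate, ten-fold dilution series. The sketch below is one common formulation of that calculation; the CPE counts in the example are hypothetical and serve only to illustrate the arithmetic.

def reed_muench_endpoint(dilution_exponents, infected, wells_per_dilution):
    """Return the 50% endpoint as a positive exponent x, i.e. the titre is
    10^x TCID50 per inoculum volume. dilution_exponents are the d in 10^-d."""
    uninfected = [wells_per_dilution - n for n in infected]
    # Accumulate infected wells toward the concentrated end and uninfected
    # wells toward the dilute end, then convert to percent infected.
    cum_inf = [sum(infected[i:]) for i in range(len(infected))]
    cum_uninf = [sum(uninfected[:i + 1]) for i in range(len(uninfected))]
    percent = [100 * ci / (ci + cu) for ci, cu in zip(cum_inf, cum_uninf)]
    for i in range(len(percent) - 1):
        if percent[i] >= 50 > percent[i + 1]:
            # Proportionate distance between the dilutions bracketing 50%.
            pd = (percent[i] - 50) / (percent[i] - percent[i + 1])
            return dilution_exponents[i] + pd * (dilution_exponents[i + 1] - dilution_exponents[i])
    raise ValueError("the dilution series does not bracket a 50% endpoint")

# Hypothetical CPE readout: 8 wells per 10-fold dilution from 10^-1 to 10^-6.
exponent = reed_muench_endpoint([1, 2, 3, 4, 5, 6], [8, 8, 7, 3, 1, 0], 8)
print(f"titre = 10^{exponent:.1f} TCID50 per inoculum volume")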
Competent cells were then sequenced. Animal experiments Three-week old healthy ducks were used as source of lymphatic, circulatory, digestive, respiratory, urinary, and central nervous tissues including the bursa, spleen, heart, glandular stomach, intestine, trachea, lung, kidney, brain, etc. The extraction and reverse transcription of total RNA were performed as described above. The expression of duHMGB1 in these tissues and organs was measured using a SYBR Green PCR Kit (Vazyme, Nanjing, China). Plasmid construction The DNA fragment containing the complete ORF of duHMGB1 to which the BamH I and Not I restriction sites were added was subcloned into the pcDNA3.0(+) expression vector using Hieff Clone TM Multi One Step Cloning Kit (Yeasen, Shanghai, China). This recombinant plasmid was named pcDNA3.0(+)-duHMGB1-Flag. Western blotting analysis DEF cells were cultured in a 6-well plate for 12-24 h. When the cells reached approximately 80% confluence, the pcDNA3.0(+)-duHMGB1-Flag and pcDNA3.0(+)-Flag were transfected into the DEF cells using Lipofectamine 2000 (Invitrogen, Carlsbad, CA, USA), respectively. After 24 h, the cells were lysed with RIPA buffer (Solarbio, Beijing, China) containing protease inhibitor (Beyotime). The processed protein samples were subjected to SDS-PAGE electrophoresis, and the proteins were transferred to polyvinylidene fluoride (PVDF) membrane (Solarbio, Beijing, China). The PVDF membrane was blocked with 5% skim milk powder overnight at 4 °C. The samples were then incubated with mouse anti-Flag antibody (ProteinTech, Shenzhen, China) for 2 h at 37 °C. The membrane was then incubated with the secondary antibody under similar conditions. The protein bands were visualized with an ECL kit (Bio-Rad). Indirect immunofluorescence DEF cells were seeded in 24-well culture plates plated with cell-climbing slices. The pcDNA3.0(+)-duHMGB1-Flag was transfected into DEF cells as an experimental group, and pcDNA3.0(+)-Flag was transfected into DEF cells as a control group. Subcellular localization of duH-MGB1 was determined at 24 hours post-transfection (hpt). We next studied duHMGB1 release into the cytoplasm upon LPS-stimulation. After transfecting pcDNA3.0(+)-duHMGB1-Flag into DEF cells for 24 h, 500 ng/mL LPS was added to the experimental group, and the control group was treated with equal volumes of DMEM medium. Immunofluorescence imaging of DEF cells was performed at 12, 24 and 36 h after LPS treatment. Cells were fixed with 4% paraformaldehyde for 15 min and then permeabilized to the cell membrane for 10 min with 0.1% Triton X-100. The cells were incubated with mouse anti-Flag antibody (ProteinTech, Shenzhen, China) for 1 h at 37 °C, and then incubated with fluorescein isothiocyanate (FITC)-goat anti-mouse IgG (Transgen) at 37 °C for 45 min. Finally, the cell climbing slices were taken out. The cells were studied with a laser scanning confocal microscope after sealing with mounting medium (DAPI antifade, Solarbio). Flow cytometry DEF cells were seeded at a density of 1 × 10 6 cells per well into 12-well plates and cultured overnight at 37 °C. To investigate whether duHMGB1 affects apoptosis in DEF cells, two sets of experiments were designed. In the first set of experiments, DEF cells were transfected with pcDNA3.0(+)-duHMGB1-Flag or pcDNA3.0(+)-Flag (control group). The apoptosis rate of DEF cells was examined for 48 h after transfection. In the second experiment, pcDNA3.0(+)-duHMGB1-Flag and pcDNA3.0(+)-Flag were transfected into DEF cells. 
After 24 h, 500 ng/mL LPS was added and the culture was continued for 24 h to determine the apoptosis rate. All cells were digested with trypsin (without EDTA), and the digestion was stopped with complete medium. The apoptosis rate of the DEF cells was measured with a flow cytometer using a FITC Annexin V Apoptosis Detection Kit (BD Biosciences, Franklin Lakes, NJ, USA). Detection of related gene mRNA expression levels The DEF cells were cultured in a 6-well plate for 12-24 h. The pcDNA3.0(+)-duHMGB1-Flag and pcDNA3.0(+)-Flag were transfected into the DEF cells using Lipofectamine 2000 (Invitrogen, Carlsbad, CA, USA) when the cells reached approximately 80% confluence. Cells from the experimental group (pcDNA3.0(+)-duHMGB1-Flag) and the control group (pcDNA3.0(+)-Flag) were collected at each time point at 24, 36, 48, and 60 hpt. The qRT-PCR was performed using ChamQ TM SYBR ® qPCR Master Mix (Vazyme, Nanjing, China) to detect the relative expression of target genes with primer sequences in Additional file 3. The duck glyceraldehyde-3-phosphatedehydrogenase (GAPDH, GenBank ID: GU564233.1) was used as an endogenous reference gene. The foldchanges in gene expression were calculated using the 2 −ΔΔCT method with GAPDH serving as a normalization gene and mean control values as the baseline reference. [23] The differences among the groups were evaluated by non-parametric tests (Mann-Whitney U tests) using SPSS software version 17.0 (SPSS Inc., Chicago, IL, USA), *P < 0.05; **P < 0.01; ***P < 0.001. Dual-luciferase reporter assay DEF cells in 24-well plate with 80% confluence were cotransfected with pcDNA3.0(+)-duHMGB1-Flag plasmid or empty vector (500 ng/well), reporter plasmid (100 ng/well), and pRL-TK plasmid (Promega) (50 ng/ well) by Lipofectamine 2000 (Invitrogen, Carlsbad, CA, USA). The luciferase reporter plasmids (pGL3-IFN-β-Luc, pGL3-IRF7-Luc and pGL3-NF-κB plasmids) were prepared in-house [18,24]. The specific details are as follows: The promoter of the avian (chicken) for IFN-β was CCT CCA GTA CAG CCA CCA CAT GGT CTC ACC TTG CCA GAC TCA AGA GAA GCC TGA AGG AAA AAA GCA AAT AGA AAG CAA AAC GAA AAA TGG AAA CAA GGG AAT TCT CTC TAC ATA ATG ATG AAA AGA AAC ATG CAA CAT CTC ATA AAG CTG GCC TCA CTG CAA CAC CCC AAAC. The chicken IRF-7 (chIRF-7) binding positive regulatory domains were predicted by the TFSEARCH: Searching Transcription Factor Binding Sites. The pGL3-chIRF-7-Luc contains four copies of the IRF-7-positive regulatory domain motif of the chicken IFN-β promoter in front of a luciferase reporter gene (sequence: TTC ACT TTC AAT A). Cells were harvested as lysate at four time points, and the luciferase activity was detected with a dual-luciferase reporter assay system (Promega) according to the manufacturer's instructions. Detection of antiviral activity of duHMGB1 The pcDNA3.0(+)-duHMGB1-Flag and empty vector were transfected into DEF cells at 80% confluence in 6-well plates. Cells transfected with pcDNA3.0(+)-duH-MGB1-Flag served as the experimental group, and cells transfected with an empty vector were the control group. All cells were infected with DTMUV, NDRV, and DPV at 24 hpt, respectively. The medium in the 6-well plate was discarded, and the cells were washed with PBS three times, followed by infection with the viruses at the 10 TCID 50 /mL concentration for 1 h. The virus solution was then discarded, and cells were washed with PBS twice, and 2 mL low-serum medium (DMEM with 2% fetal bovine serum) was added into each well. 
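Fold-changes in the qRT-PCR experiments described above are computed with the 2^-ΔΔCT method against GAPDH and compared between groups with the Mann-Whitney U test. The sketch below shows both steps in a minimal form; the Ct values are hypothetical placeholders, not the study's measurements, and the availability of scipy is assumed.

from scipy.stats import mannwhitneyu

def fold_change_ddct(ct_target_sample, ct_gapdh_sample,
                     ct_target_control, ct_gapdh_control):
    # Normalize each Ct to the GAPDH reference, subtract the control delta-Ct,
    # and convert the remaining delta-delta-Ct to a fold-change.
    delta_sample = ct_target_sample - ct_gapdh_sample
    delta_control = ct_target_control - ct_gapdh_control
    return 2 ** -(delta_sample - delta_control)

# Hypothetical Ct values for three biological replicates of a target gene.
treated = [fold_change_ddct(22.1, 18.0, 25.0, 18.1),
           fold_change_ddct(21.8, 17.9, 24.9, 18.0),
           fold_change_ddct(22.4, 18.2, 25.2, 18.2)]
control = [1.0, 1.1, 0.9]  # empty-vector controls scatter around 1 by construction

u_stat, p_value = mannwhitneyu(treated, control, alternative="two-sided")
print([round(f, 2) for f in treated], f"U = {u_stat}, p = {p_value:.3f}")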
The Si-duHMGB1-2 and Si-NC were transfected into DEF cells at 70% confluence in 6-well plates. Cells transfected with Si-duHMGB1-2 served as the experimental group, and cells transfected with an Si-NC served as control group. The medium in the 6-well plate was discarded after 36 h, and the cells were washed with PBS for three times, followed by infection with 1 TCID 50 /mL NDRV for 1 h. The virus solution was then discarded, and the cells were washed with PBS twice, and 2 mL low-serum medium (DMEM with 2% fetal bovine serum) was added into each well. At 12, 24, 36, and 48 hours post-infection (hpi), culture supernatants were collected for RNA and viral DNA extraction. RNA extraction and reverse transcription were conducted as described above. Viral DNA was extracted using viral DNA kits (Omega, CA, USA) according to the manufacturer's instructions. Statistical analysis All data were expressed as mean ± SE of three independent experiments. Significance was determined with the Mann-Whitney U tests using SPSS software version 17.0 (SPSS Inc., Chicago, IL, USA). P values less than 0.05 were considered indicative of statistical significance. Cloning, structure, and phylogenetic analysis of duHMGB1 Two pairs of primers were designed with reference to NCBI's duck HMGB1 predicted sequence (accession number, XM_027469869), and 892 bp and 710 bp sequences were obtained, respectively, to obtain HMGB1 intact CDs sequence and partial 5′ and 3′ non-coding region sequences. We uploaded the acquired sequence to GenBank (Accession Number, MK855081). The functional domain of HMGB1 was predicted by SMART software. Like HMGB1 in mammals, duck HMGB1 has two functional domains: BoxA and BoxB ( Figure 1A). The phylogenetic tree was constructed with full-length HMGB1 protein and indicated three major branches: mammals, fish, and birds. DuHMGB1 was branched with birds and showed higher evolutionary relationship than with mammals and fish ( Figure 1B). The alignment of multiple sequences generated by ClustalW2 showed that the duHMGB1 displayed high sequence identity with HMGB1 of chicken (99%), human (89%), or mouse (89%), suggesting that HMGB1 is highly conserved across species ( Figure 1C). Tissue distribution of HMGB1 in healthy ducks QRT-PCR detected the expression level of duHMGB1 in 21 tissues. Figure 2 shows that duHMGB1 was expressed in all tested tissues especially in the spleen, trachea, and esophagus; the highest expression was found in the lung. However, expression was weak in the brain, cerebellum, skin, muscle, and muscle stomach. The results showed that HMGB1 was expressed in more than 20 tissues, indicating that this factor can play a role in multiple tissues. duHMGB1 is expressed in the nucleus, but transfers to the cytoplasm upon LPS stimulation The results of indirect immunofluorescence and western blot analyses demonstrated that recombinant duHMGB1 plasmid was expressed in DEF cells (Figure 3). Indirect immunofluorescence showed that duHMGB1 was mostly expressed in the nucleus after the plasmid was transfected into cells for 24 h ( Figure 3B). The expression of duHMGB1 obviously increased in the cytoplasm after LPS stimulation of DEF cells at 24 h versus the control group (Figure 4). duHMGB1 is modestly involved in the process of apoptosis Our results showed that duHMGB1 overexpression had no effect on apoptosis in DEF cells ( Figure 5A, B) shows that overexpression of duHMGB1 promoted apoptosis of DEF cells when induced with LPS versus the control group; this pro-apoptotic effect is modest. 
The apoptosis rate induced by LPS in the experimental group was 1.2fold higher than in the control group (P < 0.05). We also found that knocking-down of duHMGB1 gene expression in DEF cells was achieved using siRNA interference (Additional file 4A) without altering the apoptosis rate after 24 h of culture (Additional file 4B). These results indicate that duHMGB1 is modestly involved in apoptosis. duHMGB1 is involved in innate immunity To investigate the role of duHMGB1 in duck innate immunity, the pcDNA3.0(+)-duHMGB1-Flag or empty vector were transfected into DEF cells. The changes in mRNA expression of five pattern recognition receptors (TLR2, TLR3, TLR4, RIG-I, and MDA5), four proinflammatory cytokines (IL-1β, IL-6, IL-8, and TNF-α), three interferons (IFN-α, IFN-β, and IFN-γ), and ISGs (PKR, OAS, and Mx) were detected by qRT-PCR. Figure 6 shows that expression of all genes was mostly downregulated until up to 36 hpt and then upregulated at 48 hpt for PRRs such as TLR2, TLR4, TLR3, RIG-I and MDA5, for pro-inflammatory cytokines such as with duHMGB1 plasmid or empty plasmid (control group). Apoptosis was analyzed by flow cytometry using PI (y axis) and FITC-conjugated annexin V (x axis) after additional 48 h of culture. B Effect of duHGMB1 overexpression on apoptosis after LPS stimulation. DEF cells were transfected with duHMGB1 plasmid or empty plasmid (control group), After 24 h, 500 ng/mL LPS was added and the culture was continued for 24 h. The cells were analyzed by flow cytometry for PI (y axis) and FITC-conjugated annexin V (x axis). The total percentages of PI − annexin V + cells (Q3) and PI + annexin V + cells (Q2) indicate the apoptosis rate. I, II, IV and V are from a single experiment, which was representative of three separately performed experiments. The bar graphs (III and VI) mean value ± SE of three experiments. Mann-Whitney U test was performed to evaluate the differences. *P < 0.05. Figure 6 Overexpression of duHMGB1 induces gene expression of pattern recognition receptors, pro-inflammatory cytokines and anti-viral molecules in DEF cells. The experimental group was DEF cells transfected with duHMGB1, and the control group was DEF cells transfected with empty vector. Cells were collected at 24, 36, 48 and 60 hpt analyzing inducible gene expressions using qRT-PCR. Fold-changes in gene expression were calculated using the 2 −ΔΔCT method with GAPDH serving as a normalization gene and mean control values as baseline reference. Data are represented as the mean value ± SE of three experiments. Mann-Whitney U test was performed to evaluate the differences. *P < 0.05; **P < 0.01; ***P < 0.001. IL-1β and TNF-α, interferons such as IFN-α and IFN-β, and anti-viral molecules (OAS, PKR and Mx). The IL-6 response was significantly induced after 60 hpt. The pcDNA3.0(+)-duHMGB1-Flag plasmid and reporter plasmids were co-transfected into DEF cells for a luciferase reporter assay to further demonstrate that duHMGB1 is involved in the signaling pathway of IFN-β in DEF cells. Figure 7 shows that duHMGB1 significantly activated IFN-β and IRF-7 luciferase activities versus empty vectors (13.3-fold at 48 hpt, P < 0.001; 5.6fold at 36 hpt, P < 0.001). Overexpression of duHMGB1 in DEF cells had no significant effect on the NF-κB promoter activity (data not shown). duHMGB1 has broad-spectrum anti-viral activity The significant changes in the mRNA expression levels of IFN-α, β, γ, and ISGs after overexpression of duHMGB1 suggest that duHMGB1 has good antiviral effects at later stages. 
Cells transfected with pcDNA3.0(+)-duHMGB1-Flag or empty vector were infected with NDRV, DPV, or DTMUV. The changes in RNA or DNA expression of the three viruses were measured by qRT-PCR to confirm the antiviral function of duHMGB1. Figure 8 shows that the RNA expression of NDRV was decreased by 16.4-fold (P < 0.001) at 24 hpi versus the control group. By contrast, knockingdown duHMGB1 using siRNA interference showed increase of NDRV replication. In addition, duHMGB1 displayed the strongest anti-virus infection ability against DTMUV at 24 hpi among the four scheduled time points. Versus the control group, the RNA expression of virus decreased 8.2-fold (P < 0.01). Figure 8 shows that the DNA expression of DPV was down-regulated by 2.3-fold (P < 0.001) versus the control group at 36 hpi. In summary, HMGB1 displayed antiviral effects on a single-stranded RNA virus (DTMUV), double-stranded segmental RNA virus (NDRV), and DNA virus (DPV); thus, HMGB1 possesses broad-spectrum antiviral function. Figure 7 Overexpression of duHMGB1 activates the IFN-I signaling pathway. A dual luciferase reporter gene assay was used to study the IFN-I signaling pathway. pcDNA3.0(+)-duHMGB1-Flag and empty vector (control group) plasmids (500 ng/well) were co-transfected with reporter plasmids (100 ng/well) (A) pGL3-IRF7; (B) PGL3-IFN-β with pRL-TK (normalization) (50 ng/well). After 36 hpt, cells were harvested, and luciferase activity was measured. Relative IRF-7-, or IFN-β-reporter activation was calculated as fold-change in normalized Firefly luciferase activity with reference to mean control values set to 1. Data were means from three independent experiments and each experiment was analyzed in triplicate. Mann-Whitney U test was performed to evaluate the differences. *P < 0.05; **P < 0.01; ***P < 0.001. were collected for detecting the viral titers at 12, 24, 36 and 48 hpi by RT-qPCR. Viral copy number was expressed as copy number (log 10 ) per µL RNA or DNA related to the virus. Mann-Whitney U test was performed to evaluate the differences. *P < 0.05; **P < 0.01; ***P < 0.001. Figure 9 duHMGB1 over-expression in DEF cells modulates gene expression pattern of pattern recognition receptors, cytokines and anti-viral molecules after NDRV infection. The experimental group was DEF cells transfected with duHMGB1, and the control group was DEF cells transfected with empty vector. After 24 h transfection, the cells were infected with NDRV at 10 TCID 50 /mL. Cells were collected at 24, 36, 48 and 60 hpi analyzing inducible gene expressions using qRT-PCR. Fold-changes in gene expression were calculated using the 2 −ΔΔCT method with GAPDH serving as a normalization gene and mean control values as baseline reference. Data are represented as the mean value ± SE of three experiments. The differences among the groups were evaluated by Nonparametric tests (Mann-Whitney U tests) using SPSS software version 17.0 (SPSS Inc., Chicago, IL, USA), *P < 0.05; **P < 0.01; ***P < 0.001. duHMGB1 impacts antiviral and innate immune responses after NDRV infection DEF cells were stimulated by NDRV at 24 h after overexpression of duHMGB1 to explore the change of antiviral and innate immune responses after NDRV infection. Figure 9 shows that the mRNA expression levels of RIG-I receptors, IFN-β, IFN-γ, and PKR associated with antiviral response were up-regulated both at 12 and 24 hpi versus the control group. The mRNA expression level of IFN-α was up-regulated by 4.8 times (P < 0.05) at 12 hpi versus the control group. 
These results suggest that duHMGB1 cooperates with RIG-I receptor to recognize NDRV and thus promote the expression of interferon and PKR. It has obvious antiviral effects at 12 and 24 hpi. To further verify this hypothesis, three interfering RNAs of duHMGB1 were designed: Figure 5A shows that Si-HMGB1-2 displayed the highest interference efficiency. Therefore, Si-HMGB1-2 was selected as the interfering RNA for subsequent experiments. Figure 10 shows that the mRNA expression levels of IFN-β and PKR were down-regulated at 12 and 24 hpi versus the control group. The expression levels of RIG-I and IFN-α were down-regulated at 24 hpi versus the control group-RIG-I was down-regulated 2.6-fold (P < 0.01). These results indicated that the expression pattern of the genes above in HMGB1-knockdown cells was roughly opposite of that in HMGB1-overexpressing cells during NDRV infection. Discussion HGMB1 is a highly conserved protein present everywhere from yeasts, bacteria, plants, invertebrates to mammals [25][26][27][28][29]. More specifically, HMGB1 has been demonstrated to be involved in immune responses to infection, injury, and inflammation in mammals [1]. We have cloned and sequenced duHMGB1 from the cherry valley duck. We found that duHMGB1 has the highest sequence identity with chicken HMGB1 (99%). However, the identity was also very high between duck and human. Moreover, duHMGB1 gene expression was found to be widely distributed in duck tissues. This is consistent with the widespread distribution of HMGB1 in different mammalian and chicken tissues. However, the content of HMGB1 in lymphoid tissues and testis of mammals is higher [30], the content of HMGB1 in ileum and bursa of fabricius is higher in chickens [31], and the content of HMGB1 is highest in lung tissues of ducks. Analysis of the sequence showed that duHMGB1 has two nuclear localization sequences like mammalian HMGB1 [32]. We observed that overexpressed recombinant flagged duHMGB1 after transfection of DEF cells localized mostly to the nucleus (as in mammals, [3]). However, duHMGB1 was released from the nucleus to cytoplasm as soon as 24 h post-stimulation with LPS. Extensive acetylation of HMGB1 upon activation by LPS may be a hypothetic mechanism since HMGB1 acetylation is induced by LPS in mammalian cells and since this acetylation is the signal to induce relocation of nuclear HMGB1 to cytoplasm [3]. The role of duHMGB1 in apoptosis was not clearly observed in DEF cells after overexpression or gene knocking-down using RNA interference, at variance with the situation observed in mammals [33]. However, after LPS stimulation, duHMGB1 overexpressing DEF cells had an increased apoptotic rate compared to empty vector transfected control cells. In mammals, HMGB1 undergoes a redox reaction in the extracellular environment to induce apoptosis through the mitochondrial pathway [34,35]. The reason, according to the literature, may be that LPS induces the transfer of HMGB1 from the nucleus, but also the release of the protein in the extracellular environment. The biological functions of HMGB1 are determined by the post-translational modifications of the protein (acetylation, etc.) in addition to its subcellular localization [36]. We may thus suspect that LPS is able to induce HMGB1 release from DEF cells in supernatant, as in mammals. Nevertheless, for DEF cells, it is a hypothesis that would need to be confirmed using Western blotting or ELISA. 
Our results show that overexpression of duHMGB1 in DEF cells induced a strong timely expression of TLR (TLR2, TLR4, TLR3) and PRRs (MDA5 and RIG-I) as well as interferons type I, anti-viral molecules (PKR, OAS, and Mx) and pro-inflammatory cytokines (IL-1β, IL-6, IL-8, and TNF-α). This indicates that HMGB1 can induce a clear pattern of gene expression linked to inflammatory and anti-viral innate immune responses in DEF cells. In addition, we demonstrated that duHMGB1 overexpression in DEF cells can activate the IFN-I signaling pathway, which is similar but not identical to mammals and chickens. HMGB1 in mammals can interact with TLRs and activate related signal transduction pathways to produce a range of cytokines [36]. Qu et al. reported that chicken HMGB1 is a significant inflammation factor in NDV infection. Chicken HMGB1 is involved in NDV-induced NF-κB activation and the inflammatory response, and promotes inflammatory cytokine production through the RAGR, TLR2, and TLR4 receptors [31]. Our results indicate that the expression of TLR4 and RIG-I were up-regulated after duHMGB1 overexpression. There may be molecular cooperative relationships between duHMGB1 and TLR4, duHMGB1 and RIG-I. However, the functional cooperation between them requires further research before firm conclusions are reached about an antiviral mechanism. Figure 10 Knocking-down duHMGB1 expression reduces or suppresses induction of some major innate immune and anti-viral gene expression after NDRV infection. The experimental group was DEF cells transfected with Si-duHMGB1, and the control group was DEF cells transfected with Si-NC. After 36 h transfection, the cells were infected with NDRV at 1 TCID 50 /mL. Cells were collected at 24, 36, 48 and 60 hpi for analyzing inducible gene expressions using qRT-PCR. Fold-changes in gene expression were calculated using the 2 −ΔΔCT method with GAPDH serving as a normalization gene and mean control values as baseline reference. Data are represented as the mean value ± SE of three experiments. The differences among the groups were evaluated by Nonparametric tests (Mann-Whitney U tests) using SPSS software version 17.0 (SPSS Inc., Chicago, IL, USA), *P < 0.05; **P < 0.01; ***P < 0.001.
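The relative-quantification workflow used throughout the figures above (2^-ΔΔCT fold changes normalised to GAPDH, with groups compared by Mann-Whitney U tests) can be sketched as follows; the Ct values below are illustrative placeholders, not data from this study.

```python
# Sketch of the 2^-ddCt relative-quantification and Mann-Whitney U workflow
# described in the figure legends above. All Ct values are invented examples.
import numpy as np
from scipy.stats import mannwhitneyu

def fold_change_ddct(ct_target, ct_gapdh, ct_target_ctrl, ct_gapdh_ctrl):
    """Return per-replicate 2^-ddCt fold changes versus the control group."""
    dct_treated = np.asarray(ct_target) - np.asarray(ct_gapdh)          # normalise to GAPDH
    dct_control = np.asarray(ct_target_ctrl) - np.asarray(ct_gapdh_ctrl)
    ddct = dct_treated - dct_control.mean()                             # reference = mean control
    return 2.0 ** (-ddct)

# Hypothetical triplicate Ct values for one gene at one time point.
treated = fold_change_ddct([22.1, 22.4, 21.9], [18.0, 18.2, 17.9],
                           [25.0, 24.8, 25.2], [18.1, 18.0, 18.2])
control = fold_change_ddct([25.0, 24.8, 25.2], [18.1, 18.0, 18.2],
                           [25.0, 24.8, 25.2], [18.1, 18.0, 18.2])

stat, p = mannwhitneyu(treated, control, alternative="two-sided")
print(f"mean fold change = {treated.mean():.1f}, Mann-Whitney U p = {p:.3f}")
```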
2020-02-20T09:18:13.030Z
2020-02-18T00:00:00.000
{ "year": 2020, "sha1": "57012c67a7ddb7da7d0fc067813ee4ee33504c09", "oa_license": "CCBY", "oa_url": "https://veterinaryresearch.biomedcentral.com/track/pdf/10.1186/s13567-020-00742-8", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "e23b00ba16dccf14d8b77056b6e9392ceb13f78a", "s2fieldsofstudy": [ "Biology", "Medicine" ], "extfieldsofstudy": [ "Medicine", "Biology" ] }
210640458
pes2o/s2orc
v3-fos-license
Natural cracking and the methods of its research when searching for oil and gas deposits (for example the southern areas of the Tyumen region) . Thanks to the complex approach to the study of natural cracking (use of modern remote and geochemical research) in the southern regions of Western Siberia (south of theTyumen region), large regional tectonic blocks and depression zones have been identified by space and geological-geophysical data. Their potential for detecting hydrocarbon deposits was evaluated, taking into account the geodynamic and fluid dynamic approaches. Introduction Natural crack zones and faults of various orders are inextricably linked with oil and gas saturation of reservoirs. It has been established that within the fields there are differentscale blocks and their structures delimiting -inter-block zones. In this regard, in recent years, geodynamic and fluid dynamic approaches based on the use of space and groundbased geophysical and geochemical information have been widely used to explain oil and gas accumulation processes [1]. The history of the issue, problem state, methods for solving it Deciphering satellite images in the visible and infrared ranges, analyzing maps by the gravitational field Δg, magnetic field ΔТ and structural maps by the roof of the pre-Jurassic base, the southern part of Western Siberia (south of the Tyumen region) can be conditionally divided into large tectonic blocks -Schuchinsk, Karabash, Miass, Tobolsk, Alym, Severo-Alym, Zaozern, Alymsko-Kalchisk, Nagornensk, West Nagornensk, Abalaksk, East Abalaksk, Mikhailovsk, East Mikhailovsk. Thus, in the regional plan, the southern part of the Tyumen region has a fault-block structure. Along the depression zones there are signs of horizontal displacement of large tectonic blocks. Depression zones have a northwest and northeast direction and extend quite a long distance -from a few hundred to the first thousand kilometers. According to seismic data, depressions of an isometric or ellipsoidal shape are noted at the intersections of the depression zones. Over the past 25 years, it has been shown that large domed structures, to which hydrocarbon deposits are associated, are, as a rule, also "broken up" by tectonic disturbances in the northwest and northeast directions. Moreover, the anticlinal structures themselves and the surrounding area are complicated by small overlap faults, grabens and horsts. Thus, only at a different hierarchical level, the inheritance of crack systems from large tectonic blocks and depression zones to smaller, local ones is noted. On the numerous structural plans of the roof of the pre-Jurassic foundation, systems of northwestern and northeastern tectonic disturbances are clearly traced everywhere in many territories. Therefore, it can be stated with confidence that these crack systems were laid back in the Paleozoic time and are "operating" to the present moment. In addition to the above-mentioned system of tectonic disturbances, the meridionallatitudinal lineament system was recorded on the structural plans of the Upper Jurassic, Cretaceous, Paleogene-Neogene and Quaternary sediments, up to the very day surface, which subsequently found its reflection on temporary seismic sections. Probably, this system arose in the Late Cretaceous period of tectonic activation that occurred in Western Siberia. Numerous deposits of gas, gas condensate and oil are confined and controlled by this particular system. 
At the regional level, the meridional-latitudinal crack system is reflected in the Mendeleev depression zone, which belongs to the Ural-Kazakh marginal deflection. Consider, for example, the intersection of two regional depression zones: the Greater-Uvat (north-eastern direction) and the Abalak-Malinov (north-western direction). At the junction of these two depressions lies an elongated, north-west-trending depression called the Abalak-Uvat basin. Following the discovery of large and giant deposits in the Shirotniy Priobye (Samotlor, Pokachevskoye, Fedorovskoye, Surgutskoye, Pravdinskoye, Salymskoye, Priobskoye, Krasnoleninskoye, Zapadno-Talinskoye, etc.), the southern regions of the Tyumen region long remained of secondary interest. However, in the 1980s-90s A.L. Klopov [2][3][4][5][6] conducted oil-prospecting research in the south of the Tyumen region using the geo-indication method, which made it possible to create maps of oil-promising cosmophoto anomalies. The idea behind the method is that hydrocarbons seeping from deposits into the overlying sedimentary cover saturate the upper part of the soil and the vegetation; as a result, these change their spectral brightness coefficient and appear as a different phototone in cosmophotographic images. On the northeastern, eastern, and southern sides of the Abalak-Uvat basin, extensive promising cosmophoto anomalies (CFA), presumably representing oil-bearing lands, are distinguished. The southern CFA is spatially located in the northern part of the large Mikhailovsk tectonic block. Earlier, the Mikhailovskaya 1 exploration well was drilled within this block, and the Mikhailovskaya 2 exploration well within the Abalak-Malinovsk depression, closer to its southwestern side. When the supposedly promising intervals were tested, no gas or oil inflows were obtained. As geo-indication studies showed, these wells had been drilled outside the prospective CFA, in a phototone corresponding to unpromising lands. Later, at the beginning of the XXI century, the method of qualitative and quantitative analysis of space materials using a reference classification for predicting oil-prospective areas [7][8][9] identified, in the northern part of the large Mikhailovsk tectonic block as well as within the Abalak-Malinov depression zone, several small, medium and large anomalies whose set of features corresponded to the Taylakovsky (east Khanty-Mansi Autonomous Okrug), Yakkun-Yakhsky (south Khanty-Mansi Autonomous Okrug) and Polunyakhsky (south Khanty-Mansi Autonomous Okrug) oil fields. As a rule, the results of remote-sensing research are subject to ground verification. Staff of the West Siberian branch of the IPGG SB RAS therefore carried out ground-based integrated geochemical research in the northern part of the Mikhailovsk tectonic block. Their goals were: to confirm the filtration mechanism by which geochemical anomalies form in zones of tectonic disturbance; to confirm the promising cosmophoto anomalies identified by the geo-indication method; to confirm the oil-promising areas identified by the qualitative and quantitative analysis of space materials using the reference classification; and to identify the most promising areas for the search for hydrocarbon deposits. Before the ground-based geochemical work, cosmogeological research was carried out. 
They are aimed at identifying and studying structures in the upper part of the sedimentary cover that are promising for the migration and accumulation of hydrocarbons. Cosmogeological research methods can be conditionally divided into cartographic, structural-geomorphological and lineament methods. Cartographic methods are associated with constructing special maps and geodynamic schemes that make it possible to delineate structures favorable for the formation of oil and gas fields. Structural-geomorphological methods are effectively used in the direct search for hydrocarbon deposits. Lineament methods are based on the analysis of dislocations and deformations of the upper part of the sedimentary cover and have recently been successfully adapted to the search for, exploration and exploitation of oil and gas deposits. The results of cosmogeological research at different scales make it possible to trace tectonogenic objects that appear at the present-day surface as various structural and morphological features of the study territory [10][11][12][13][14]. The methodology for processing and interpreting cosmogeological data consists of three parts: processing and interpretation of satellite imagery in various ranges (visible, infrared, etc.); geological interpretation of seismic, gravimagnetic, thermal and other geological and geophysical (GIS, petrophysics, etc.) data; and integrated geological interpretation of Earth remote sensing materials. Using this methodology, systems of geodynamically stressed zones (GDSZ) and tectonic blocks were identified. GDSZ are weakened linear sections of the earth's surface; conventionally, they can be regarded as vertical channels, in the form of narrow parallelepipeds, for the transfer of deep fluids. Projections of the GDSZ from the earth's surface down into the section, once confirmed on seismic time sections, are treated as faults of the sedimentary cover. One of the oil-prospecting attributes in the analysis of a study area is the presence of tectonically weakened places where GDSZ cross (junction nodes); it is in such places that local geophysical, geochemical, hydrogeological and other anomalies are recorded. This approach is the main and most effective tool for predicting and diagnosing the oil and gas potential of the subsoil of the study territory. The final stage of the cosmogeological methodology is the construction of a fault-block model of the study area. Results The result of cosmogeological research in the Mikhailovsk area was the identification of three GDSZ systems: 6 of north-west, 5 of north-east and 2 of meridional direction. By comparing the results of the qualitative and quantitative analysis of thermal satellite images with the GDSZ systems, it was found that 2 large and 2 small promising anomalies within the Mikhailovsky tectonic block are confined to nodes of geodynamically stressed zones. In addition, 2 large promising anomalies within the Abalak-Malinov depression zone are located on its northeastern flank and also coincide with GDSZ nodes. The results of the ground-based integrated geochemical research (verification) in the Mikhailovsk area showed that: 1. Anomalous concentrations of mercury emanations in ground samples from a depth of 2 meters gravitate, on the one hand, to the GDSZ, forming elongated linear anomalies, and on the other hand, to the GDSZ nodes. 
Moreover, the linear anomalies are confined to the structural nose of the Mikhailovsky anticline (within the tectonic block), while the anomalies at the GDSZ nodes are located on the southwestern flank and in the middle part (on the saddle between the Iksky deflection and the Ivanovo depression) of the Abalak-Mikhailovsk depression zone. 2. The oil currently extracted consists, conditionally, of 70% alkanes (methane homologs) and 30% arenes (benzene homologs). The total concentrations of vaporous alkanes (hexane, heptane, octane, nonane and decane) and arenes (benzene, toluene, xylenes) in ground samples from a depth of 2 meters show a somewhat different distribution from that of the mercury emanations. On the one hand, chains of anomalies of these hydrocarbon components gravitate toward the north-west-trending GDSZ, including their nodes; on the other hand, the most contrasting anomalies are noted on the flanks of the Abalak-Mikhailovsk depression zone, as well as in the northern part of the saddle between the Iksky deflection and the Ivanovo depression. Within the Mikhailovsk anticlinal structure, no anomalies of alkanes or arenes were recorded, which makes it clear why the Mikhailovskaya 1 prospecting well, drilled on the dome of the structure, proved unproductive. 3. The distribution of the activity parameter of hydrocarbon-oxidizing bacteria (HOB), which are "fed" by hydrocarbon emanations, practically repeats the areal location of the alkane and arene anomalies. This confirms not only the through-going ("open", fluid-conducting) nature of the identified tectonic disturbances, but also the long-lasting vertical migration of hydrocarbon components from a probable deposit to the present-day surface. 4. Spatial coincidence of anomalies in alkanes, arenes and HOB activity is, as a rule, a reliable sign for the detection of hydrocarbon deposits. 5. The results of 2D seismic surveys carried out later confirmed the correct orientation of the identified GDSZ and the presence of hydrocarbon traps in the section beneath the recorded geochemical anomalies, in the Triassic deposits (within the Mikhailovsk tectonic block) and in the Lower Cretaceous and Upper Jurassic deposits (within the Abalak-Malinov depression zone). 6. Geochemical research carried out before, during and after the seismic surveys in the Mikhailovsk area revealed that, under seismic impact on the geological environment, part of the GDSZ (tectonic disturbances) open up (become fluid-conducting), while the other part is "sealed", i.e. remains impermeable. Conclusion Thus, research on the identification of natural cracking at various hierarchical levels showed that in the northern part of the Mikhailovsk tectonic block the deposits of the pre-Jurassic basement are promising, while within the Abalak-Malinov depression zone the deposits of the sedimentary cover are promising.
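Purely as an illustration of the node-based prospecting logic described above (and not the authors' software), the junction nodes of mapped GDSZ traces could be located and ranked against geochemical anomalies along the following lines; all coordinates and anomaly values are hypothetical, and the shapely geometry library is assumed.

```python
# Toy illustration only (not the authors' workflow): locate junction nodes of
# mapped GDSZ traces and rank them by nearby geochemical anomalies.
from itertools import combinations
from shapely.geometry import LineString, Point

# Hypothetical GDSZ traces (map coordinates in km) and anomaly sample points.
gdsz = [LineString([(0, 0), (10, 10)]),    # NE-trending zone
        LineString([(0, 8), (10, -2)]),    # NW-trending zone
        LineString([(6, -5), (6, 12)])]    # meridional zone
anomalies = [(Point(4.2, 3.9), 3.5),       # (location, anomaly index, e.g. mercury)
             (Point(6.1, 2.2), 1.2)]

nodes = []
for a, b in combinations(gdsz, 2):
    x = a.intersection(b)
    if isinstance(x, Point):               # crossing traces yield a junction node
        nodes.append(x)

# Score each node by the summed anomaly intensity within a 1 km radius.
for node in nodes:
    score = sum(v for p, v in anomalies if node.distance(p) <= 1.0)
    print(f"node at ({node.x:.1f}, {node.y:.1f}) km: anomaly score {score:.1f}")
```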
2019-11-07T14:28:02.026Z
2020-01-01T00:00:00.000
{ "year": 2020, "sha1": "731c003b267ea0e8a6bdfe612b6f71d5c35c70c8", "oa_license": null, "oa_url": "https://doi.org/10.33764/2618-981x-2019-2-2-254-260", "oa_status": "GOLD", "pdf_src": "Adhoc", "pdf_hash": "cd4705b2b498f2f89ac15f02460c13f296f9e8b0", "s2fieldsofstudy": [ "Geology" ], "extfieldsofstudy": [ "Geology" ] }
11018394
pes2o/s2orc
v3-fos-license
Prevalence and Molecular Analysis of Occult Hepatitis B Virus Infection Isolated in a Sample of Cryptogenic Cirrhosis Patients in Iran OBJECTIVES The aims of this study are to investigate the prevalence of occult hepatitis B virus infection among patients with cryptogenic cirrhosis and to analyze the relationship between surface protein variability and occult hepatitis B virus infection, which may be related to the pathogenesis of occult hepatitis B virus infection in cryptogenic cirrhosis. Occult hepatitis B virus infection is a well-recognized clinical entity characterized by the detection of hepatitis B virus DNA in serum and/or liver in the absence of detectable hepatitis B virus surface antigen, with or without any serological markers of a past infection. METHODS Sera from patients with cryptogenic chronic liver disease were tested for hepatitis B virus DNA using both real-time and nested PCR. In the samples in which hepatitis B virus DNA was detected, the surface gene was analyzed for mutations. RESULTS Hepatitis B virus DNA was detected in 38% of patients, all of whom had a viral load below 10,000 copies/mL. All hepatitis B virus isolates belonged to genotype D. There were no significant associations between occult hepatitis B virus infection status and age, gender, ALT/AST levels, viral load or serologic markers of previous hepatitis B virus infection. There were 14 mutations found in 5 patients; 6 were in the major hydrophilic region, of which 4 were Y134F, mapping to the "a" determinant region. All patients who acquired Y134F also carried S207R (within an HLA-A2-restricted CTL epitope). CONCLUSION Hepatitis B virus surface antigen variants may arise as a result of natural selection to evade the immune surveillance of the infected host, and subsequently may go undetected by conventional hepatitis B virus surface antigen screening tests. Etiological diagnosis of cryptogenic cirrhosis is significantly underestimated with current serology testing methods alone. Introduction Cryptogenic cirrhosis is a diagnosis made after excluding identifiable causes, including viral hepatitis, autoimmune hepatitis, and metabolic liver diseases [3][4]. Occult HBV infection (OBI) is the designation given to patients negative for HBV surface antigen (HBsAg) who have PCR-detectable HBV DNA. It is classified into seropositive and seronegative OBI depending on the presence of HBV core (anti-HBc) and/or HBV surface (anti-HBs) antibodies [5][13][14][15]. The reason for the lack of circulating HBsAg in HBV DNA-positive patients is unclear. Rearrangement in the HBV genome has been one of the proposed mechanisms; however, studies on HBV variability have so far generated conflicting results. In HBV DNA sequences obtained from serum samples of HBsAg-seronegative carriers, HBV mutants with amino acid substitutions within the common "a" determinant region of HBsAg have been identified [24][25]. Iran has an intermediate-to-low endemicity for HBV infection [26][27][28], and the prevalence of HBV-related cirrhosis is reported to be 51% to 56% [29]. The objectives of this study were to investigate the prevalence and clinical importance of OBI in cryptogenic cirrhosis and to analyze the association of surface protein variability with OBI, which might be related to the pathogenesis of OBI in cryptogenic cirrhosis. 
Methods This survey was a cross sectional study on selected patients from referral cases of cryptogenic cirrhosis (whith negative serologic test for HBsAg) to Tehran Hepatitis Network (THC) during the year 2010.A convenient sample was used and all available cases were included in the study.The diagnosis of cryptogenic cirrhosis was based on liver biopsies while in some patients who had contraindication, the diagnosis was made by conventional clinical, biochemical, imaging, and endoscopic criteria after all known identifiable causes were excluded by relevant investigations.All patients gave written informed consent to participate in the study prepared by the Tehran Hepatitis Network.All patients were negative for antibodies against hepatitis C (HCV), hepatitis D and human immunodeficiency virus.Patients who tested positive for HBV DNA by PCR and negative for HBsAg as well as HCV RNA and other factors implicated in the cause of chronic liver disease were labeled as OBI, as per the diagnostic criteria described below.A serum sample was drawn from each subject and stored at -80°C.Serological markers for HBV (HBsAg, anti-HBc and anti-HBs) were each checked using two different ELISA kits; Siemens (Germany), and Acon (San Diego, USA). HBV DNA was extracted from stored serum using the QIAamp DNA Blood Mini Kit (Qiagen, Hilden, Germany) following the manufacturer' s instructions.In brief, 20 µL of protease was added to 200 µL of serum in a 1.5 mL tube.Then, 200 µL of Al buffer was added, vortexed and incubated for 10 minutes at 56ºC.For DNA precipitation, 200 µL of ethanol was added to the mixture and centrifuged for 1 minute.Components were transferred to a collection tube containing a filter tube.Trapped DNA was washed in two steps by AW1 and AW2 buffers to eliminate impurities, with centrifugation after each step.After centrifugation, 50 µL of elution buffer was added and the eluted DNA was stored at 20ºC.HBV DNA was examined in all samples by real-time PCR (Fast-Track Diagnostic, Luxembourg).Positive samples were selected for standard PCR reactions using previous methodology, 30 with the exception of using 1.5 U Taq DNA polymerase HotStarTaq PCR (Qiagen) in each reaction mixture to increase the fidelity of the amplification process.A 5-µL aliquot of the PCR product was analyzed by electrophoresis in 1% agarose gel stained by ethidium bromide and visualized under ultraviolet light.The diagnosis of OBI was made when either real-time or nested PCR showed HBV DNA in the absence of detectable HBsAg, regardless of being seropositive or negative either for anti-HBc or anti-HBs or both markers. Direct sequencing of complete genomes was carried out (ABI-3130XL DNA Sequencer, PerkinElmer, Foster City, CA, USA) using 0.5 µL of internal primers. 
The comparative analysis was done using Chromas and BioEdit package software, version 7.0.5.3. Surface gene amino acid/nucleotide variations were compared with reference sequences from the GenBank database by aligning the corresponding sequences so as to be representative of the most frequent nucleotide found at each position among different Caucasian isolates [32][33]. Thus, the consensus nucleotide sequences and derived amino acid sequences from the samples were compared with reference sequences of HBV genotype D (the most similar being AB033559, Okamoto, 1987), as well as with Iranian sequences obtained from GenBank and from our laboratory data. Any amino acid differences from the former were considered "variants" (host HLA-determined), while any amino acid differences from the latter (Iranian database sequences) were considered "mutations." Using contingency tables, associations between categorical variables were analyzed with χ2 and Fisher exact tests, and mean values were compared by t-test. A p-value of <0.05 was considered statistically significant. Results The mean age of patients was 49 ± 14 years. Of the total 29 patients, there were 24 (83%) males and 5 (17%) females. All were negative for HBsAg, but 7 (24%) and 9 (31%) were positive for anti-HBc and anti-HBs, respectively. Also, 2 (7%) had anti-HBc alone as a past marker of HBV infection, 4 (14%) had anti-HBs alone, 5 (17%) had both antibodies and 18 (62%) had neither antibody (results not shown). The mean ALT and AST levels were 81 ± 91 and 81 ± 92 (unit/mL), respectively (Table 1). Of the 29 subjects, 11 (38%) were OBI positive. All the samples that were found to be positive by real-time PCR were also found to be positive by nested PCR. In OBI-positive cases, the HBV DNA levels ranged between 22 and 7138 copies/mL (results not shown). Among the 11 OBI-positive patients, the mean age was 48 ± 15 years, 82% were males and 18% were females. The numbers of OBI-positive patients with anti-HBc alone, anti-HBs alone, both antibodies, and neither antibody were 1 (9%), 1 (9%), 2 (18%) and 7 (64%), respectively (results not shown). The corresponding numbers of OBI-negative patients with anti-HBc alone, anti-HBs alone, both antibodies, and neither antibody were 1 (6%), 3 (17%), 3 (17%) and 11 (61%), respectively (results not shown). No statistically significant relationship was found between OBI-positive and OBI-negative patients in terms of demographics, serological status, or ALT and AST levels (Table 1). HBV surface gene and protein variability were studied by amplification and sequencing. All patients were infected with genotype D, subgenotype D1 and the ayw2 subtype (results not shown). Table 2 shows that 37 mutations occurred at 16 nucleotide positions, of which 18 (49%) were non-synonymous (amino acid altering) and 19 (51%) were synonymous (no amino acid change). At the amino acid level, 14 substitutions occurred in five patients; six were in the MHR, of which four were Y134F, mapping to the "a" determinant region. Six mutations occurred in two other locations: one at Q30K (isolate 14) and five in residues 207 and 208 (Table 2). Both of these domains are known to lie within HLA-A2-restricted CTL epitopes (29-30). Interestingly, all patients who acquired Y134F in the "a" determinant region also carried S207R (Table 2). Three mutations emerged in the inter-epitopic region. Furthermore, it was possible to gauge the level of surface protein evolution between isolates by measuring the ratio of synonymous to non-synonymous nucleotide substitutions. The mean ratio for all 
sequences was 1.05, based on the number of mutations per site. Discussion OBI has been reported in association with a wide range of clinical manifestations, from asymptomatic carriage to HCC. It has received increasing attention in recent years because it appears to accelerate the progression of liver fibrosis and cirrhosis, ultimately leading to HCC [8][9][10]. In some clinical settings, OBI is unexpectedly frequent. In this study, OBI was diagnosed in 38% of patients with cryptogenic cirrhosis who had been found negative on several occasions for HBsAg [13][14][15]. A number of explanations for the persistence of HBV DNA in HBsAg-negative patients have been proposed, including HBV DNA in low copy numbers and a low quantity of HBsAg in serum (just enough for viral assembly but below the sensitivity of assays). In the present study, HBV DNA levels in OBI-positive patients were found to be quite low, as has been observed in other studies [5,34]. In this study, 5 out of 11 OBI-positive patients had mutations within the major hydrophilic region of the surface protein, encompassing amino acid residues 100-160 and including the "a" determinant region. Most of the amino acid changes observed in the present study were clustered in two regions: the "a" determinant region and residues 204-215 of the small surface protein. These residues have been shown to constitute host B-cell and CTL epitopes, respectively [35,36]. The findings of this study were in accordance with those of other studies [23,24]. The presence of HBsAg mutants has been reported in some patients with chronic HBV infection who have not received either active immunization or HBIG, suggesting that pressure from the host immune system alone is able to drive the selection of HBV mutants [19,37,38]. If this is the case, these persistent mutants may be responsible for liver injury [40,41]. This possibility is strengthened by observations that OBI can be detected in individuals after spontaneous HBsAg seroclearance, and OBI in these cases appears to represent leftover virus in the liver after HBsAg seroconversion [42]. In this scenario, the viral genetic changes may not be the cause of OBI but may instead be characteristic of less fit viral sequences that are not as quickly cleared from the liver by the immune system. In this study, 6 out of 11 OBI-positive patients did not carry any surface protein mutations. Studies that have analyzed the full viral genome have reported different results [23,24,43]. Moreover, no relationship was found between the presence of OBI and demographic, biochemical (AST, ALT), or serologic (anti-HBs, anti-HBc) features. Thus, none of these parameters were useful for distinguishing OBI-positive from OBI-negative patients. 
Interestingly, the ratio between synonymous and non-synonymous nucleotide substitutions in OBI-positive patients was 1.05. This means that negative selection pressure had already been exerted on the surface protein (due to immune or functional constraints) over the course of a long-lasting chronic HBV infection. Compared with genotypes B and C from cirrhotic patients in GenBank (results not shown), the occurrence of so few substitutions in genotype D suggests either that considerable constraints exist against HBV variability in a particular genotype infecting a person of a particular background, or that genetic drift in Iranian genotype D is relatively slow. A definite conclusion would require a cohort study involving mutational analysis of multiple genotypes and stages of infection in chronically infected individuals ranging from inactive carriers to HCC cases. The present study had some limitations that must be recognized. First, cross-sectional studies are carried out either at a single point in time or over a short period; thus, associations identified in cross-sectional studies should not be interpreted as causal relationships. Second, the sample size was small. Third, the samples were selected conveniently rather than randomly. Therefore, the current findings may not represent the whole population of Iranian cryptogenic cirrhosis patients. Conclusion This study suggests that diagnosis of cryptogenic cirrhosis based on HBsAg testing alone may miss a substantial proportion of HBV infections, and that assaying for HBV DNA is therefore important in HBsAg-negative patients. As HBV-related cirrhosis carries a high risk of development of HCC, follow-up studies should be conducted on OBI-positive patients to assess the significance of OBI in the progression of liver disease, including HCC. Table footnote: amino acids are described by single-letter codes and numbered from the beginning of the surface protein; only positions at which changes occurred are shown, so the relative proportion of epitopic to non-epitopic areas is skewed in favor of regions where substitutions occurred. Table 1: Comparison of baseline characteristics of patients with and without OBI. Table 2: Amino acid mutations within the surface protein of patient groups, arranged by immune epitopes.
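The synonymous/non-synonymous tally used above to characterise selection pressure on the surface protein can be sketched as follows; the codons and substitutions are invented examples (echoing Y134F and S207R), and Biopython is assumed for translation.

```python
# Sketch: classify observed nucleotide substitutions in surface-gene codons as
# synonymous or non-synonymous and compute their ratio. Codons below are invented.
from Bio.Seq import Seq

# (reference codon, position within codon 0-2, observed base) -- hypothetical calls
substitutions = [("TAC", 1, "T"),   # Y -> F  (non-synonymous, cf. Y134F)
                 ("TCA", 2, "G"),   # S -> S  (synonymous)
                 ("AGC", 0, "C"),   # S -> R  (non-synonymous, cf. S207R)
                 ("CTG", 2, "A")]   # L -> L  (synonymous)

syn = nonsyn = 0
for ref_codon, pos, base in substitutions:
    mutant = ref_codon[:pos] + base + ref_codon[pos + 1:]
    if str(Seq(ref_codon).translate()) == str(Seq(mutant).translate()):
        syn += 1
    else:
        nonsyn += 1

print(f"synonymous = {syn}, non-synonymous = {nonsyn}, "
      f"syn/non-syn ratio = {syn / nonsyn:.2f}")
```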
2017-10-17T06:20:45.334Z
2014-03-01T00:00:00.000
{ "year": 2014, "sha1": "3f59428459c246feffbaa107649e85a82f99103f", "oa_license": "CCBYNC", "oa_url": "https://doi.org/10.5001/omj.2014.23", "oa_status": "GOLD", "pdf_src": "ScienceParseMerged", "pdf_hash": "30478a068353cd3ac41ed4306e4f3adfa588a659", "s2fieldsofstudy": [ "Medicine", "Biology" ], "extfieldsofstudy": [ "Medicine" ] }
214732382
pes2o/s2orc
v3-fos-license
An analysis of the impact of Newcastle disease vaccination and husbandry practice on smallholder chicken productivity in Uganda A number of studies have demonstrated the clear beneficial impact that vaccinating against Newcastle disease (ND) can have on reducing the frequency and severity of ND outbreaks. Here we go one step further and analyse the additional benefits in terms of improved production that result from vaccination. Data were collected from a cross sectional survey in Uganda of 593 chicken-rearing smallholders (for the purpose of this study this was defined as a farm with fewer than 75 chickens). Consenting participants were administered a detailed questionnaire covering a range of aspects of chicken production and management. These data were subsequently analysed in a generalised linear model framework with negative binomial error structure and the total offtake over the previous 12 months (chicken sales + chicken consumption + chickens gifted) was included as the dependent variable. Different measures of flock size were tested as independent variables and the model was also offered the district of the flock, ND vaccine adoption, use of poultry housing, provision of supplementary feed and use of dewormers as potential independent variables. We also developed an analogous model for the offtake of eggs (sale and consumption). The total size of the flock (counting chickens of all ages) was the measure of flock size that had the strongest association with offtake and was a significant but weak effect with an incidence rate ratio (IRR) of 1.011 (95 % Confidence intervals (CIs) = 1.007–1.015). ND vaccine adoption had a strong significant positive effect on offtake with an IRR of 1.571 (95 % CIs = 1.363–1.808). Use of a poultry house also had a significant effect (IRR = 1.365, 95 % CIs = 1.193–1.560). In the model of egg production, the number of hens was the demographic determinant with the lowest Akaike Information Criterion (AIC) (IRR = 1.094, 95 % CIs = 1.056–1.136) and ND vaccine adoption had a strong positive effect on egg offtake (IRR = 1.801, 95 % CIs = 1.343–2.412). Vaccinating against ND has a clear beneficial impact on the productivity of the flock, and the livelihoods of smallholder farmers. Introduction Smallholder poultry production is identified as a key stepping stone in the route out of poverty in developing countries on account of the rapid production cycles, low input requirements and the relative liquidity of poultry as an asset . Consequently, in developing countries, smallholder chicken production accounts for 70 % of chicken production ). However, outbreaks of disease and in particular virulent Newcastle disease (ND) which is often associated with high mortality (up to 100 %), place a considerable constraint on the productivity of chicken flocks (Aboe et al., 2006;Otim et al., 2007;Harrison and Alders, 2010). Inexpensive vaccines that are effective and easy to administer (Tu et al., 1998;Alders and Spradbrow, 2001) are available and have been shown to be effective at reducing mortality rates among infected flocks (Harrison and Alders, 2010;Alders, 2014;Bessell et al., 2017). Consequently, strategies are being developed to improve rates of adoption of ND vaccines but in spite of this, adoption rates and compliance (in terms of frequency of vaccination) are variable (Alders, 2014;Bessell et al., 2017;Campbell et al., 2018). 
ND vaccines have been shown to have a beneficial impact in terms of reducing disease incidence and enabling the production of larger flocks, albeit with larger flocks more likely to vaccinate (de Bruyn et al., 2017). However, further evidence quantifying the role of ND vaccination and other interventions in impacting flock productivity would provide key instruments for driving advocacy and understanding the benefits for adoption of ND vaccines both within countries with endemic virulent ND and from among international stakeholders. Small scale studies have addressed this in detail at fine scales (Jugessur et al., 2006;Njue et al., 2006;Henning et al., 2013), so here we propose to review changes in productivity resulting from ND vaccination where it has been promoted across large areas. We consider this over a 12 month period during which there will be peaks in demand for chicken meat (for example associated with religious festivals) and peaks in incidence of ND. A further component that has been identified as a key step in improving flock productivity are improvements in poultry husbandry (Henning et al., 2009;Rodríguez et al., 2011;FAO, 2014). This includes the use of a poultry house offering protection from predators and from escape (Ahlers et al., 2009;Desta and Wakeyo, 2013;FAO, 2014), supplementary feeding to improve the weight and fertility of the chickens (Ahlers et al., 2009;Henning et al., 2009;FAO, 2014) and dewormers to improve the growth and final weight of chickens (Phiri et al., 2007;Chota et al., 2010;Mwale and Masika, 2015;Bessell et al., 2018). Accordingly, if undertaken alongside ND vaccination, poultry husbandry improvements could lead to a substantial improvement in production and previous studies have shown positive economic returns on ND vaccination, supplementary feeding, housing and parasite control (Jugessur et al., 2006;Njue et al., 2006), but in certain instances there could be a net economic loss (Udo et al., 2006). Therefore, the objective of the study was to add further resolution to the relative contribution of ND vaccination and poultry husbandry in improving poultry productivity and to identify the relative contribution of each practice on impacting smallholder productivity. Survey background The study was designed to interview smallholder farmers that fall into two categories: 1.) those that adopt ND vaccines and 2.) those that do not adopt ND vaccines. An ND vaccine adopter was defined as any smallholder that answers "yes" to the question "Have you vaccinated your poultry against Newcastle disease in the past 12 months?". This category includes both smallholders that vaccinate frequently -3 or 4 times per year which is recommended to ensure protection (Alders et al., 2002) and those that vaccinate less frequently. Hence, we do further analysis breaking down the question "How frequently do you vaccinate your poultry against Newcastle Disease?". Participants were recruited through a single field survey with an aim of recruiting 50 % ND vaccine adopters and 50 % non-adopters. In order to efficiently identify ND vaccine adopters, the adopters were identified through interviews with agrovet store owners or known village vaccinators (village vaccinators (Bessell et al., 2017) are local individuals that are trained in the storage, preparation and administration of I-2 ND vaccine by eye drop (Alders et al., 2002)). 
Non-vaccinating households were selected based on convenience sampling from the village population assisted by locally recruited guides who knew the geography of the area. Wherever possible the enumerators maintained a gap of 500 m or 5 houses was between interviewees and the dispersal of interview locations was verified by mapping the interview locations. For the purposes of this study smallholders keeping at least one and no more than 75 chickens were enrolled. The ceiling of 75 was set to ensure that we interviewed smaller scale producers rather than semi-intensive producers operating formal broiler and layer systems. Small extensive and extensive scavenging systems have been classified as farmers rearing 1-5 and 5-50 adult chickens (FAO, 2014). In this study by setting a ceiling of 75 total chickens we are consistent with this classification. This ceiling also ensured that flock size did not become an overwhelming determinant of behaviour. Questionnaire The questionnaire was implemented on Android OS smartphones using the Open Data Kit (ODK) App (Hartung et al., 2010). It covers a number of aspects of chicken rearing: 1 Respondent eligibility 2 Basic informationtime, date, village, coordinates 3 Background details of the respondent 4 Breakdown of chicken flock 5 Breakdown of chickens gifted and purchased 6 Chicken health 7 Selling chickens 8 Chicken meat consumption 9 Gifting of chickens 10 Egg production, consumption and sales 11 Reasons for death of adult chickens 12 Reasons for death of chicks and growers (defined as juvenile chickens) 13 Uses of revenues from chicken sales 14 Mammalian species farmed The questionnaire was addressed to the person responsible for making decisions with respect to the chickens. Where it was not possible for one individual in the household to answer all questions, the input of others was sought. The questionnaire survey was coded using xlsForm and the survey is in supplementary information S1 and can be viewed at https://ee.kobotoolbox.org/preview/::942WfITH, note that questions are dependant, so the online survey only opens up after initial questions have responses). Sample size Sample sizes were estimated on a per district level as a population proportion based on 50 % sample proportion, 95 % confidence level and a precision of 10 %. Fitting to a population of 35,000 households gave a required sample size of 96 households in each district (Daniel, 1999) and was implemented as a minimum of 50 adopters and 50 nonadopters in each district. However, in practice it was possible to get nearer to 60 adopters and 60 non-adopters in each district further powering the study. Survey implementation The questionnaire was written in English and interviews were conducted in English or the local language that the respondents were most comfortable speaking. The interview was conducted in a conversational manner with respondents sometimes seeking animal health advice or asking questions to the enumerators. There was no formal translation to local language(s), instead enumerators were translating as necessary during the interviews and enumerators and respondents sometimes switched between languages during an interview as they felt appropriate or comfortable. Enumerators reviewed and made locally relevant changes to the questionnaires during a training where they pre-tested and practiced the questions until they were fully familiar with the interview. The interview was further pre-tested with selected local smallholders. 
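The per-district sample size quoted under "Sample size" above (96 households from a population of about 35,000, with a 50 % expected proportion, 95 % confidence and 10 % precision) can be reproduced approximately with the standard Cochran formula plus a finite-population correction; this is a sketch of that textbook calculation, not the authors' code.

```python
# Sketch of the sample-size calculation described above (Cochran's formula with
# a finite-population correction); not the authors' original calculation code.
import math

def sample_size(p=0.5, precision=0.10, z=1.96, population=None):
    n0 = z**2 * p * (1 - p) / precision**2           # infinite-population estimate
    if population is not None:                        # finite-population correction
        n0 = n0 / (1 + (n0 - 1) / population)
    return math.ceil(n0)

print(sample_size())                        # ~97 without the correction
print(sample_size(population=35_000))       # ~96 per district, as reported
```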
The study was implemented in September and October 2017. The research was carried out with the approval of the District Veterinary Offices of the study districts, and every district assigned a veterinary officer to work with the research team during data collection within their district. In accordance with the standards of Uganda, no formal ethical approval was required for this study. Study participants were informed of the purpose of the study and only participated if they agreed to do so. Smallholders were not offered or given any incentive for participation, and smallholders' participation did not impact in any way on future provision of or access to services. Study areas The study was implemented in five districts in Uganda (Fig. 1). The districts were selected as they have similar socio-economic, agro-ecological and ethnic characteristics and were districts where co-authors have been active in setting up ND vaccine distribution networks. Multiple districts were required to control for any district-level measures that were taken to control disease spread, and district was tested as a covariate in the statistical models. Equal numbers of ND vaccine adopters and non-adopters were targeted for recruitment in each study area. The study targeted 600 smallholders in total (50 % adopters and 50 % non-adopters). Modelling Productivity is analysed as a statistical model of the offtake of chickens and of eggs. Chicken offtake is defined as the number sold during the previous 12 months, the number gifted during the previous 12 months and 2 times the number consumed during the past 6 months (data were not collected on consumption over a 12 month period). By incorporating sales over a 12 month period we allow for annual fluctuations in market values that coincide with seasonal variations and market responses to religious festivals. Different generalised linear model (GLM) frameworks were tested, including binomial and Poisson error structures, and these were found to fit poorly, with large overdispersion. The most appropriate fit was with a negative binomial GLM implemented in R (R Core Team, 2017) using the glm.nb function from the MASS package (Venables and Ripley, 2002). The model was fitted in a number of stages: • The offtake is likely to be some function of flock size, or of the number of productive chickens in the flock. However, there are a number of ways that flock size could be measured, so we fit 5 separate models, one for each candidate measure, each of the general form offtake = a + b × (flock-size measure) + ε, where a is the intercept and ε is the error term. The Akaike Information Criterion (AIC) of the 5 models was compared and the model with the lowest AIC was selected and taken forward. • The model with the lowest AIC above was then fitted with the district of the flock. AICs of the models with and without district were compared and the model with the lowest AIC retained. • Four husbandry factors were tested in turn: use of ND vaccines, ownership of a poultry house for overnight housing, use of supplementary feeding and use of dewormers. Adopters of ND vaccines are defined as those that responded "yes" to the question "Have you vaccinated your poultry against Newcastle disease in the past 12 months?". The husbandry variables were included and the least statistically significant was excluded in turn until only statistically significant husbandry factors remained. • Interactions between the flock size variable(s) and the husbandry practices were tested and retained if they reduced AIC. The fit of the models was evaluated by inspecting the dispersion and plotting the model residuals. 
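The staged fitting just described was carried out in R with MASS::glm.nb; purely as an illustration, an analogous sketch in Python (statsmodels) on an invented data set might look like this. The variable names and simulated values are assumptions, not the study data.

```python
# Python analogue (sketch only) of the staged negative-binomial GLM comparison
# described above; the study itself used R's MASS::glm.nb. Data are simulated.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n = 200
df = pd.DataFrame({
    "total_chickens": rng.integers(1, 75, n),
    "district": rng.choice(["Alebtong", "Kole", "Soroti"], n),
    "nd_vaccine": rng.integers(0, 2, n),
    "poultry_house": rng.integers(0, 2, n),
})
mu = np.exp(0.5 + 0.01 * df.total_chickens + 0.45 * df.nd_vaccine + 0.3 * df.poultry_house)
df["offtake"] = rng.poisson(mu * rng.gamma(5, 1 / 5, n))   # over-dispersed counts
df["adults"] = (df.total_chickens * 0.6).astype(int)       # a second flock-size measure

# Step 1: choose the flock-size measure with the lowest AIC.
fits = {v: smf.negativebinomial(f"offtake ~ {v}", data=df).fit(disp=0)
        for v in ["total_chickens", "adults"]}
best = min(fits, key=lambda v: fits[v].aic)

# Steps 2-3: add district and husbandry practices, keeping terms that lower AIC.
final = smf.negativebinomial(
    f"offtake ~ {best} + C(district) + nd_vaccine + poultry_house", data=df).fit(disp=0)
print(np.exp(final.params))   # exponentiated coefficients = incidence rate ratios
```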
Analysis of diagnostic plots showed one data point that corresponded to a flock of 13 chickens that had an offtake of 134, this was subsequently removed. We had no a priori reason to assume that there was spatial structure to the data, but the variogram of the final model's residuals was inspected for evidence of spatial autocorrelation and there was none. A similar model was constructed to describe the offtake of eggs during the previous week. Here offtake was described by the sale and consumption of eggs combined. The model was constructed in the same way as the chicken offtake model, but there were no outliers that were identified and subsequently all data were retained. Chicken sale values were modelled based on the median, maximum and minimum sale values cited by respondents. We fit a triangular distribution using the triangle package in R (Carnell, 2019) to the values for each respondent and drew at least 2,000,000 samples in total from the realm of distributions, sampling from each distribution proportional to the size of that flock. Survey breakdown Rather than the target of 50 % ND vaccine adopters, 52.3 % of respondents were vaccinating against ND and the majority of vaccine adopters vaccinate three times per year (Table 1). More adopters than non-adopters used dewormers on their chickens, and owned a poultry house (Table 1). Most chickens were fed by scavenging alone (Table 1). Flock sizes were larger among adopters compared to non-adopters and this difference was consistent across study province or district (Fig. 2). Offtake was greater among the adopters compared to the nonadopters and sales accounted for the greatest proportion of offtake (Fig. 3). There was a greater proportion of deaths due to diseases compared to deaths due to predators among non-adopters, but not among adopters (Fig. 3). Offtake comprises home consumption (plus gifts were included here) and sales and these two metrics are broadly correlated (Fig. 4). However, a large number of respondents reported home consumption but no sales (n = 136), but few (n = 21) had sales but no home consumption. Economics Modelling the sale values cited by respondents, the median sale value was US $4.22 per chicken sold and there was quite a tight distribution around these sale values (Fig. 5). The median cost per egg sold was 9USc. The median vaccine cost was 3USc per dose per bird. Modelling The model with total chickens as the denominator had the lowest AIC, and AIC was further reduced by including district ( Table 2). The model with the lowest AIC included ND vaccine adoption and ownership of a poultry house (Table 2). Sensitivity analysis including just those households that vaccinate at least three times per year as adopters made no significant change to the model results presented in Table 2. Plotting the values of predictions extrapolated from the model shows the relative change in offtake with increasing flock size for flocks that have poultry housing and for those that do not (Fig. 6, Table 3). In the model of egg production, a model fitted with the number of hens as the demographic predictor has the lowest AIC, but including district did not reduce AIC. Subsequently, the model with ND vaccine adoption and poultry housing had the lowest AIC, but poultry housing was marginally non-significant and so was dropped from the final model (Table 4). It must be noted that 43.5 % of respondents reported no egg production during the reference time period comprising the previous week. 
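The sale-value simulation described above (fitting a triangular distribution to each respondent's minimum, median and maximum cited price and sampling in proportion to flock size, done in the study with the R triangle package) can be sketched in Python as follows; the respondent values below are invented.

```python
# Sketch of the sale-value simulation described above (the study used the R
# 'triangle' package); respondent prices and flock sizes here are invented.
import numpy as np

rng = np.random.default_rng(0)

# (min, mode ~ median cited, max) sale value in USD, and flock size, per respondent.
respondents = [((3.0, 4.2, 5.5), 12),
               ((2.5, 4.0, 6.0), 30),
               ((3.5, 4.5, 5.0), 8)]

total_birds = sum(flock for _, flock in respondents)
samples = []
for (lo, mode, hi), flock in respondents:
    n = int(round(2_000_000 * flock / total_birds))   # sample in proportion to flock size
    samples.append(rng.triangular(lo, mode, hi, size=n))

values = np.concatenate(samples)
print(f"median modelled sale value = ${np.median(values):.2f} per chicken")
```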
Discussion We have collected data and parameterised a model to describe the impacts of different husbandry practices on the level of offtake from smallholder chicken flocks in Uganda. Ideally, this type of study would require long term longitudinal monitoring of the study flocks to record the inputs, outputs, births and deaths, sales and consumption. However, a study of this nature represents a considerable logistical challenge on even a relatively small scale (Henning et al., 2008). Hence, in order to conduct these analyses over large geographical areas, we used a cross-sectional study that considered smallholders' activities over the previous year and fitted the model to account for the likely shortcomings in the data collection framework. Consequently, the study relies on the farmers' recollection of their activities over the previous year and could be prone to recall bias. Whilst it would be beneficial to use farmer record books, many smallholder farmers do not keep records of their activities, so record books cannot be used to give a representative sample of smallholders. One of the principal drivers of productivity will be the number of chickens in the flock and in particular the number of productive chickens. The number of new chickens entering the flock over a period of time will generally be a function of the number of productive hens during that time, a number which will change during the time period. However, due to the cross-sectional nature of the data collection in this study, we do not have a reliable estimate for the number of hen days, so we tested four different proxy variables, considering: 1) the total number of chickens, 2) only the number of adult chickens, 3) the adult chickens and growers separately and 4) the adults, growers and chicks separately. In this model, the best predictor was the total number of chickens, indicating that the wider flock dynamics needed to be included beyond just the number of adult chickens. However, flock size was a predictor of offtake but not a particularly strong predictor, and this is in part due to the flock size being recorded at the end of the period of offtake, so it does not account for events such as die-offs, or decisions to sell or consume productive chickens. That a static measure of the flock's size is not a good predictor of productivity indicates that there are a lot of other potential drivers of production that could be considered here. The differences in management practices are emphasised by Fig. 4, which shows there are a large number of flocks that can be very productive but rear chickens purely for home consumption or for gifts and do not sell chickens. This potentially produces a very different dynamic, with chickens likely consumed in small numbers (one or two chickens) at regular intervals, whilst with sales we might expect the smallholders to sell multiple chickens at irregular intervals. The model was run separately with the outcome as either sales or gifts + consumption, and the same predictors remained significant. However, due to the zero-inflation problem, the sale model was a poor fit and so is not presented. Table 1 Descriptive statistics to compare the adopters and the non-adopters. The percentages represent the percentage of respondents that are in that group (except for the "Overall" row).
ND vaccination was identified as a significant contributor to flock productivity, with an increase in flock chicken productivity of 57 % after other management practices that affect flock productivity had been taken into account, which is generally greater than increases in offtake observed elsewhere in Africa (Fisher, 2014). This compares to returns on investment of 3.36 for ND vaccination using the F strain by nose-drop administration and of 1.15 for supplementary feeding for 16 farms in Kenya (Njue et al., 2006). A similar small-scale study in Mauritius found comparable returns on investment of 4.2-6 from ND vaccination using live NDV4 vaccine by eye drop or drinking water (Jugessur et al., 2006). The corresponding impact on egg production was greater still, at 80 %; whilst the value of eggs is smaller than that of chickens, eggs represent an important source of regular income and protein (Sonaiya and Swan, 2004). In this study in Uganda it is conspicuous that the majority (70 %) of respondents were compliant with the recommended cycle of vaccinating 3-4 times per year (Alders et al., 2002) (Table 1); of those remaining, 21.3 % vaccinate twice per year. Accordingly, sensitivity analysis including only adopters that vaccinate at least three times per year made no significant difference to the predictors. This compliance with the vaccination regime may contribute to the magnitude of the impact of ND vaccination. It should be noted that the beneficial impact on productivity among ND vaccine adopters was not observed in similar studies that were conducted in Burkina Faso and India. In Burkina Faso, there was a positive benefit with adopters 17 % more productive, but once the costs of the inactivated ND vaccine are considered the net economic benefits are marginal (unpublished data). As we did not collect disease data from study flocks we do not have the data to explain this result without resorting to pure speculation, and so we have not presented the results. In India no effect was seen in a much smaller and geographically limited study (unpublished data). For the model of flock chicken production, the ownership of a poultry house had a significant effect, but not deworming or supplementary feeding. The non-significance of dewormers could be due to wider management practices such as re-releasing chickens into contaminated environments, or because we measured the absolute number of chickens taken off rather than the weight of those chickens, and indeed the impact of dewormers on chicken weight is highly variable (Phiri et al., 2007; Chota et al., 2010; Mwale and Masika, 2015; Bessell et al., 2018). Similarly for supplementary feeding, the condition of the chicken was not considered in this study, purely productivity measured by numbers of chickens. Whilst these analyses focussed on four facets of poultry production (vaccination, deworming, feeding and housing), there are a number of additional factors that are not considered here. In the outcome we consider only offtake of chickens and eggs. However, the objectives of chicken production may be different and, in particular in the context of this study, smallholders may be seeking to hold on to birds in order to grow the flock. 
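For clarity, the percentage gains quoted above follow directly from the fitted incidence rate ratios; a short illustration (values back-calculated from the reported IRR and confidence interval, not refitted from data) is shown below.

```python
# How the percentage gains quoted above follow from the model coefficients:
# IRR = exp(beta); 95% CI = exp(beta +/- 1.96*SE). Values are back-calculated
# from the reported IRR of 1.571 (95% CI 1.363-1.808), purely as an illustration.
import math

beta = math.log(1.571)                                   # fitted log incidence-rate ratio
se = (math.log(1.808) - math.log(1.363)) / (2 * 1.96)    # implied standard error

irr = math.exp(beta)
lo, hi = math.exp(beta - 1.96 * se), math.exp(beta + 1.96 * se)
print(f"IRR = {irr:.3f} -> {100*(irr-1):.0f}% more offtake "
      f"(95% CI {100*(lo-1):.0f}% to {100*(hi-1):.0f}%)")
```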
A further constraint on productivity is the lack of effective marketing channels for rural poultry; marketing difficulties, market instability and supply fluctuations often act as a considerable constraint on making a profit from selling poultry produce (Sonaiya and Swan, 2004; Queenan et al., 2016). A limitation of this study was the necessary ceiling of 75 chickens that was placed on the size of the flock. This places something of an artificial constraint, as the better, more productive flocks may have grown beyond this level. It is noticeable in Fig. 2 that the sizes of adopters' flocks are pushing the upper limit on flock size. This ceiling on flock size may reduce the observed impact on productivity by excluding those flocks that have seen the greatest increase in production. Whilst the enumerators report that almost all farmers that were approached agreed to participate in the study, we did not record the numbers of farmers that declined to participate. In future studies this would be a valuable addition to the study design, as the number that refuse to participate would give an estimate of the unobservable population. Furthermore, we did not record any interviews for data quality verification. Whilst this would be valuable, the recording could change the dynamics of the interview, or lead to farmers declining to participate. In the study population there was no significant effect of the gender of the respondent on production, and there was no significant difference between genders in terms of ND vaccine adoption. However, poultry production is known to be strongly gendered in nature, and in other settings gender is known to influence ND vaccine adoption (Campbell et al., 2018). Future studies of this nature would benefit from taking into account sociocultural issues and the role of women in poultry rearing, and could be structured to ensure that different household structures are represented in the sample and, within this, consider the sometimes complicated decision-making structures that exist (Guèye, 2000). Due to the challenges of field logistics, we did not record any breakdown of the different input costs to construct a full economic model. To do this would require a different study design involving a longitudinal study of a small number of households (Henning et al., 2008, 2009, 2013), rather than the large cross-sectional survey implemented here. Hence the model that we have developed is a model of the number of offtakes; from the modelled value data, we can infer the value of this production. Whilst further economic analyses are required to understand the wider economic benefits of husbandry practices, the input costs are relatively low. At a cost of 3 USc per vaccine dose per chicken, it amounts to 2.40 USD to maintain vaccination for one year in a flock of 20 chickens on a 3-month vaccination regime. This compares favourably to $41.36 gross benefits in meat production (if we assume that the value of a chicken that is gifted or consumed is equal to one that is sold) and an improvement in egg production amounting to $2.81 per year from a flock with 5 hens. This represents clear benefits to the smallholder as well as additional benefits in terms of the greater certainty that the flock will survive an ND outbreak. Table notes: Offtake = the number of chickens sold during the previous 12 months, the number gifted during the previous 12 months and 2 times the number consumed during the past 6 months. IRR = incidence rate ratio. 
Fig. 6. Fitted values for the model for flocks that are non-adopters and do not own a poultry house (blue line), and those that are adopters and own a poultry house (red line), fitted for Alebtong district. Broken lines represent 95 % confidence intervals. Offtake = the number of chickens sold during the previous 12 months, the number gifted during the previous 12 months and 2 times the number consumed during the past 6 months.
Conclusions
We have demonstrated that, in this study area, ND vaccination is a key intervention that can have a substantial effect on chicken flock production, measured by sales, consumption and gifting of chickens and eggs. Whilst housing also impacts on chicken production, ND vaccination had the biggest effect. This is consistent with results from elsewhere, and reinforces the need for vaccination against ND and other key diseases for improving the livelihoods of rural communities.
2020-03-26T10:20:43.751Z
2020-03-20T00:00:00.000
{ "year": 2020, "sha1": "1462d52b57fb1415341af46db1da5b5bc37805a1", "oa_license": "CCBY", "oa_url": "https://doi.org/10.1016/j.prevetmed.2020.104975", "oa_status": "HYBRID", "pdf_src": "MergedPDFExtraction", "pdf_hash": "975a36dc1fa2724b49b246d4fe6906412cfc48ca", "s2fieldsofstudy": [ "Agricultural and Food Sciences" ], "extfieldsofstudy": [ "Medicine", "Biology" ] }
12343732
pes2o/s2orc
v3-fos-license
Breakdown of C3 after complement activation. Identification of a new fragment C3g, using monoclonal antibodies. The physiological breakdown of C3 has been studied using monoclonal anti-C3 antibodies, and it has been found that the later stages of this process--the breakdown of C3bi--is more complex than had previously been recognized. C3bi is the reaction product produced from C3b by the action of factor I which, in the presence of factor H, produces a double cleavage in the alpha chain of C3b. It is here reported that, both on cells and in the fluid phase, the breakdown of C3bi in serum gives rise to two products: C3c and the product previously described as alpha 2D, which we now propose to designate C3d,g. Alpha 2D differs from C3d in that it contains an additional fragment of approximately 8,000 mol wt that carries the antigenic determinant for the clone 9 monoclonal anti-C3 antibody. C3g cannot be precipitated by anti-C3 antisera and therefore behaves as a uni- or bideterminant antigen. The cleavage of C3d,g to C3d and C3g does not occur in sterile serum. It is also still uncertain what enzyme cleaves C3bi to C3c and C3d,g in plasma. Plasmin can do so in vitro, but plasminogen-depleted serum can still produce the cleavage. The antigenic determinant recognized by clone 9 in C3 is not exposed in C3 or C3b, but appears as a neoantigen in C3bi (and in C3d,g). Anti-C3g therefore is a potentially useful ligand for detecting complement-activation products. C3g represents a new, highly anionic C3 fragment and seems not to be identical with the C3e fragment described by others. Where the monoclonal antibodies were labeled with 12sI, immunoglobulin G (IgG) fractions of the ascites were prepared by fractionation on DEAE-Cellulose. Iodination of monoclonal antibodies was performed using the iodogen procedure (8), and offering 1 mCi of 125I for every mg of protein. Complement Components. C3 was purified at the Scripps Clinic and Research Foundation by the method of Tack and Prahl (9), and at the Medical Research Council Centre by the method of Harrison and Lachmann (10). C3 was radiolabeled both by the iodogen procedure and by the Bolton and Hunter (11) procedure. Factor H was purified as described in (10). Factor I was purified substantially as described in (4). Bovine conglutinin was purified from bovine serum as described (12). Cobra venom factor was purified from Naja Naja venom as described (13). Enzymes. The enzymes thermolysin, cathepsin G, leukocyte elastase, and kallikrein were kindly donated by Dr. Alan Barrett of the Strangeways Laboratory, Cambridge, England. Trypsin was purchased from Sigma Chemical Co., Ltd., London, England. Plasmin was purified by chromatography of whole plasma on Sepharose-lysine, which is used as an early step in the fractionation of C3 (10). Sepharose-lysine columns, after extensive washing, were eluted with 0.2 M epsilon-aminocaproic acid, and the plasminogen dialyzed and concentrated by uhrafihration. Plasminogen was activated with streptokinase obtained from Sigma Chemical Co., Ltd. The amount of streptokinase used was titrated to obtain optimal lysis of fibrin on a fibrin plate. The concentration of plasminogen was compared with that in undiluted human serum, again assessed by lysis of a fibrin plate. The fibrin plates were prepared as described (14), and plasminogen concentrations were measured by the diameter of the zone of fibrinolysis produced. Complement Intermediates. 
The preparation of sheep erythrocytes (E), their sensitization with antibody (preparation of EA), and the preparation of yeast-treated human and guinea pig complement (R3 reagents) were performed as described (15). (Abbreviations used in this paper: E, sheep erythrocytes; EA, erythrocyte antibody; EAC, EA treated with complement; PAGE, polyacrylamide gel electrophoresis.) The intermediate EA treated with complement (EAC) was made by treating 1 ml of 10% EA with 1 ml of human R3 reagent for 2 min at 37°C followed by washing. This intermediate contains, as far as C3 fragments are concerned, a mixture of C3b and C3bi. More defined intermediates were made as follows: 1 ml of 10% EA was treated with 1 ml of guinea pig R3 reagent for 5 min at 37°C to make the intermediate EAC142. The mixture, after incubation, was chilled to 4°C and washed rapidly. It was then treated with C3, using 1 mg of C3 for 1 ml of 10% EAC142. Incubation was continued for 15 min at 37°C with stirring. The EAC3b was washed and used as such. To convert EAC3b to EAC3bi, the cells were treated with purified preparations of factor H and factor I for 15 min at 37°C. The EAC3bi were washed and used as such. To convert EAC3bi to EAC3d, the cells were treated with a final concentration of 2 μg/ml of trypsin for 5 min at 37°C. The cells were then spun down and resuspended in a concentration of 5 μg/ml of soy bean trypsin inhibitor and washed twice more. The treatment of EAC3bi with enzymes other than trypsin is described in Results. Polyvalent Antisera. Polyvalent antibody to C3 reacting with both C3c and C3d was kindly given by Dr. J. Bradwell, Immunodiagnostic Research Laboratories, University of Birmingham, Birmingham, England. An antiserum reacting predominantly with C3d was kindly given by Ortho Diagnostics, Raritan, NJ. An antiserum reacting exclusively with C3c was given by Dr. H. Müller-Eberhard, Scripps Clinic and Research Foundation. Anti-rat IgG antiserum was raised by immunization of a rabbit with rat IgG. This antiserum was absorbed with EAC for use in the agglutination and agglutination inhibition reactions. Coprecipitation Assays. In these assays, immunoelectrophoresis was performed on microscope slides. 1% agarose in 0.05 M veronal buffer, pH 8.6, was used as support, and electrophoresis was carried out at 7 mA/slide at 4°C until the albumin Bromophenol blue marker had reached the end of the slide. The antibody slots were filled with a mixture of polyclonal anti-C3 and 125I-labeled monoclonal antibodies. Development was allowed to take place for 24-48 h. The slides were then exhaustively washed, dried, stained with Coomassie Brilliant Blue, and dried again. Finally, they were submitted to autoradiography, usually allowing an exposure of 1-3 d. Agglutination Assay for Monoclonal Antibodies. Monoclonal antibodies were titrated in three steps in microtiter plates using 50-μl volumes. 50 μl of 0.5% complement intermediate cells were added and mixed well. The cells were allowed to settle completely at 4°C and then read for agglutination. At this stage, agglutination titers were very low and were not recorded. The plates were then centrifuged for 1 min at 1,000 rpm, and the supernatant was gently shaken off. The cells were resuspended in 50 μl of rabbit anti-rat IgG, diluted 1:400. Cells were then allowed to settle once more, and the agglutination patterns read visually. Assay of C3 Fragments by Inhibition of Agglutination.
Sources of C3 fragments were diluted in microtiter trays in 50-μl vol, and 50 μl of the monoclonal antibody was added (ascites diluted 1:250,000) and the mixture incubated for 1 h at 4°C. The plates were then centrifuged at 1,000 rpm for 1 min and the supernate gently shaken off. The cells were resuspended in 50 μl of 1:400 sheep anti-rat IgG and allowed to settle. Agglutination was read by pattern. Polyacrylamide Gel Electrophoresis (PAGE). PAGE analysis was carried out by standard techniques (16). The Antigenic Determinants Recognized by Three Monoclonal Antibodies Against C3. REACTION WITH CELL-BOUND C3 STUDIED BY ANTIGLOBULIN AGGLUTINATION REACTIONS. Table I shows the agglutination titers obtained. Clone 3 and clone 4 show the reactivity to be expected of anti-C3d and anti-C3c, respectively. Weak reactivity of clone 4 is, however, seen with EAC3d cells. This can be ascribed to the fixation of some C3 to EAC142 in a form that is insusceptible to cleavage by I and H. Clone 9 reacts exclusively with a neoantigen exposed in C3bi. To this extent it resembles bovine conglutinin, which reacts with a carbohydrate determinant exposed only in C3bi (4). Clone 9, however, does not react with the same determinant as conglutinin. Thus, the reaction of clone 9, unlike that of conglutinin, is not dependent on calcium or inhibited by EDTA. Clone 9 is not inhibited by 0.1 M N-acetylglucosamine, which inhibits conglutination (17), nor does it react with zymosan in the absence of complement as conglutinin does (18) (data not shown). STUDIES BY COPRECIPITATION USING RADIOLABELED MONOCLONAL ANTIBODIES AND THE BREAKDOWN PRODUCTS OF PURIFIED C3 AND OF C3 IN SERUM. In these experiments the source of C3 is subjected to electrophoresis and then precipitated with a polyvalent anti-C3 antibody, to which is added a radiolabeled monoclonal antibody. Two precautions have been found necessary. One is to use only one monoclonal antibody on each slide. Because monoclonal antibodies have been found to diffuse through precipitin lines (7), they can produce staining of lines in unexpected parts of the slide, which can cause confusion if more than one monoclonal antibody is present on a slide. A further problem is that changes in the antigenic structure of C3 may occur after electrophoresis, so that a line in the C3 position might be C3b or even C3bi, or a line in the C3b/C3bi position might be a mixture of C3c and C3d. To minimize the risk of post-electrophoretic breakdown, only purified IgG fractions have been used as antisera in the trough. It can be seen (Fig. 1) that C3b and C3bi cannot be distinguished by their electrophoretic mobility in these conditions and both give lines in the "beta 1A" position. C3bi is split to C3c and C3d by trypsin. C3c is slightly faster in mobility than C3bi and C3d is slightly slower. This is in contrast to the situation using the aged, cobra venom-treated serum, which contains C3c and a faster fragment precipitated by anti-C3d. This fragment is alpha 2D and is the C3d-containing fragment generated by enzymes occurring in human serum. On autoradiography clone 3 (anti-C3d) reacts as expected with C3, C3b, C3bi, C3d, and the C3 in normal human serum, but not with C3c. Clone 4 (anti-C3c) reacts with C3, C3b, C3bi, and C3c, but not with C3d or alpha 2D. Clone 9 reacts with C3 and with C3b (which was unexpected, as it failed to react with cell-bound C3b).
It reacts strongly with C3bi but with neither of the two trypsin digest products shown by precipitation--C3c and C3d. On the other hand, it does react strongly with the alpha 2D line. There are, therefore, at least two differences between alpha 2D and C3d. They differ in electrophoretic mobility, and alpha 2D carries the antigenic determinant reacting with clone 9, whereas C3d does not. (Fig. 1. Coprecipitation assay using radiolabeled monoclonal antibodies. Immunoelectrophoreses were set up as shown in the top row and electrophoresed until the albumin marker reached the right-hand end of the slide. After development, washing, and staining for protein (2nd row), autoradiography was performed. C3 was used at 1 mg/ml, and the breakdown products were generated from this with little dilution, as follows: C3b was generated from C3 using EAC142 (0.1 ml packed cells/mg C3 for 30 min at 37°C); C3bi was generated from C3b using purified factors I and H (1% by weight of C3b treated) for 1 h at 37°C; C3c + C3d was generated from C3bi by cleavage with trypsin, 2 μg/ml, for 10 min at 37°C. The reagents made from whole serum (NHS) containing 0.01 M azide were: NHS/CVF, serum treated with 5 μg purified CVF/ml for 45 min at 37°C; NHS/CVF/aged, NHS/CVF incubated at 37°C for 20 h; NHS/CVF/aged (60°C 1 h), NHS/CVF/aged heated at 60°C for 1 h. This treatment destroys the antigenicity of C3d.) Does Trypsin Digestion of C3bi Produce a Nonprecipitable Fragment Reacting with Clone 9? To answer this question, C3 fragments were tested for their ability to inhibit the binding of clone 9 to EAC3bi. Table II shows the results for the C3 fragments tested by the inhibition assay described in Materials and Methods. Purified C3 gives much better inhibition of clone 3 and clone 4 than it does of clone 9. A variety of C3 preparations have been tested, and the more native preparations give the lower inhibition of clone 9. C3b does not inhibit clone 9 at the concentrations tested, and one may therefore conclude that the clone 9-reactive antigen is not exposed. C3bi inhibits all three monoclonal antibodies well. Of particular interest is the result with the tryptic digest of C3bi, which on precipitation analysis shows only C3c + C3d. This digest inhibits clone 9 well and therefore must contain a fragment carrying the determinant for clone 9 that is not capable of being precipitated. It is proposed to call this fragment C3g. The Size of Alpha 2D and C3d. Alpha 2D, by virtue of containing C3g, should be larger than C3d. To see whether this was the case, immunoelectrophoresis was carried out on serum that had been supplemented with 125I-C3 and then treated with cobra venom factor (CVF) at 37°C for 20 h, using anti-C3d antiserum to precipitate the alpha 2D formed. The anodal part of the alpha 2D line, which can readily be separated from the C3c line, was cut from the immunoelectrophoretic plate, the agarose gel broken up by freezing and thawing, and the protein was eluted by boiling with sodium dodecyl sulfate. This alpha 2D preparation was then subjected to PAGE and autoradiography and compared with C3 and a tryptic digest of C3bi. The results are shown in Fig. 2. It can be seen that the alpha 2D is ~8K larger, at 41K, than is the C3d fragment at 33K. A low molecular weight band can be seen on the track containing trypsin-digested C3bi but cannot be given a size on this gel. Can Cell-bound C3bi Be Converted to Cell-bound Alpha 2D?
The summary of the reaction of the three monoclonal antibodies with soluble C3 fragments is shown in the upper part of Table III. It can be seen that C3bi, alpha 2D, C3c, and C3d can be distinguished by the pattern of reaction with the monoclonal antibodies. Similarly, it is therefore possible to detect cell-bound alpha 2D, showing reaction with clone 3 and clone 9 but not with clone 4. Such reactivity was found initially on cells of patients with cold hemagglutinin disease in vivo, but in the present experiments this question (Table footnotes: Enzymes that can generate alpha 2D globulin have differential destructive activity on clone 4 and clone 9. These are boxed. At concentrations between those given in the two sides of the box, alpha 2D globulin is generated from C3bi. * Concentrations compared with those generated in normal human serum with streptokinase. EAC43bi at zero times need one dose of conglutinin to give positive conglutination. * Normal human serum heated at 56°C for 30 min.) Enzymes (thermolysin is the best example and trypsin is another) split C3bi to alpha 2D at a lower concentration than that needed to bring about the further split of alpha 2D to C3d. On the other hand, leukocyte elastase and cathepsin G cannot be used to make an alpha 2D intermediate, and kallikrein will not destroy C3bi at the concentrations tested. Plasmin in the physiological concentration range gave rise only to alpha 2D. Loss of Conglutinin Reactivity of EAC3bi on Incubation in Serum with and without Plasmin. These experiments show that the split from C3bi leading to alpha 2D is not a split at an alternative site to that producing C3d, but that the two splits are sequential; a fragment carrying the clone 9 determinant is released when alpha 2D is converted to C3d. On the basis of all the findings recorded above it seems appropriate to designate the material hitherto called alpha 2D as C3d,g. What is the Physiological Enzyme that Brings This About? The enzyme is therefore probably present at low concentration. From its specificity plasmin would seem the most likely candidate. However, attempts to show that the breakdown of C3bi (as shown by loss of reactivity with conglutinin) can be retarded by depleting plasminogen or accelerated by activation with streptokinase have both so far been unsuccessful (Table V), and this raises doubts of whether plasmin is indeed the enzyme concerned. However, where plasmin inhibitors are absent plasmin will cleave C3bi efficiently. C3g Can Be Eluted from Complement-coated Cells. These experiments were done using 125I-C3. The results shown in Fig. 3 were done with iodogen-labeled C3, but those obtained with the Bolton and Hunter (11) reagent were not appreciably different. The sequence of conversions is outlined in Fig. 3. EAC3b was generated from EAC142 and radiolabeled human C3. About 7% of the offered C3 was bound to the cells. The EAC3b was treated with I and H to convert it to C3bi. This process was accompanied by the elution from the cell of 30-40% of the C3 counts (4). This material presumably derives from C3b that was not covalently bound to the cell surface and is eluted in the form of C3bi. It has been assumed that the specific activities of C3b and C3bi remain the same. On this basis, 32,000 molecules of C3bi/erythrocyte remain in this particular experiment. The cells were then eluted with plasmin to release the C3c and to leave the alpha 2D (C3d,g) on the cells. The specific activity of C3c (818 cps/μg) is higher than that of C3bi (658 cps/μg), whereas that of C3d,g is lower (253 cps/μg).
C3d,g cells were then treated with trypsin to release the C3g. This fragment also is of relatively low specific activity (156 cps/μg) compared with the starting C3, and as there is also very little of it, it is not surprising that radioactive C3g bands have not been detected. The C3g preparation was tested by inhibition of agglutination and gave a titer of 27 (corresponding to ~25 μg/ml C3bi) on inhibition of clone 9, but no inhibition at all with clones 3 and 4. Fig. 3 shows in schematic form what we now believe to be the breakdown pattern of C3bi. The scheme is similar to that given by Harrison and Lachmann (19) except for the transposition in the alpha chain of the alpha d,g and the alpha c fragments (and for redesignation of the fragment there called C3e as C3g). It also shows some resemblance to that of Nagasawa and Stroud (20). The sequential nature of the reactions breaking down C3bi has now been established both on cells and in free solution. It is clear that the initial cleavage of C3bi in serum splits the molecule into C3c, which in the case of cell-bound C3bi is eluted from the cell, and the intermediate identified as alpha 2D (C3d,g) on immunoelectrophoretic analysis. This first split of C3bi into C3c and C3d,g without the second split of C3d,g to C3d and C3g can be brought about by low concentrations of a number of proteolytic enzymes, notably trypsin, thermolysin, and plasmin. Some other enzymes--leucocyte elastase and cathepsin G--do not produce the first split without the second. It is not possible from this distribution of enzymic activity to make any predictions as to the chemical nature of the bond split. It is presumably in the tertiary structure of the fragment that the fine specificity rests, which determines which of these cleavages happens first. The enzymes that bring about C3bi breakdown in vivo are still not clearly known. The enzyme that brings about this split extravascularly may well be plasmin, inasmuch as it has the right specificity, but we were unable to demonstrate any acceleration of the split with streptokinase, nor was it absent in sera passed through Sepharose-lysine to remove plasminogen. Thus it seems that there may be other enzymes that are capable of this split. It has recently been suggested (21) that factor I itself may split C3c from cell-bound C3bi. In the fluid phase, however, it has been shown using purified components (19, 22) that factor I does not split C3bi. It is also not clear what enzymes subsequently split C3d,g to C3d and C3g. In spite of the conventional view to the contrary, it must be doubted whether this reaction normally goes on in plasma either in vitro or in vivo. In vivo data show that the final fragment of complement activation found on erythrocytes in cold haemolytic antibody disease and in mesangiocapillary glomerulonephritis (23)--two of the diseases in which the most intense complement activation occurs--is alpha 2D (C3d,g) and not C3d or C3g. (We have never seen C3d in aged serum, except in one instance when no sodium azide was added to the serum.) The split of C3d,g to C3d and C3g may require enzymes not present in normal plasma but derived from either bacteria or from the breakdown of cells at inflammatory sites. Discussion The fragment called C3g in this paper is defined by its reactivity with clone 9. It has been shown that this fragment is part of alpha 2D and can be cleaved from it by trypsin.
The C3g antigenic determinant is poorly if at all present in native C3 and is absent from C3b both on ceils or in solution. The observation that the anti-C3g antibody reacts on coprecipitation assays with C3 and C3b must be taken to represent changes either induced subsequent to electrophoresis, possibly by contaminating enzymes in the antibody, or brought about by the reaction with the polyclonal anti-C3 itself. This latter mechanism would be analogous to the situation described by Coombs et al. (24) for C4, who showed that, whereas C4 on its own does not react with the C4 receptor on guinea pig erythrocytes, complexes of C4 and anti-C4 will indeed do so. C3g is, however, well exposed in both C3bi and in alpha 2D, which we now propose should be called C3d,g. To this extent C3g antigen is a neoantigen appearing on complement activation, and anti-C3g may therefore find a use in the demonstration of complement activation products. C3e, as a separate acidic fragment of complement, was described by Ghebrihewet and MiJller-Eberhard (25), and they identified their fragment with the leukocytosis-inducing fragment earlier described by Rother (26). C3g as defined by reactivity with clone 9 resembles the fragment described by Ghebrihewet and Miiller-Eberhard (25) approximately with respect to molecular weight and to a highly anodal mobility (because alpha 2D is so much more anodal than C3d). However, C3e was derived from C3c, whereas the present fragment is undoubtedly derived from alpha 2D and is not present in C3c. Furthermore, we have been unable to precipitate C3g with any of the polyclonal anti-C3 sera available to us, including that used to precipitate C3e in the earlier study of Ghebrihewet and Mfiller-Eberhard (25). We were unable to produce precipitation even with a mixture of clone 9 and polyvalent antisera, suggesting that it is difficult to make antibodies to more than one (or two) C3g determinants. The biological properties of the C3g will be reported together with details of purification in a separate publication, but preliminary testing failed to show consistent production of leucocytosis in rabbits. The identification of C3g with the C3e of Ghebrihewet and Miiller-Eberhard is therefore unlikely and it is probably a hitherto unrecognized C3 fragment. Summary The physiological breakdown of C3 has been studied using monoclonal anti-C3 antibodies, and it has been found that the later stages of this process--the breakdown of C3bi--is more complex than had previously been recognized. C3bi is the reaction product produced from C3b by the action of factor I which, in the presence of factor H, produces a double cleavage in the alpha chain of C3b. It is here reported that, both on cells and in the fluid phase, the breakdown of C3bi in serum gives rise to two products: C3c and the product previously described as alpha 2D, which we now propose to designate C3d,g. Alpha 2D differs from C3d in that it contains an additional fragment of ~8,000 mol wt that carries the antigenic determinant for the clone 9 monoclonal anti-C3 antibody. C3g cannot be precipitated by anti-C3 antisera and therefore behaves as a uni-or bideterminant antigen. The cleavage of C3d,g to C3d and C3g does not occur in sterile serum. It is also still uncertain what enzyme cleaves C3bi to C3c and C3d,g in plasma. Plasmin can do so in vitro, but plasminogen-depleted serum can still produce the cleavage. 
The antigenic determinant recognized by clone 9 in C3 is not exposed in C3 or C3b, but appears as a neoantigen in C3bi (and in C3d,g). Anti-C3g therefore is a potentially useful ligand for detecting complement-activation products. C3g represents a new, highly anionic C3 fragment and seems not to be identical with the C3e fragment described by others.
2014-10-01T00:00:00.000Z
1982-07-01T00:00:00.000
{ "year": 1982, "sha1": "62859d6edd80a0674b643dc8df022c5b8699c332", "oa_license": "CCBYNCSA", "oa_url": "http://jem.rupress.org/content/156/1/205.full.pdf", "oa_status": "BRONZE", "pdf_src": "PubMedCentral", "pdf_hash": "62859d6edd80a0674b643dc8df022c5b8699c332", "s2fieldsofstudy": [ "Biology" ], "extfieldsofstudy": [ "Biology", "Medicine" ] }
261387883
pes2o/s2orc
v3-fos-license
Embodying the inquiry: Disaster, affectivity, and the localized politics of security The responsibilizing of civil society for security has been well analysed in recent years, but the place of the public inquiry as an important site of negotiations over issues of affect in security has been largely under-acknowledged. This article investigates the scope, recommendations, and forensic investigation of the Manchester Arena Inquiry, an inquiry established in the wake of the 2017 bombing and which prefigures the gaze of the UK’s forthcoming ‘Protect Duty’. Once formalized, this Duty will situate venue workers as crucial embodiments of national counter-terrorism priorities. The paper shows how contestations over affective embodiments of security are navigated across the Inquiry, with national security articulated as being produced exclusively in local spaces, and through a body divorced from its experience via sophisticated management techniques. We find how security is imagined through local workers becoming ‘watchfully-anxious’, with routinized tasks and training deployed to generate this necessary destabilization. Bodies of venue staff must be displaced and moved around, opening space for racialized encounters – where these encounters are rendered necessarily productive of security, regardless of their result. Workers are required to confess, defending their role in security failure and situating them within national priorities. Through close analysis of the Inquiry’s reports, and drawing from interviews with UK disaster management experts, the discussion reveals how the Manchester Arena Inquiry positions national security as produced through low-paid workers defending the minutiae of their jobs in the context of the local venue. Through its forensic investigation and detail-oriented scope, the public inquiry is revealed as an important technology in the (re)production of localized forms of security knowledge, which in turn delegitimizes knowledge of disaster as structural or political. 
Introduction The bombing of the Ariana Grande concert at the Manchester Arena in 2017, where 22 people were killed and more than 1,000 injured (Manchester Arena Inquiry 2020), was responded to through an array of commemorative events from concerts, vigils, and a memorial garden.As with certain other security failures or instances of public concern, notably Bloody Sunday, the Iraq War, and the Covid pandemic, a public inquiry was also established in the aftermath of the Arena attack to investigate how the disaster occurred.The Manchester Arena Inquiry began in 2019, and has seen emergency responders, private security operatives, and other workers present witness statements in forensic style in response to the Inquiry's interrogation.These minute-by-minute accounts of the events around the bombing have been probed by the Inquiry in order to determine where prevention might more effectively have taken place (Manchester Arena Inquiry 2021).The scope and detail of the proceedings inform the statutory framework of the forthcoming Protect Duty, legislation which is set to require "improv[ements in] protective security and preparedness [as instituted by]… venues and organizations owning, operating or responsible for publicly accessible locations" (Home Office 2021, 5,14) In other words, the UK Government response to the Manchester Arena attack will make individual venues and event organizers statutorily responsible for counter-terrorism, with the Manchester Arena Inquiry exploring the terrain for such responsibility.Whilst the principles behind this Duty have been central to the 'Protect' strand of the UK's broader counter-terrorism strategy since its inception (Coaffee 2010, 953), the Duty moves away from previous approaches of encouraging the owners of crowded locations to install proportionate protective security.The enforcement of the Protect Duty is likely to take place through civil rather than criminal sanctions where failures occur (Home Office 2021, 28), with the oversight and enforcement of the Duty operationalizing many of the themes investigated during the public inquiry.Could more vigilant venue managers and security operatives, and more robust processes, have prevented the attack or mitigated the damage caused?The Manchester Arena Inquiry therefore plays an important role in making visible the construction of knowledge about securityand about the way responsibility for the aversion of disaster is imagined. In excavating the moments and minutiae of the days and hours before the bombing, the Manchester Arena Inquiry has been substantially more detailedand concerned with reproducing securitythan previous public inquiries into security breaches.The 2005 London bombings, for which no public inquiry has yet taken place, was investigated by the London Assembly.This investigation centred the bureaucratic failings of the response to the bombings in London, instead of "becom[ing] involved in 'what if?' 
scenarios" (quoted by Committee Chairman Richard Barnes in Greater London Authority 2006, 1; also see Edkins 2008).Where the response to the 2005 bombings was concerned with making more effective the response to an attack that had already taken place, the Manchester Arena Inquiry expended substantial effort in thinking through how security might be produced, staving off in particular forms of violence rendered terrorism.In other words, a much more security-oriented approach structured the Manchester Arena Inquiry in comparison to previous inquiries, and in particular to the investigation into the 2005 London bombings.We find how striking articulations of security are evident across the Manchester Arena Inquiry, where these had been absent in previous investigations.Security risk is purported across the Manchester Arena Inquiry to be manageable, including through the pursuit of racialized and gendered bodies for its own sake.The Inquiry opens up novel and sophisticated routines through which the security operative might better disengage from their own experience, in order that security is produced.In effect, the paper illuminates the candid nature of negotiations over security knowledge-praxis, as manifested in the Manchester Arena Inquiryand about the complex formulations of (white) anxiety through which security is made.Moreover, these complex formulations performatively erase violence as existing structurally.Instead, through the embodied, forensic interrogation of local spaces as crucial sites of the production of non-violence (evident across the Inquiry), the structural and the international are written out of negotiations over violence and security.Local spaces emerge as the locus of the production of national security, with policy formulations effectively placing expectation for terrorism-prevention on minimum wage venue workers. 
This article analyses how those with responsibility for local spacesand how local space itselfare reconstituted by the forensic interrogation of the Inquiry.With little existing critical work into how public inquiries operate as important sites of contestation over security knowledge-praxis, this article utilizes literature which treats security as a performative exercise (Butler 2006;Massey 2005;Massumi 2015), applying it to the functions of inquiries, thereby developing insights into the way investigation reveals how security is known, produced, and contested.In effect, the contribution of the paper speaks to what is sayable and imaginable through security, thinking through the affective implications of such conjugations of security.The paper takes issues of affect and affectivity to be concerned with "prioritiz[ing] the body as a means for making sense of the world [and is concerned with how the body] experience[s], encounter[s], and perform[s] life among other bodies within material space" (O'Grady 2018).In so doing, the paper at hand analyses how bodies are negotiated in their engagement with local venue spaces across the Inquiry.The public inquiry is not analysed as causally-productive of security embodiments, but as a site at which contestations over security knowledge-praxis are particularly evident, given their close connection to specific breaches of security and the forensic nature of post-breach investigation.Importantly, articulations of security emerge across the paper as meaningfully divorced from notions of probability or likeliness: a central focus of the paper therefore concerns the very erasure of local experience through formal processes of a public inquiry as productive of security, even whilst local experience is written as the crux of security politics.Of note, 'terrorism' is taken across the paper to be racially formulated: in other words, the figure of the terrorist isand the act of terror are perpetrated bya racialized, gendered, classed other (Said 1987;Puar and Rai 2002;Boukalas 2016;Ali 2020).The suffering produced by violence labelled 'terrorism' is obviously substantial for those affected and their relatives.The focus of this paper at hand is not to think through such suffering, but instead to illuminate the sophisticated and racialized imaginaries of security that write such violence as governable and manageable.Given the remarkable policy attention directed towards the prevention of terrorism (Stewart and Mueller 2014), the article focuses on how security is conjugated through economies of space and affect, in this policy context. 
Through a discourse analysis of the Manchester Arena Inquiry report and interviews with two disaster experts involved in the response to the bombing, the discussion illuminates three areas of localized conduct where the scope and recommendations of the reconstitute security responsibilities.The paper first analyses how the Inquiry situates a transition from watchfulness to anxiousness in the practice of local security operatives, framing this as constitutive of greater security.Secondly, the paper examines how the Inquiry criticizes and reconstitutes the embodied, performative movement of staff around the venue, configuring bodily displacement of local workers as crucial to counter-terrorism success.Finally, it explores how the Inquiry requires local staff to explain and justify their conduct under oath, rendering the performative power of their confession as necessarily constitutive of security (Mills 1995).In this final section I draw from two interviews I conducted with senior UK disaster experts closely connected to the response to the 2017 attack, in order that the implications of the local positioning of security across the Inquiry might be made more visible. Ultimately, the paper traces the investigative trajectory and recommendations of the first report of the Manchester Arena public inquiry, as an inquiry which has been much more active in offering specific and extensive recommendations for the production of security than other inquiries, notably the Iraq Inquiry (The Iraq Inquiry 2016).As it makes explicit "recommendations as to what the key elements of a Protect Duty should be" (Manchester Arena Inquiry 2021, 61), this inquiry goes beyond other inquiries in its wide-ranging assertions to inform policy about how local (security) knowledge and practice should be differently enacted.Analysing the productive inclination of the Manchester Arena Inquiry can help make sense of the (evolving) place of inquiries in the making of security knowledge-praxis, and in their place in writing what is known about security failure and its aversion.The discussion illuminates the politics of security knowledge articulated across the Manchester Arena Inquiry report, showing how the process of its investigation and recommendations localizes the site of risk and risk-management by mobilizing workers' anxious engagement with/in space (Massey 2005;Purnell 2021).This article connects the production of security knowledge in the wake of disaster, to the implications for bodily and spatial security enactments. 
The productive power of investigation With attacks of terror from the past increasingly being reconfigured productively in terms of the missed opportunities they presented (BBC 2021; Smith 2021), the proposed Protect Duty, prefigured by the Manchester Arena Inquiry investigation, constitutes a continuation of security logics that illuminate the manageability of terrorism threat, where the investigation into the 2005 London bombings asserted the attacks could not have been prevented (de Goede 2014). What can analysing the Inquiry's scope and recommendations tell us about how security logics are written into public space: how is the conduct of event staff imagined through its recommendations, and how are these staff expected to enact national security priorities, particularly in the perpetual absence of terrorism danger almost all staff face? This article connects local embodiments and spatialities of security, with implications for the role of inquiries within a governmental framework as a response to terror (Closs Stephens et al. 2021). Much existing literature on inquiries asserts that 'truth' might be determined by formal investigation, and that more transparency and accountability might aid this process (Beer 2011; Roach 2014; Thomas 2015, 2020; Robinson 2017). This scholarship often scrutinizes the technical functionalities and histories of inquiries and how they might better restore public trust (Burton and Carlen 1979; Beer 2011; Hills 2015; Robinson 2017), often assessing the balance of public interest with regard to secrecy, or the processes by which inquiries occur, focusing for instance on openness (Thomas 2015, 2020). In his seminal work Public Inquiries, Jason Beer QC outlines a number of functions that a public inquiry fulfils, writing that "the first function of an inquiry is often said to be establishing the facts" (2011, 2). Indeed, inquiries are often framed in public discourse around establishing 'truth' through a "full and fair account of what happened" (PASC 2005, 9), with literature equally centring on the processes through which these 'facts' emerge (Beer 2011; Scraton 2004; Thomas 2015, 2020). Yet, what constitutes 'the facts' or 'truth' is situated, as post-colonial, feminist and other scholarship makes evident (see: Spivak 1988). Indeed, as Thomas (2017) illuminates, inquiries are often structured through juridical individualism. In other words, blame and responsibility for security failure is often attributed to individuals rather than the conditions that make particular decisions possible. This paper builds on Thomas' critique of the actor-centric gaze of inquiries (2017) by assessing the active imperative of the Manchester Arena Inquiry to provide extensive recommendations, and argues that post-disaster investigation itself contributes to the contingent production of truth (for more work on the productive nature of post-disaster investigation, see: Edkins 2008; de Goede 2014). Moreover, such scholarship on the public inquiry only peripherally captures the performative capacity of a public inquiry.
To date, analysis of the Iraq War dominates most of the literature around inquiries (Hills 2015; Hills 2015 ;Robinson 2017;Thomas 2015Thomas , 2017Thomas , 2020)yet the recommendations of the Iraq Inquiry were sparse and discrete.The Chilcot Report simply laid out its findings of fault and failure, which left "Partners Across Government [to] develop a… 'Chilcot Checklist'" of their own in response to these findings (MOD 2017).The Manchester Arena Inquiry report, however, offers extensive and specific recommendations designed to be integrated into the forthcoming Protect Duty, articulating new forms of being and movement within/through space of local security practitioners and event security teams (Massey 2005;Purnell 2021).The paper builds upon and advances existing scholarship about public inquiries, and argues that 'the official inquiry'through its scope and emphasis on making recommendationsoperates as a tool of governmental power (Closs Stephens et al. 2021;Thomas 2017).Assessing the politics of a post-terror response, the main contribution of this paper is therefore to analyse how the public inquiry plays an important and contested role in reproducing particular discourses about national security, and how they contribute to the affective "trigger[ing]" of workers' bodies and the space within which they operate (Aradau and van Munster 2012: 235; also see: Massey 2005;Butler 2006;Massumi 2015;Purnell 2021). Watching and alertness This first substantive section explores how visual security is navigated by the Manchester Arena Inquiry, by examining how local security operatives are critiqued as watchful observers during the proceedings, and how their roleand securityis rearticulated and reconstituted by the Inquiry's recommendations.Security measures, from airport bag checks to the reporting of suspicious activity in a crowded venue, are concerned with categorizing unfamiliar and unknown entities, so that they can be known and acted upon (Aradau and van Munster 2007).Importantly, what counts as unfamiliar and unknown is conjugated through racialized borders, borders which are also gendered and classed (Sian 2017;Ali 2020).In other words, the process of identifying possible security risk is constitutive of the safe, good, knowable 'us'positioned as distinct from unknown, unfamiliar, racialized 'Others' (Noble 2015), as becomes clearer later in the paper.But for the purposes of this section, seeing is a practice central to this practice of identification and recognition, and was crucial to how the Inquiry investigated the failure to stop the bombing in 2017.Claims of recognition are made by inscribing threat onto Others, which often take place through visual interventionssuch as people racialized as Muslims being situated outside of whiteness and safety, for the physical attributes like having a beard or wearing a hijab (see Sian 2017;Younis and Jadhav 2020).The centrality of sight and seeing in the production of threat narratives has been recognized in the recent turn towards examining the visual politics of security (Amoore 2007;Sian 2017;Bleiker 2018;Martin 2018;Ali 2020;Krahmann 2020).Adjustments to staff practices situated as necessary by the Inquiry highlight how visuality is far more than just a cognitive exercise: instead, watching and observation necessarily incorporate systems of embodied routines and rituals.These 'visual' rituals are then castigated for being insufficient, in light of narratives about ubiquitous threatdespite the continual absence of bombings almost every member 
of staff will ever face.The public inquiry is shown to provoke new forms of embodied exercises to motivate forms of alertness that can be defended as sufficient under the interrogative questioning of authorities, as this response to terrorism remakes what is known about the aversion of disaster. Issues of visual vigilance and watchfulness are centred around the use of CCTV in the Manchester Arena Inquiry report.Staff were criticized for not having identified bomber Salman Abedi during his hostile reconnaissance of the building, with CCTV articulated as a primary means by which prevention could have occurred.Three main areas of concern were raised by the Inquiry with regard to CCTV use: the existence of a Blind Spot without camera coverage; inconsistent observation of the CCTV screens; and an insufficiently-suspicious gaze from those monitoring the screens.The Inquiry highlighted that had these practices been sufficiently rigorous, the likelihood of the attack being as devastating as it was would have been diminished.These three factors are then articulated as points of failure, and as spaces for learning in the production of security, reconstituting what 'national security' means for security and the body at the local level. Despite the near-constant CCTV observation around the time of the concert, the Inquiry report argued that this approach was inadequate, both on technical (near-constant) and conceptual grounds (mere observation).Several minutes were spent away from the CCTV screens by the security team in the operation room on the night.The team was not criticized primarily for this absence in the report, not least because they were doing other tasks they had been instructed to perform.Instead the report highlighted the lack of restless, active engagement with the screen by staff as comprising a security failure, and a necessary point of future learning: There was a general problem with SMG's CCTV system and its approach to it.During show mode, those… who assumed responsibility for control of the CCTV system did not monitor it constantly.What I mean by monitor in this context is a person constantly reviewing images in real time, proactively, with a view to identifying suspicious activity (Manchester Arena Inquiry 2021: 129). Here we see that the constant watching of screens is articulated as necessary, but more notably that the watching of CCTV screens alone is insufficient.Instead, security staff should have produced, and should in future produce, security in every moment by being "proactively" alert to potential threats.The notion of "vigilant visuality" (Amoore 2007) does not sufficiently capture the forms of subjectivity being produced here.What is rendered necessary of staff by the Inquiry, as it foreshadows the forthcoming Protect Duty, is not mere watchful vigilance: they must instead pioneer, and enter new ground, in order to open up space for more security negotiations.Rather than vigilant or anxious watchfulness, the desirable subjectivity here is an embodied, watchful anxiousness.This anxiety, diffused through an engaged drive "to identify suspicious activity", must be constantly mobilizedparticularly in the absence of danger (Massumi 2015).This hyper-vigilance, or state of anxiety, is produced by a range of more fully embodied practices, as we will see later in this section. 
Rearticulating anxious vigilance or watchfulness to vigilant or watchful anxiousness is necessary to illuminate how the visual gaze is displaced as a tool of security by an altogether more affectedor emotionally-chargedbody.We can observe this displacement of visuality by analysing how the Inquiry report describes the performance of alternative, desirable modes of staff conduct.Remembering that new imposed duties under a Protect Duty will affect venues where danger from terrorism is almost always perpetually absent, the report maintains that: In order for necessary security procedures to be maintained, each person needs to be reminded of the counter-terrorism aspect of their activities.The message that counter-terrorism measures are vital needs to be constantly reinforced… Those giving the warning need to be aware of [staff members becoming]… "desensitized"… and must try to refresh the message so that it is sufficiently updated and relevant to attract the attention of the listener (Manchester Arena Inquiry 2021: 151,152). Persistently reminding staff of the threat of a terrorist attackand the role of staff in its preventionis situated by the Inquiry as a necessary exercise, which is precisely because of the perpetual absence of such danger in the embodied experience of nearly every worker around the country.The necessity that listeners' attention is redirected towards this hypothetical threat through creative means highlights the disconnect between experience and the imaginable.Because of the constant safety from terrorism at essentially all public venues across the country, the message that remakes watchfulness into anxiousness requires more than just the eyes: the whole body must be integrated to ensure the potential threat can be remembered in ways that affect how security operatives perform their duties (Purnell 2021: 46-47). Across the Inquiry report we see how this shift, from watchful eyes to a triggered whole-of-body, is cultivated also by an array of rituals and practices that situate local operatives within the nationalrather than local or experiencedthreat imaginary.Attentive anxiety about an abstract (but always possible) danger is obtained not just with more engaging messages from managers and trainers, but through physical, embodied immersion of routines and paperwork.A number of suggestions are made in the Inquiry report about new practices that might be undertaken by staff to reinforce this anxious positionality.For example, the Inquiry recommends that formal riskassessments are made statutory in each venue, to encourage behavioural change: It was suggested during the evidence that [conducting terrorism risk-assessments] was unnecessary, as everyone knew the threat level of a terrorist attack and would have regard to it in the way they behaved.I do not agree.While in theory that may be true, the discipline of undertaking a risk-assessment will assist in keeping the threat of terrorism at the forefront of the minds of those who prepare for the event (emphasis added; Manchester Arena Inquiry 2021: 150). 
The italicized phrase in the extract highlights the recognition in the report of how risk-assessment and a multitude of other embodied practices might produce a more watchfully-anxious staff (including those in the CCTV room). The mere discipline of performing tasks itself generates not only new awareness of possible risks but a more enhanced and embodied experience of alertness. Amoore writes (2009: 134) that "the flat surface of the [CCTV] screen… is given depth by the layers and leaves of data"; in other words, the meaning extracted from the screen's flat surface is generated by watching officials. It is important to note that the embodied practices and conduct of staff situate this watching (Purnell 2021): the observation does not just begin when staff judge the images on the screen. Instead, the depth and meaning of the CCTV screen exists amidst broader societal conceptions of risk and security (Younis and Jadhav 2020), and is (re)produced through drills and training of venue staff. As Crary writes, attentiveness is not "primarily concerned with [simply] looking… but rather with the construction of conditions that individuate, immobilize, and separate subjects" (1999, 74). The Inquiry, in mapping the forthcoming Protect Duty, therefore advises more training of staff, for example in hostile reconnaissance spotting (more on this in the following section). Such drills and training situate those tasked with watching further into ongoing embodiments of (racialized) security. In other words, the direction of the (in)security operative's attention does not just concern visualities, but also concerns social norms, cultural practices, and other embodied modes of conduct. This makes rehearsal an important part of processes of disaster management (Anderson and Adey 2011), not only in the form of response and recovery drills but in other embodied rituals, like the filling out of risk-assessment forms and the undertaking of training. As we can see from the quote above, the performance of these routines is articulated as an important component of more rigorous security procedures (Manchester Arena Inquiry 2021: 77). It is, according to the report, only through far more rigorously-embodied procedures that any meaningful security (through the engaged watching of CCTV, in this instance) might occur, precisely because this monitoring is now characterized by more engaged nervousness, alertness, and insecurity. The filling in of paperwork, the linking of risk-assessments to performed practice, and other rituals are all purposed, in the terms laid out by the Inquiry, to remake the body as an entity fuelled by anxiety and insecurity that can be watchful, and which can remain vigilant and alert, in the continuing absence of threat.
It should be noted that messaging from managers, and these embodied practices of filling in paperwork, are not primarily centred around the material production of security.Instead, they are processes through which legally-defensible security procedures can be generated.This topicthe production of defensible working practices rather than material reductions in violenceis covered more extensively later in the paper.For now, it is important to note that the idea of embedding routines appears more crucial to the Inquiry's purposes than interrupting material violence itself.In the following extract, no connection is made to how these practices might produce security: Instead, the overwhelming emphasis on meticulous routines reveals a certain disinterest in the materialities of security.The extract is taken from a section within the Inquiry report about how security procedures between the British Transport Police, and SMG and Showsec (two companies responsible for providing security at the Arena), might be better integrated.It tells of how routine itself will generate new, creative ways of knowing 'risk' through even more entangled security networks: The discipline of creating, updating and working to a written plan is likely to have uncovered further deficiencies in BTP's approach to policing events at the Arena.It would have provided an opportunity to reflect upon and develop arrangements for collaborative working with SMG and Showsec before, during and after events.This would have strengthened the relationship between the three organisations and ensured that there was effective communication, coordination and co-operation (Manchester Arena Inquiry 2021: 141). The banalization and bureaucratization of counter-terrorism is patent here (on issues of banality in counter-terrorism, see: Pettinger 2020).The routinized practices of security workers evident in the extract is emphasized consistently across the report, in place of how workers might engage with potential threat objects like a bag of wires or a person with a possible weapon.The reframing of watchfulness to anxiousness is necessary to highlight how the process of being made constantly suspiciousthrough the performance of security exercises like the ones in this quoteis a central theme of the Manchester Arena Inquiry report.The disinterest in material security relative to embodied anxiousness was manifested particularly where the report considers how venues' riskassessment processes might be reconstituted: One option is to remove consideration of likelihood from the process altogether.This will result in a focus on what can be done without providing for an opportunity for the thought, 'It will not happen to me', entering the process (Manchester Arena Inquiry 2021, 163). 
Destabilizing notions of likelihood from risk-assessment processes reveals the contested place of the body and its local experience (Purnell 2021) in the remaking of security knowledge and praxis. "Terrorist" violence, being almost a statistical impossibility at each venue, must be remembered by each worker made responsible for security in the forthcoming Protect Duty. The notion of diffuse responsibility for counter-terrorism has somewhat broad support, as we can see by looking at the Protect Duty's consultation report. In line with claims made across the paper, it shows a significant perception amongst those potentially affected by the Duty that terrorism can occur anywhere, to the point that nearly one third (28%) of respondents recommended the Duty should be implemented at every public venue regardless of size (Home Office 2023). The Inquiry, in proffering potential policy options here, makes evident the extent to which workers must resist succumbing to their experience, and instead embody anxiety and insecurity, so that even more integrated security procedures can occur (Mythen and Walklate 2008). Massumi (2015) writes that the space within which conflict takes place is less about physical materialities, and much more about the perceptive and temporal. The Manchester Arena Inquiry report illuminates how new sites of security (are made to) emerge, and how these spaces are negotiated through remaking the body and its relation to its environment. Importantly, work like Ontopower (Massumi 2015), as with much other existing scholarship on security, writes out the reproduction of racial orders (for a comprehensive discussion on this issue see: Ali 2020). As we will see in the following section, where the pursuit by security operatives of a person rendered Asian and male was commended in the Inquiry for its own sake, what is able to count as risk functions through racialized and gendered parameters.

By analysing the scope and recommendations of the Inquiry, therefore, we find how the post-disaster investigation situates proactive, engaged anxiety as constitutive of security. Watching for the purposes of security is therefore not just (or even primarily) a visual experience, but is situated within routinized and embodied rituals. Listening to messages reminding staff of terrorism risk amidst its perpetual absence, undertaking training, and writing risk-assessment forms to trigger the body's anxiousness are all central to how the Manchester Arena Inquiry report narrates enhanced terrorism-prevention. The security operative is rendered a compromised agent: at once deployed to enact security yet simultaneously productive of perennial insecurity (Massumi 2015). That security operatives must become a sort of infiltrator, and that bodies must become detached from their local knowledge, reveals the place of the Inquiry in the (re)production of anxious structures and rationalities that in turn require soothing (Pallister-Wilkins 2021).
Walking and moving

The Manchester Arena Inquiry also consistently asserted that processes should in the future be followed more stringently regarding the movement of staff around the venue, especially through the use of patrols. This second section analyses how movement of the body through (local) space is situated as a key mechanism through which greater security might be advanced. We will see how the patrolling body in turn destabilizes local space, leaving a trail of insecurity in its wake, in turn rendering the future patrol over that space essential. The physical movement of staff, according to the Inquiry report, constitutes an "important measure" in the prevention of terrorism (Manchester Arena Inquiry 2021: 92). Staff walking around a venue generates opportunities for more security encounters, through their greater exposure to the space through which they traverse. In effect, whilst the Inquiry acknowledges that technological options exist to enhance counter-terrorism, such as improving communications devices, it stipulates the importance of the body's proactive engagement through space in securing a public venue. What does this articulation of counter-terrorism responsibility do to the body, and what becomes of the space in which this more mobile and (in)security-engaged body exists (Purnell 2021)? With (in)security generated through patrols and movement of staff within venue spaces, the local is subsequently reconstituted as a critical, and always insecure, site where national terrorism prevention must always be enacted.

With over 100 references to the word "patrol", we can easily see the centrality of the body and its relation to space across the Manchester Arena Inquiry report. The report identifies, along with more engaged (or embodied-anxious) use of CCTV, how "regular and thorough patrols might have prevented, or reduced, the impact of an explosion in the City Room" (Manchester Arena Inquiry 2021: 75). The report also outlines how the bomber conducted hostile reconnaissance in the days preceding the attack, in a Blind Spot not covered by CCTV. The report claims that greater visibility of the spot through increased patrols would have meant the bomber's "activity would have been identified" (Manchester Arena Inquiry 2021: 19). Rather than hypothesize about the validity of such claims, this paper instead reveals the place of an inquiry in legitimating the body's mobilization, displacement, and destabilization, in order that more opportunities for (in)security encounters might be produced. Walking, and movement itself, is a highly political act, having many associations with issues of security. The (in)ability to walk and move is implicated by the presence of borders, prison walls, and guards (Purnell 2021: 54-55). Jonathan Skinner (2016) reminds us of the embodied implications of walking, and writes that it is regularly used by social and protest movements as an act of resistance on the streets. Marching has also been an integral feature of armies engaging in war, including to disperse protests on the streets. In other words, walking within a venue's perimeter to enact security is far more than just a routine to collect, interpret, and act on data: it performatively situates the worker within ongoing relations of security, as a "habituating spatio-temporal practice" (Skinner 2016: 26).
The Inquiry's recommendation that staff in effect walk miles during a shift, around and between possible terrorists, to enact counter-terrorism security (Mgbeoji 2018) requires the committed engagement of a worker's whole body.

Local venue security workers walking around a venue therefore become, and are always in the process of becoming (Massey 2005; Butler 2006), engaged counter-terrorism officials. By making embodied judgments on the threat-potential of others inhabiting local venues (through the walking towards, questioning of, and confronting of possible terrorists), their identity becomes more meaningfully rearticulated around this responsibility. As the Protect Duty moves towards becoming enshrined in law, workers walking with the statutory responsibility to enact counter-terrorism, along with the concomitant requirement to approach suspects and search bags for bombs, co-constitutes duty-and-body: put simply, the very act of patrolling as a counter-terrorism officer (previously just a local venue security operative) functions "to retain interest at the end of a long day and make real the seriousness of the imagined crisis by creating a pressurised situation" (Anderson and Adey 2011: 1092). Through the patrol, workers operate on the borders of peace and conflict. The performance of counter-terrorism officers patrolling, detached from material likelihood of experiencing violence, reconfigures not only themselves but the space within which they move (Massey 2005: 10; also see: Butler 2006), as we will now see.

How space is engaged with by the body is not just momentary, nor does it stop with the body. Echoes of the activity in a space in turn rewrite the identity of that space/place, making it new (Purnell 2021). Space is always being (re)produced through constant and dynamic negotiations within, through, and about that space (Massey 2005). Intangible components of an environment have significant implications for how particular places are perceived, experienced, and interacted with (Malpas 1999; Coaffee et al. 2009). The emotion of venue workers enacting counter-terrorism priorities is mobilized (Massumi 2015) by them walking around and, with their legs and eyes and hands, investigating possible national security threats. Echoes of (in)security begin to resound, unsettling the space and others' interactions with it. As Purnell writes, "We enter and exit spaces filled with emotion as our very presence and activities simultaneously affect the atmosphere" (2021: 46). The compromised and newly-anxious body, as recommended by the Inquiry, therefore compromises the space through which it moves. Otherwise safe or secure-enough places are destabilized as perhaps-insecure and as filled with destructive potential, by the marching of the counter-terrorism officer hunting for insurgents amidst shoppers and mothers harried by their children. This space is made and remade by the embodied anxiety of counter-terrorism operatives patrolling through the venue, having been triggered by new training stipulations, the filling out of risk-assessment forms, and their movement across the venue floor looking for potential threats.
The Inquiry's recommendations about the renegotiation of space have significant temporal implications: as trails of insecurity are left in the counter-terrorism officer's wake, it becomes paramount to impose new (in)security processes in that space. As the anxious body moves through a space and into another one (from the main floorspace, up the stairs, behind the bins), it must always cycle back and return to the previously-occupied space, precisely because it has become always-insecure by the counter-terrorism officer having occupied it. Anderson and Adey's analysis of the function of disaster performance is compelling when they argue, "By making futures present at the level of affect, exercises function as techniques of equivalence that enable future disruptive events to be governed" (2011: 1092). However, the purpose of the exercise (patrolling, in this case) can perhaps be even more powerfully articulated around the production of certain forms of subjectivity than the governing of potential disaster. After all, the national disaster is statistically an impossibility at each local venue, and on the particular duty of each officer.

Discussing the implications of invisible security, Coaffee, O'Hare, and Hawkesworth remark that the effects are highly racialized (2009; also see: Rogers 2012). Possible insurgents identified by the counter-terrorism officer are not the harried, white mother. Instead, threat is observed through racialized and gendered frames (Ali 2020; Sian 2017; Younis and Jadhav 2020). Indeed, the Inquiry report referenced how a potentially suspicious body entered the Arena a few days before the attack took place: this person was racialized and gendered "an Asian male 'acting very suspiciously wearing all black with a large black bag'" (Manchester Arena Inquiry 2021: 111). The person was followed back to the train station and monitored until they departed on a train. Although this person was identified as not being the bomber who attacked the venue days afterwards, and the encounter was therefore materially useless in intelligence terms, the pursuit of this possible threat object was lauded in the report simply for providing opportunities to practice "greater awareness [and] vigilan[ce]" (Manchester Arena Inquiry 2021: 111-113). Younis and Jadhav write of the way in which associations of Muslimness are made visible and quickly hidden as signifiers of terrorism risk, enabling the performative "colour-blindness [of security to be] maint[ained]" (2020: 613). In the context of the Manchester Arena Inquiry, a person was inscribed with a racialized, gendered subjectivity in the context of threat management, a practice praised on its own merit irrespective of a bomb being detonated. Through this moment we can see that the effect of writing (in)security onto local bodies-and-spaces is therefore not (just) concerned with the preparation for a material disaster, but with the generation of racialized encounters in local spaces through bodily movement, and the production of even more (racialized) insecurity.
Just as the use of space is always being negotiated, ownership and responsibility over space dynamically emerges throughout the Inquiry's proceedings. Extensions of (in)security space are often inscribed by physical markers, such as walls, police tape, and the placement of baggage checks. The Inquiry report discusses how making physical extensions to the security perimeter might have averted the attack, given a number of hypotheticals:

There existed the opportunity for SMG to make hostile reconnaissance more difficult for [the bomber] during events by pushing out the security perimeter of the security operation. This could have been a missed opportunity, depending on how the new security perimeter operated. It may have had the effect of deterring [the bomber] from attacking the Arena… Setting aside the issue of the perimeter, had things been done better by SMG and Showsec, and had BTP officers been more alert to the possibility of hostile reconnaissance, the prospect of detecting it would have been increased (Manchester Arena Inquiry 2021: 14).

The reliance on counterfactuals in this extract highlights how borders are never secure, but are instead characterized by insecurity (Purnell 2021). The localized embodiment of the nation state manning new borders in these local spaces is therefore, unsurprisingly, characterized by perennial anxiety and a need to calibrate and recalibrate possible risk-aversion measures. The movement of borders and perimeters is a tool through which contemporary security is configured. There is always more to do, and as Mythen and Walklate argue, it becomes "difficult… to identify safe spaces" (2008: 225). This performance, the calibration and recalibration of local details within a system characterized by insecurity, produces subjects that can less easily agitate for systemic, structural reform. They become consumed with whether the perimeter was extended far enough in that moment, with what effects a week's extra training might have on CCTV operators, and with capitalizing on the identification of racialized and gendered bodies (remembering the pursuit of the person rendered "Asian male") in producing more security. The localizing of national security priorities provoked by the Inquiry in its pursuit to inform policy thereby short-circuits the emergence of alternative ways of seeing and knowing, inscribing the present as good-enough. Writing about "post truth", Adébísí's analysis is compelling in considering the extension of (in)security space, and how this experience with insecurity is well-known around the world:

…who live in what is designated the Global South have always tasted the waters of [insecurity], we have bathed in it, been immersed in it. What is startling to us now is that the frontiers of [insecurity] are moving north, and those frontiers are moving very, very rapidly and being dispersed widely (Adébísí 2019).

Through its juridical individualism, its rejection of probability judgments, and its focus on local contexts, the Inquiry disperses these frontiers by mobilizing local workers and destabilizing their engagement with their environments.
This section has examined the forms of subjectivity produced by the way in which the Manchester Arena Inquiry encourages staff to move through space. We see how local venue workers, patrolling across venue floors tasked with counter-terrorism duties, are situated as crucial nodes of security. Complemented by physical bollards, security perimeters, and bag checks, the Inquiry provokes staff to embody a perennial insecurity amidst an ongoing absence of danger, reconstituting local space as the crux of security politics. This emphasis on localization privileges certain forms of knowing 'risk': other forms of violence are excluded from the conversation around what constitutes 'terrorism' risk through the performance of these daily patrols (Thomas 2017). As forensic investigations of security failures take place on this local basis, the ability to consider structural causes of violence is foreclosed.

Explaining and confessing

This final section interrogates the place of the Inquiry in localizing national security with regard to communicative processes. It outlines how the Inquiry demands that staff engage in rituals of explanation, being required to provide intricate defences for their every thought and movement. In examining the Inquiry's practice of making local workers speak, the section highlights how principles of legal defensibility, rather than accountability, are central to the production of security. Prioritizing the performativity of these workers positions terrorism as a local problem, solvable by making local counter-terrorism officials justify their every move under oath. Looking at how communication is portrayed in the Inquiry, and how a resilience expert responds to its articulations of responsibility, the section highlights contestations over this security knowledge.

As noted earlier, the productive capacity of routinized procedures themselves is a consistent theme of contemporary security, which can be identified in the Manchester Arena Inquiry report, whether that be through developing risk-assessment protocols, training, or walking around the venue. The role of communication is similarly identified by the Inquiry as critical in generating more integrated working practices. More information is articulated as necessary throughout the Inquiry's report, with communication and information-flow failures described across the document as contributing to the 2017 attack. However, these flows of knowledge are not just streams of data, but simultaneously produce more rigorous processes:

Involvement of Showsec would have brought more benefits than just ensuring accurate information. It would have embedded Showsec into a process focused on counter-terrorism. It would have caused Showsec to think more about counter-terrorism. It would have led to more discussion about counter-terrorism between SMG and Showsec… The greater the communication, the better the coordination will be (Manchester Arena Inquiry 2021: 67).
The report here is asserting that more security can only be attained through more entangled, interconnected networks which operate in dynamic and diffuse ways at the local level, where ultimately each operative performs a vital communicative role. The point being argued is not about whether these more integrated networks actually generate material security, but about the processes through which security is being articulated. The scale of communication, and the pace at which information flows upwards-and-downwards through a security network, is central in making "the machinery of war self-correcting, giving it a built-in capacity to evolve, [and where] the evolutionary feedback must operate in as close to real time as possible" (Massumi 2015: 96). The Inquiry encourages the reporting of suspicious behaviour, recognizing that it might be frustrating that so many false alarms are created. However, it asserts that even these false alarms can be productive learning points for staff:

Where hostile reconnaissance is suspected it needs to be properly recorded and reported to the police. The police should investigate it and report back. Briefings to security staff need to include details of the suspected hostile reconnaissance. This is so that staff know what has happened and know what to look out for (Manchester Arena Inquiry 2021: 152).

Remembering the racialized formulation of the imaginary of threat (Ali 2020; Sian 2017; Younis and Jadhav 2020), the recording and reporting of possible hostile reconnaissance simply folds the structural nature of racialized imaginaries back onto local contexts (for a more comprehensive discussion on this issue see: Younis and Jadhav 2020). As the importance of in-the-moment and forensic flows of local information is prioritized, the local context becomes the pivotal site of security.
We see across the Inquiry that local workers within every venue must defend the logic of preemptive counter-terrorism and their responsibility for it, in their own words. In turn, we see how the public inquiry actively writes the local as the crux of security politics. Dozens of workers, all with some responsibility for enacting security within Manchester Arena, gave evidence at the Inquiry, testifying under oath and detailing minute-by-minute accounts of their actions. What did they see in the seconds, minutes, and hours preceding the attack; what did they think about what they saw; what was their mindset in that moment; why were they standing where they were standing; how else could they have moved; how else could they have perceived, in ways that might have contributed to the attack being less deadly? The explanations of local operatives to the Inquiry, with their every move and perception confessed in full display, situate the frontlines of conflict as the perceptive and the local (Butler 1993: 13). This positioning of the locus of risk into the cognition and bodies of local venue workers makes absent any focus on the role of foreign policy, relative deprivation, or state violence in producing violence. The voices of senior security officials who attest that "terrorism" risk (also) emanates from unjust Government policies are therefore marginalized (Manningham-Buller, quoted in Norton-Taylor 2010). Interviewee 1, a disaster recovery expert who assisted the Government in responding to the Manchester Arena attack, noted there exists "very limited space" to question mainstream terrorism narratives in the aftermath of an attack (Interviewee 1; also see: Zulaika and Douglass 1996). This is particularly pertinent to remember considering the dynamics of power between the questioning of a public inquiry, and the workers being required to speak. What it means for local officials to embody national security through justifying their micro-decisions retrospectively under oath has significant bodily implications, as we will now see.
The sometimes-excoriating investigation by the Inquiry of local workers' performance prior to the Arena attack led the panel to make conclusions contrary to its own assertion that "the responsibility for the events of 22nd May 2017 lies with Salman and Hashem Abedi, his younger brother" (Manchester Arena Inquiry 2021: 1). As the Inquiry explored how attacks might be less damaging in future, it interrogated emergency workers who responded to the bombing in 2017. This approach provoked the ire of some resilience experts. Speaking with a civil protection and resilience expert, who was consulted for the parallel Kerslake Inquiry (which investigated the emergency service response to the attack), I asked him what he thought of the Manchester Arena Inquiry. He stopped for a moment, seemingly holding back tears, and said:

…always fill up, because people did their best… Now, four years after the event, you've got people going through this Inquiry process. And Jesus Christ they're pulling out stuff we [in Kerslake] didn't know about. Some of this stuff we were told [during Kerslake]: untruths, or stretched truths, things that weren't right that people knew were wrong… And it's coming out in the Inquiry, because people are under oath in the Inquiry. They weren't under oath when they were talking to us. But fucking hell, they were doing their best! (Interviewee 2)

We can see in this excerpt the implications of making local workers explain themselves: the stress, humiliation, and embarrassment of the detailed interrogation produce tears of empathy from an expert in disaster and disaster response. Reid and Chandler argue that the responsibilized subject is a "degraded" subject, whose agency and personhood are compromised (2016). Processes of localization implicate the body, making it responsible for the anxiety of the global. Nowhere is it considered that attacks like the 2017 Manchester Arena bombing might be an inevitable consequence of British domestic or foreign policy, despite warnings to the same from key officials within the UK's intelligence services. Instead, the paradigm adopted by the Inquiry assumes the position that "the events of 22nd May 2017 demonstrate so devastatingly that [future attacks] must be prevented" (emphasis added: Manchester Arena Inquiry 2021: 79). As Interviewee 2 remarked, this poses a particular problem for venue workers largely in "minimum-paid jobs [because] there's no empowerment [with that wage]."
The requirement to make local venue workers speak does not make them democratically accountable for their actions within the system, nor does this mechanism integrate accountability into the process of investigation. The Protect stream of the UK's counter-terrorism strategy has a sibling policy, Prevent, which was investigated by the Communities and Local Government Committee. The committee critiqued Prevent, recommending it should foster "greater empowerment and civic engagement with democratic institutions, to strengthen the interaction and engagement with society not only of Muslims, but of other excluded groups" (2010: 3-4). Rather than develop Protect around similar themes of inclusivity, as suggested by the Committee, the Manchester Arena Inquiry report (which, again, prefigures the forthcoming Protect Duty) valorizes the pursuit of people racialized and gendered as "Asian male[s]" for the sake of pursuit, as mentioned above. Despite the man demonstrating no patent threat, he was followed out of the venue and into Victoria Station, and observed on the platform until he departed on a train. This practice, pursuing citizens outside of the venue, was recommended as an approach for how it encourages greater security-mindedness (Manchester Arena Inquiry 2021: 111-113). Concerned that this conduct was situated by the Inquiry report as desirable, and that such norms might compromise local workers, I queried Interviewee 2 about whether conversations about accountability and transparency took place in forums organizing preparedness for and response to disaster. He replied:

I think a useful word is defensibility. Because what you're dealing with, emergency response particularly, which is coming out in both inquiries at the moment, is the defensibility of your actions. I think that's slightly different from accountability. Because defensibility allows you that space to position your actions within your own experience [against] a third person's expectations against what you should have known. It can be understandable why you did something. Which is very different from formal accountability which just sounds like a sentence written in a Standard Operating Procedure to me (Interviewee 2).

Although the interviewee here is technically talking about emergency response rather than security, the notion of accountability appears to be written out of contemporary disaster planning frameworks, superseded by the ability of an operative to defend their position in relation only to security. That a thoughtful and highly-experienced resilience expert considers "accountability" essentially useless jargon, in the context of workers' explanations, shows how far contemporary security policy is from the recommendations of the committee that examined Prevent. Centring the (degraded) security operative's own justification, without processes to consider how particular, often racialized, groups might be affected (Coaffee et al.
2009), renders accountability a distraction. The radical focus on the microscopic, and not the political, evident in the Inquiry's forensic interrogation of workers' minute-by-minute movements, performatively situates the local space as responsible for preventing violence, because broader questions about the process of security-making are necessarily not being asked. Making it "understandable why [an operative] did something" (as the interviewee mentioned) therefore constitutes a mechanism through which localization is generated, upheld, and normalized. Yet these workers remain on a minimum wage: in other words, as implied by the interviewee above, local workers exist without empowerment proportional to the expectations placed on them by such articulations of security.

We have seen how the Inquiry compromises local workers by requiring their explanations for their every perception and movement to enact counter-terrorism security, amidst the perpetual absence of danger faced by staff. These workers must rewrite the local space as the only space worth examining, in their own words, under oath. These "confessional practices… guide the subjectivity" of venue staff, repositioning them in the midst of the production of security as counter-terrorism officers, as they reproduce risk narratives through their testimony under oath (Elshimi 2014: 116; also see Mills 1995).

Conclusion

The article has assessed how the Manchester Arena Inquiry navigated knowledge about national security through forensic analysis of the minutiae around the attack. By situating specific moments of inattention as missed opportunities, the Inquiry contributes to governmental consultation on a 'Protect Duty' by recommending the statutory diffusion of counter-terrorism responsibilities to local security practitioners and venue staff. The Inquiry conceived of security through the identification and eradication of procedural shortcomings, in a context where notions of likelihood and probability must be written out. These procedural shortcomings should be rectified, the Inquiry suggests, through affective and embodied demeanours of security where staff must be constantly reminded of, and destabilized in order to enact, their vital role in counter-terrorism.
In the first section we saw how the local worker is rearticulated from an observer of CCTV screens to an always-anxious body that must actively identify risk. By training these workers through a myriad of embodied rituals, like creative training exercises and writing risk-assessment forms, the professional responsibility of the CCTV worker is constituted around a security trigger. Notions of likelihood and probability must be erased for ritualized counter-terrorism duties to make sense. Rather than broader contexts of (racialized) risk being opened up for interrogation, the local worker must simply respond through muscle memory to movements and behaviours depicted on the screen. Similarly, the Inquiry recommended that workers must move through the venue, patrolling space, performatively constituting that venue as (in)secure. As they move, they affect the space in turn (Purnell 2021), walking up to and around potential terrorists, their bodies on the borders of conflict. Patrolling becomes, in the Inquiry's recommendations, an embodied solution to the 'shortcoming' of static personnel not fully engaging with the need for always-more security. In the final section, the paper explored the contested, confessional dynamics of the Inquiry which contribute to the constitution of event space as a key site of the production of security. Low-paid workers were made to defend under oath their every perception within the local space, legitimizing the performative constitution of the venue as the locus of security.

Recommendations made across the Inquiry will soon be reified in the Protect Duty. The articulations of security across the Manchester Arena Inquiry report, and therefore the principles of the forthcoming Protect Duty, function to uphold structural continuity (Wall, Middleton, and Shah 2021). Situating threat as emerging only in local spaces and as a result of local inaction forecloses space to consider violence as emanating from political and systemic injustices. The role of states and companies, or broader structural violence, in producing economic and environmental suffering is more easily written out of public narratives through the intricate fixation on local workers and spaces as the crucial node in the production of national security. The Manchester Arena Inquiry, in prefiguring the forthcoming Protect Duty, therefore reveals the public inquiry as an important, and contested, site in the (re)making of localized security knowledge and praxis.

Interviews Cited

Interviewee 1. Disaster Response and Recovery Expert, Online Interview, 8 Sept 2021
Interviewee 2. Resilience and Civil Protection Expert, Online Interview, 24 Sept 2021
An urgent call to think globally and act locally on landfill disposable plastics under and after the COVID-19 pandemic: Pollution prevention and technological (bio)remediation solutions

Introduction

Since early 2020, worldwide public health and the economy have been severely affected by the COVID-19 pandemic, an acute respiratory disease caused by a highly infectious novel coronavirus, SARS-CoV-2 (also known as coronavirus 2) [1]. By July 2021, the COVID-19 disease had affected over 187 million people and caused 4.0 million deaths worldwide [2]. During this health crisis, the protection of lives and livelihoods has become a priority for governmental decisions and actions. The World Health Organisation (WHO), Centres for Disease Control and Prevention, and local governments have announced several guidelines to reduce the spread and health risks associated with COVID-19, including frequent home, regional or state-wide quarantine, restricted travelling, handwashing, and social distancing [3,4]. Besides, the use of personal protective equipment (PPE) such as surgical and medical masks, non-medical face masks (including self-made or commercial masks of cloth, cotton, among others) and face shields was highly recommended for ordinary citizens, whereas other PPE (including gloves and goggles) became mandatory for frontline health workers [5]. As COVID-19 intensified all over the world, the use and consumption of PPE and other single-use plastics (SUP) increased drastically, resulting in a massive upstream PPE supply chain disruption, a rollback of prevailing SUP bans or restrictions in several countries (e.g., Canada, some states in the U.S.A.), and downstream waste disposal challenges [5]. Natural environments (e.g., beaches, rivers, seas), which at the beginning of the pandemic benefited from reduced litter and improved water quality owing to decreased tourism [6], are now becoming tainted with COVID-related waste [7]. A widely known example concerns the dozens of disposable masks observed in a 100 m stretch of beach on the Soko Islands, Hong Kong (compared with only one or two items observed per month) [8]. In Kenya, this litter type was present on beaches at a concentration of 0.1 items per m2 and represented 0.43% of total items, but reached 55.1% on urban beaches [9]. Alongside, a significant share of COVID-19 plastic waste (particularly PPE, gloves, and plastic materials discarded by ordinary citizens as mixed waste) is being landfilled [10,11], instead of being incinerated as recommended/prioritised by several international and national organisations (as reviewed by Parashar et al. [12]). Landfills are, thus, becoming overloaded with COVID-19 waste, which (in the long run) can result in space crunch, illegal dumping, and the release of toxic pollutants [13]. This is of particular concern in developing countries, such as Cambodia, the Philippines, India, and Indonesia, where uncontrolled landfilling and indiscriminate dumping were prevailing before COVID-19 [11,14,15]. In this sense, this article aims to address the challenges raised by the pandemic and post-pandemic world for landfills, including potential environmental and health implications that might drive us apart from the 2030 U.N. sustainable goals. Also, it highlights some innovative mitigation technologies and improved management strategies that can pave the way to environmental recovery. Such an integrative but focused discussion (on landfills) has been missing in recent publications, e.g., [5,6,9,12,13,16,17].
Plastic waste generation during the COVID-19 pandemic and implications for landfills

Lebreton and Andrady [18] predicted a world production above 200 megatons of municipal plastic waste in 2020 under a business-as-usual scenario for plastic consumption, of which 43% would remain mismanaged (i.e., ending up in landfills, open dumps or littered in natural environments). However, the COVID-19 pandemic induced a significant change in waste dynamics and composition, mostly due to a considerable worldwide increment in infectious waste [12,19]. In March 2020, the WHO estimated a global demand of 89 million facemasks per month [20], but that estimate was soon surpassed. One month after COVID-19 was declared a pandemic (i.e., in April 2020), Germany alone was demanding 17 million FFP facemasks and 45 million surgical masks per month [21]; and after eight months (in November), Germany's Federal Ministry of Health was preparing to distribute 290 million masks to healthcare facilities by 25th December to suppress the second wave [22]. In Italy, according to the information released on 12th November 2020 by the Extraordinary Commissioner on the free distribution to Regions and Autonomous Provinces, 1,040 million face masks were distributed to health personnel, law enforcement agencies, public service providers, Public Administration, nursing homes, local public transport and police, of which 57 million were used in seven days [23]. In the UK, 7,800 million PPE items were distributed from March to November 2020 to health and social care services, namely adult social care providers, wholesalers, community pharmacies, dentists, and local resilience forums [24][25][26]. Added to these numbers is the voluntary or mandatory use of PPE by the public outside healthcare services, which significantly impacts municipal solid waste (MSW) composition. In Saudi Arabia, facemask consumption has been estimated at 5,336-38,426 million per year [27]. In Asia, face mask use can reach 2,300 million per day, resulting in 16,659 tonnes of medical waste per day [16]. Another significant contribution to plastic waste generation during the pandemic is related to disposable plastics for COVID-19 diagnosis. For instance, worldwide, 15,000 tonnes of waste was generated from Polymerase Chain Reaction (PCR) tests up to August 2020, with 97% of PCR material referred for incineration [19]. Plastic waste generated in health/medical care facilities, laboratories and other contaminated health/social facilities should be treated and managed in accordance with the international/country/state law on hazardous waste (e.g., EU law on waste, especially Directive 2008/98/EC, articles 17, 23, 24, and 25 on hazardous waste), i.e., be incinerated/disinfected followed by safe disposal (e.g., sanitary landfilling of the ashes). Developed countries (or countries with high income and well-implemented/distributed incineration/waste-to-energy facilities) can manage COVID-19 medical waste properly. Some successful examples include South Korea, India, and Spain. After the first outbreak, the Ministry of Environment of South Korea released "the extraordinary measures for safe waste management and disposal", which included (among other guidelines) daily incineration of COVID-related waste (whereas before it could be stored for up to 7 days) [28].
China deployed mobile waste treatment stations along with a plan to convert industrial waste disposal plants into bio-medical waste (BMW) treatment facilities, given the absence of industrial activity during the lockdown period; whereas in Catalonia, Spain, the existing incineration facilities were put on the job of priority disposal of medical waste [28]. Conversely, other countries struggled to follow such proper procedures, disposing of such medical waste in landfills or open dumps [12]. Some Asian countries (e.g., Thailand, the Philippines, India) are known to dump solid wastes in open landfills due to the scarcity of resources for waste management, with an increased public health risk from the spread of infectious waste from the pandemic, in addition to the recurrent environmental problem [16]. Household solid waste (or municipal solid waste, MSW), on the other hand, should be double-bagged to be further incinerated (preferably) or, as the last resort, landfilled [3,12], with a minor share being potentially recycled after disinfection. Before the COVID-19 pandemic, landfilling was already the common practice of waste management all over the world, with developing (or low-income) countries presenting a higher landfill rate (Fig. 1) [29], as it remains an easy, low-tech, and low-cost method compared to incineration and recycling [30]. Thus, such a waste disposal route remains popular for MSW during the COVID-19 pandemic [31]. At the beginning of this health crisis, particularly during the lockdown, MSW decreased in most cities, particularly those with a higher tourism rate. For example, Milan (Italy) decreased MSW generation by 27.5% [32], and Barcelona (Spain) decreased it by 25.0% [33]. Nevertheless, the world plastic share in MSW and MSW generation seemed to increase as the economy started to re-establish itself still under the persistence of COVID-19. For example, in Romania, the MSW amount increased ten times from the 26th of February to 15th June 2020 [34]. This is not surprising as the worldwide demand for, use, and consumption of PPE remain high to prevent transmission. Based on the current consumption patterns, one can estimate a worst-case scenario of waste generation resulting from disposable facemask wearing. Considering a global population of 7,800 million, the average use of disposable masks by 73.7% of the population (based on updated information from Covid.wordometer.info), and the estimated need of 1 mask per inhabitant per day, this would translate into the necessity of 5,746 million disposable face masks per day, corresponding to 2,097 billion per year (see Table 1). Considering that PPE is mostly disposed of in mixed waste and applying a global landfilling rate of 42% [35,36], it is expected that 293 thousand tonnes per month, or 3,524 thousand tonnes per year, of disposable masks (4 g each) might be landfilled worldwide, with greater pressure upon developing countries (e.g., India, Bangladesh) (Table 1). The worldwide monthly estimation of mask-related waste generation for landfills is actually in line with the 4,312 tonnes of COVID-19 related wastes (mostly collected as mixed waste) in Romania from 25th February to 15th June 2020 (~39 tonnes per day; 1,176 tonnes per month, which, translated to the world population, would result in 478 thousand tonnes per month, with 206 thousand tonnes per month being landfilled) [34].
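A minimal arithmetic sketch of this worst-case estimate, together with the MSW-share comparison developed in the following paragraph, is given below. It uses only the constants quoted in this and the next paragraph (global population, the 73.7% mask-wearing share, 1 mask per inhabitant per day, 4 g per mask, a 42% landfilling rate, 2,010 million tonnes of global MSW and its 12% plastic share); the small differences from the figures in Table 1 come from rounding, and the code is an illustration rather than the authors' own calculation.

# Sketch of the worst-case disposable facemask waste estimate described above.
# Constants follow the text; rounding conventions are assumptions.

POPULATION = 7_800e6          # global population
MASK_WEARERS_SHARE = 0.737    # share of population assumed to wear a disposable mask
MASKS_PER_PERSON_DAY = 1      # 1 mask per inhabitant per day
MASK_MASS_G = 4.0             # grams per disposable mask
LANDFILL_RATE = 0.42          # global share of waste that is landfilled [35,36]

masks_per_day = POPULATION * MASK_WEARERS_SHARE * MASKS_PER_PERSON_DAY
masks_per_year = masks_per_day * 365

mass_per_year_tonnes = masks_per_year * MASK_MASS_G / 1e6          # g -> tonnes
landfilled_per_year_tonnes = mass_per_year_tonnes * LANDFILL_RATE
landfilled_per_month_tonnes = landfilled_per_year_tonnes / 12

print(f"Masks per day:            {masks_per_day / 1e6:,.0f} million")      # ~5,749 million
print(f"Masks per year:           {masks_per_year / 1e9:,.0f} billion")     # ~2,098 billion
print(f"Landfilled per year (t):  {landfilled_per_year_tonnes:,.0f}")       # ~3.5 million tonnes
print(f"Landfilled per month (t): {landfilled_per_month_tonnes:,.0f}")      # ~294 thousand tonnes

# The same logic extends to the MSW-share comparison discussed in the next paragraph.
GLOBAL_MSW_TONNES = 2_010e6        # global MSW generation (2017) [37]
PLASTIC_SHARE_OF_MSW = 0.12        # plastic share of MSW [38]

landfilled_msw_tonnes = GLOBAL_MSW_TONNES * LANDFILL_RATE                   # ~840 million tonnes
landfilled_plastic_tonnes = landfilled_msw_tonnes * PLASTIC_SHARE_OF_MSW    # ~101 million tonnes

print(f"PPE share of landfilled MSW:         {landfilled_per_year_tonnes / landfilled_msw_tonnes:.1%}")      # ~0.4%
print(f"PPE share of landfilled plastic MSW: {landfilled_per_year_tonnes / landfilled_plastic_tonnes:.1%}")  # ~3.5%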
Added to PPE consumption is the consumption of single-use plastics (SUP, particularly plastic packaging), which was projected to grow by 5.5% due to the pandemic response [37] as a result of the postponement or withdrawal of several national, state-wide and/or international plastic policies [5]. In 2017, about 2,010 million tonnes of MSW were generated globally [37]. Thus, assuming a global landfill rate of 42% (resulting in 840 million tonnes), the PPE contribution to MSW would only be approximately 0.4%. However, concerns arise when considering the plastic share of MSW. According to the World Bank [38], the global share of plastic waste in MSW is about 12%, resulting in a contribution of 3.5% of PPE to the plastic share of MSW being globally landfilled in 2020 (as sketched above). Notwithstanding, such a contribution is variable. Countries such as Sweden will likely have a PPE contribution of < 1% to the plastic share of their MSW (most of which was being recycled, mechanically or chemically); whereas in Portugal and Canada, such pressure can be higher, above 4% (Table 1). In addition, according to the World Bank [38], the global landfilling rate can be even higher and aggravated in the coming years if no mitigation action is taken. The increased pressure on landfilling (whether from reduced recycling activities, the increased use of plastic packaging, or the general use of PPE) may compromise sustainable development goals. For instance, the European Union targeted landfilling at a maximum of 10% of MSW and recycling at a minimum of 65% by 2030 [39]. However, in 2020 alone, illegal plastic waste disposal rose by 280% worldwide, and the global recycling rate is estimated to have decreased by 5.1% [40].

Public health implications of landfilling in the post-COVID world

Countries that relied on landfills as major disposal routes (e.g., Brazil, China, USA, India; Fig. 1) are, therefore, receiving intense loads of MSW daily (with a substantial contribution from PPE and SUP). Such intense loads can exhaust landfills' capacity, which likely results in space crunch, plastic leakage, and the leaching of toxic chemicals [13]. In addition, instead of attempting to reduce the amount of residues landfilled following a circular economy, the current pandemic may increase the need for more landfills, which requires increased land use and deeper encroachment on the natural world. A brief overview of landfills' environmental implications (which affect human and animal health) is depicted in Fig. 2.

Plastic leakage, dust generation, propensity for landfill fire

While environmental agencies recommend a daily coverage of residues in landfills (e.g., the Portuguese Environmental Agency [41]), this may not be possible in all cases around the world. Intense accumulation of waste in open landfills is known to provide breeding sites, burrows and a nutrient supply for opportunistic species (e.g., rats; [42]). For instance, white storks (Ciconia ciconia) have been reported to feed on landfill sites, including in the short moments between the arrival and coverage of waste, with municipal waste comprising 68.8% of the diet of these animals in Spain (Avila, Salamanca, Zamora) [43]. A considerable amount of landfill waste was also observed in overwintering gull species (herring gulls Larus smithsonianus, great black-backed gulls Larus marinus, and Iceland gulls Larus glaucoides) [44]. Thus, organisms relying on waste for food supply can end up entangled in or ingesting plastic waste, affecting their survival, feeding, health status, or fitness.
The frequent ingestion of landfilled waste by the overwintering seagulls (although most of it is regurgitated) has been associated with a significant decrease in their reproduction and significantly increased chemical body burdens [44]. This is not surprising as plastics can absorb and carry heavy metals, organic compounds [45], and pathogens [46]. Despite its activity under landfill conditions not being currently known, SARS-CoV-2 can persist on face masks for up to 21 days at 20 °C [47]. Thus, landfill waste can transport many contaminants and pathogens, posing a severe risk to animal and human health, especially when considering the role of larger organisms that may feed on this waste and act as carriers/vectors of pathogens. For instance, common seagulls in Porto, Portugal, are known reservoirs of multidrug-resistant Escherichia coli [48]. Public health hazards caused by open landfills are not limited to pathogens and adverse effects on biota. Landfilling generates dust and fires, contributing to unfavourable odour and air pollution around these sites [49]. Particulate matter with an aerodynamic diameter of > 30 μm (such as microfibres released from masks [50]) can be carried by wind up to 100 m from the source, whereas particles with diameters of 30-10 μm and < 10 μm can be deposited as far as 250-500 m and 1 km away, respectively [51]. As part of these particles, microplastics from landfills could be resuspended and contaminate nearby areas, an issue that requires more attention in the future [52]. Airborne microplastic contamination in large cities is already recognised, for instance, with outdoor air concentrations of 0-4.2 microplastic particles per m3 in Shanghai, mostly originating from textiles and the abrasion of plastics [53]. Long-term exposure to high concentrations of airborne microplastics, or exposure of susceptible individuals, may lead to airway or interstitial inflammatory responses in the lung, accompanied by dyspnoea [54]. Besides being a public health threat, these airborne microplastics can deposit and contaminate other matrices, such as soil and water. In Yantai, China, a yearly deposition of airborne microplastics of 23 trillion particles, or 0.9-1.4 tonnes, is expected along 100 km of coastline [55]. The contribution of landfill resuspension to airborne microplastics, and its impacts on public health and environmental contamination, have not yet been addressed and require further attention in future studies. In addition to resuspension and direct release, landfill fires can also contribute to air pollution. Landfill fires, caused by heat released from intense aerobic biologic activity, will likely increase soon due to global warming and the increasing loads of COVID-19 waste. The World Health Organization (WHO) estimates that air pollution exposure causes 7 million deaths annually [56]. Both dust emissions and landfill fires are already known to significantly harm the environment and human health due to emissions of heavy metals, dioxins, PCBs, and furans [30]. Therefore, further landfilling of wastes and the consequent generation of related air pollution in the surroundings can further exacerbate these numbers. Greater effects will be felt by communities surrounding landfilling sites, which may translate into an increased risk of low birth weight, congenital disabilities, and certain types of cancer [57].
Communities living near landfill sites are usually those with lower incomes, which are already burdened by many stressors (e.g., poor nutrition, lower access to health care), exacerbating social injustices.

Biogas and landfill leachates generation

Another environmental concern related to intensely landfilled waste is the formation of biogas and leachates. Biogas starts forming 2-3 years after waste landfilling due to waste degradation and depends on waste composition, environmental conditions, and landfill age [30]. This process emits a considerable amount of greenhouse gases (GHG; 1.9% of global GHG in 2016), although these emissions can be reduced with the efficient energy recovery facilities usually required (e.g., Directive 1999/31/EC). However, in most countries, particularly in developing countries, uncontrolled landfills prevail [11,14]. Thus, the environmental footprint of landfills will likely be aggravated in the post-COVID scenario. Disposable masks are mostly made with electrospun nanofibres from diverse polymeric materials (such as PP) and start losing properties (e.g., the static electricity that confers the original filtering performance) when exposed to, for instance, water or moisture, losing their integrity and releasing micro- and nano-fibres along with hazardous chemicals, as observed by Saliu et al. and Sullivan et al. [58,59]. The smaller the plastic particles (e.g., micro- and nano-sized), the higher their potential to be biodegraded by microorganisms, and such biodegradation processes release gases (mainly CO2 and, depending on the bio-based content, also CH4 and H2), likely contributing to landfill biogas. Recent studies support this hypothesis, with plastics biodegradation under simulated landfill conditions affecting biogas composition [60]. Thus, considering that the COVID-19 pandemic altered MSW, which now counts with a significant contribution of PPE, it is likely to affect both biogas and leachates. In addition to their production and transportation, PPE landfilling contributes to additional GHG release, which should be further addressed through a Life Cycle Assessment (LCA) to pursue more sustainable alternatives and practices. Leachates start forming after the first waste disposal and intense rainy seasons (particularly in poorly covered landfills), as they result not only from biodegradation processes but also from desorption/lixiviation from solid wastes (plastics, metals, among others) [60]. World landfills can release on average 5 m3 per hectare per day of severely contaminated leachates [30], and their composition often consists of nutrients (primarily nitrogen), pharmaceuticals, other organic compounds, heavy metals [61] and microplastics [62]. With billions of disposable masks (mostly composed of plastics) ending up in landfills, microplastic release will increase in the future. Disposable masks under mechanical abrasion (although in an aquatic medium) evidenced the release of thousands of microfibres along with leachable metals (i.e., lead up to 6.79 μg/L, cadmium up to 1.92 μg/L, antimony up to 393 μg/L, and copper up to 4.17 μg/L) [58]. In simulated landfill environments, plastic wastes composed of PP and of a PE and PP composite (PP being, in fact, the major component of disposable masks) attained a weight loss of up to 10% during approximately one year [63].
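The next paragraph translates a weight-loss figure of this kind into an estimate of the number of microplastic particles generated. A minimal back-of-envelope sketch of that arithmetic is given below; the spherical 7 μm particle geometry and a polypropylene density of about 0.91 g/cm3 are illustrative assumptions made here, so the sketch only reproduces the order of magnitude of the figure reported in the article's Supplementary Data rather than the authors' exact calculation.

import math

# Back-of-envelope estimate of microplastic particle numbers from landfilled masks.
# Inputs follow the text; particle geometry and PP density are illustrative assumptions.

LANDFILLED_MASKS_G = 3_524e3 * 1e6   # ~3,524 thousand tonnes per year, expressed in grams
WEIGHT_LOSS_FRACTION = 0.10          # ~10% weight loss over one year in simulated landfills [63]
PARTICLE_DIAMETER_CM = 7e-4          # 7 micrometres expressed in cm
PP_DENSITY_G_CM3 = 0.91              # assumed density of polypropylene

# Mass of plastic assumed to fragment entirely into 7-micrometre spheres
fragmented_mass_g = LANDFILLED_MASKS_G * WEIGHT_LOSS_FRACTION

# Mass of a single spherical particle
particle_volume_cm3 = (4.0 / 3.0) * math.pi * (PARTICLE_DIAMETER_CM / 2.0) ** 3
particle_mass_g = particle_volume_cm3 * PP_DENSITY_G_CM3

particles = fragmented_mass_g / particle_mass_g
print(f"Estimated particles per year: {particles:.1e}")   # ~2e21, same order of magnitude as the ~2.3e21 reported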
Thus, landfilled disposable masks (made of polypropylene, PP) will fragment into micro- and nano-plastics and degrade through fluctuating temperatures and pH, deep-seated fires, physical stress, and microbial activities [64], releasing, concomitantly, leachable hazardous chemicals. A PP piece that had been landfilled for 5 years revealed signs of colonisation by viable microorganisms, oxidation confirmed by carbonyl and hydroxyl indexes, increased crystallinity, delamination, surface cracks, and the formation of microplastics with diameters of 0.4-6.9 μm [65]. While conditions vary with landfills, waste mixtures, and each plastic type, a worst-case scenario of facemask landfilling can be estimated. The previously estimated 3,524 thousand tonnes per year of disposable masks (made mostly of PP) that might be landfilled worldwide (Table 1), considering a decomposition of 10% of their weight over a year [63], would generate about 2.3 × 10^21 microplastic particles (here assuming only the formation of particles 7 μm in size) after a year of landfilling (see Supplementary Data). Aside from hazardous chemicals, as previously mentioned, landfill leachates can contain considerable concentrations of pathogens, such as the avian influenza virus (H6N2), which can remain infective for 30 to > 600 days in landfill leachates [66]. Thus, being hydrophobic particles with a resistant carbon backbone, such small-sized microplastics can carry hazardous chemicals and pathogens [67], while supporting the growth of biofilms/microbiota with a high abundance of antibiotic resistance genes [68]. This fact will exacerbate the adverse effects of microplastics on biota (e.g., [69]), affecting ecosystem services and functioning, and human health, when released to the environment. Therefore, landfill leachates should be carefully processed to avoid aerosol formation during aeration or flushing in the leachate treatment plant [30], and to avoid the release of potentially contaminated microplastics. Several technologies are available to treat landfill leachates (as well as wastewaters), including advanced oxidative treatments (e.g., ozonisation), photocatalytic treatments, biological processes, and physical-chemical processes, among others [70]. Nevertheless, these can be ineffective against some small-sized microplastics carrying adsorbed contaminants/pathogens, which calls for more research and innovative technology [62], as explored in the next section. Without proper mitigation treatment, such emissions of pollutants (whether solid, gaseous, or liquid) produced at solid urban waste landfill sites can last approximately three decades or even centuries after the landfill site is closed [71,72], with continuous loads to the surrounding environments [30].

Geomorphological implications

If we consider the predicted 5.8 billion disposable facemasks consumed and discarded per day, of which 2.4 billion eventually end up in landfills, this will result in approximately 601 TIR containers being landfilled daily around the world (Table 1). In small but highly COVID-19-impacted countries such as Portugal, this would result in 2.23 TIR containers being landfilled daily. Along with the above-mentioned technogenic disasters (e.g., landfill fires, chemical substance leakage, among others), landfill overload and the increased number of illegal dumps during COVID-19 (as is happening in developing countries such as India [11]) will likely result in concerning morphological changes and geohydrological impacts of a local character.
Landfill sites are often large underground structures with a complex mixture of municipal waste. Yet, most of them (particularly in tropical and subtropical areas) are placed in former sand, gravel or peat pits, wetlands, or waterlogged areas, where former excavations and drainage systems complicate the collection of the leachate generated by infiltrating precipitation [73]. Such deposits often result in the formation of landfill leachate plumes that impose a risk to downgradient water bodies and, consequently, a threat to animal and human health [73]. Furthermore, when these underground structures encompass a greater land use than expected (due to the COVID-19 pandemic), they may also imply significant long-term geomorphic changes in various geomorphic features, such as riverbed and shoreline migration, meanders and old riverbeds, as depicted in several geomorphometric analyses [74]. Such transformations of landscapes will eventually affect the ecological integrity of the area (including biodiversity loss) and interfere with the local microclimate.

Strategies to reduce COVID-19 plastic waste being landfilled

Even though vaccination programmes against COVID-19 have been accelerated in several countries (see https://vaccine-schedule.ecdc.europa.eu), reaching global herd immunity remains a slow process. Thus, the use of PPE and disposable plastics for COVID-19 diagnosis and treatment will prevail, at least in the following semester. Based on our predictions, the amount of plastic waste generated and mismanaged during the COVID-19 pandemic is staggering, mostly due to a lack of efficient planning and policy intervention on plastic waste management. This will aggravate plastic pollution worldwide if no action is taken immediately. Thus, it is imperative to start developing and implementing robust policies and sustainable approaches/initiatives to improve plastic waste management and reduce its adverse environmental and human health effects. The scientific community has presented several recommendations to governments, policymakers, corporate sectors, and the general public to overhaul the existing plastic waste management paradigm and motivate appropriate actions [12,75,76]. Among such recommendations, the need to decrease plastic waste generation and increase recycling is highlighted, which eventually decreases landfilling and open dumping, allowing the implementation of proper mitigation/remediation strategies.

Sustainable production and use of PPE and SUP

Several strategies can be put in use to significantly reduce PPE and SUP waste generation. Implementing strategies of public health protection beyond the use of PPE and SUP contributes to the reduction of waste production. For instance, the WHO recommends minimising the need for PPE through social distancing practices [77]. In healthcare, this translates into the use of telemedicine, physical barriers (e.g., glass windows), and restricted areas. The same principles can be applied to the general public by restricting the need to access public places (e.g., by implementing remote working). Along with PPE, the use of plastics in packaging increased during the pandemic. Both cases (i.e., PPE and general SUP) can benefit from improvements in design, such as reducing the amount of plastic used or substituting it for eco-friendlier alternatives whenever possible.
In the case of PPE, reusable alternatives (e.g., cotton masks) or treatment of disposable PPE that allows reuse (e.g., N95 masks can be decontaminated by steaming [78]) can reduce the amount of waste produced while still contributing to public health protection. Another alternative is the substitution of disposable plastics by bio-based solutions. For example, wheat gluten biopolymer (a by-product or co-product of cereal industries) can be electrospun into nanofibre membranes and subsequently carbonised at over 700 °C to form a network structure, which can simultaneously act as the filter medium and reinforcement for gluten-based masks [79]. Such gluten material can be reinforced with very low amounts of lanosol (a naturally occurring substance for microbe resistance; <10 wt%) together with the carbonised mat and shaped by thermoforming to create the facemasks [79]. Several biobased solutions are also available for other SUP, such as packaging, which increased substantially during the COVID-19 pandemic. For example, polyhydroxyalkanoates (PHAs) and homopolymers such as polyhydroxybutyrates (PHBs) extracted from algae biomass can present physicochemical properties similar to the petrochemical plastics used in such applications (e.g., polypropylene, polyethylene, and polyethylene terephthalate), with increased potential for biodegradation when desired (as reviewed by Patricio Silva [80]). Other strategies can be put into place to reduce plastic waste (even general waste) going to landfill. Governmental regulations may support the reduction of landfill streams, and diversion to other alternatives, by applying landfill taxes to municipalities based on the waste being landfilled [75], providing recycling benefits to consumers (e.g., buy-back programs for bottles), or applying higher fees to mixed wastes than to recyclables in door-to-door collection or by using smart trash containers [81].

Improve PPE and SUP recycling/repurposing

Implementation of structured waste management procedures, especially the separate collection of COVID-19 pandemic wastes, is deemed necessary. For example, colour-coded bags can be used by individual households for the disposal of PPE. Further, colour-coded bins must be deployed at the community level to ensure proper collection and disposal of such used PPE. In Montreal, Canada, and Guimarães, Portugal, specific PPE-trash containers have been installed in several places around the city to motivate ordinary citizens to safely dispose of their masks and to consider their potential decontamination for further recycling/repurposing [82,83]. Disinfection procedures for plastic wastes such as PPE (e.g., U.V., ozone, heat, microwave, autoclave) and/or a quarantine period (>72 h) can allow safe recycling [27,76]. China, for example, applied on-site/mobile treatment facilities such as the Sterilwave SW440 (applying microwave sterilisation at 110 °C, with a treatment capacity of up to 80 kg/h, as reviewed by [28]). After disinfection, biomedical plastic waste no longer threatens public health and can follow regular waste streams for a proper end-of-life of these materials. Masks collected in specific bins can be thermo-recycled at 190-230 °C [84] or at 300-400 °C (pyrolysis) [85,86], allowing the conversion of the polypropylene into liquid fuels that can be further used as a source of energy similar to fossil fuels. Otherwise, they can be used for pellet manufacturing to make boxes, trays, etc. (e.g., UBQ Materials and the TerraCycling enterprise [87]), or can be used to make pavements [88].
Improved infectious waste treatment during pandemics, or other emergencies, can be promoted by creating guidelines based on the waste storage facilities (avoiding their use whenever possible) and by increasing incineration capacity through installing more facilities, co-processing with other wastes, or mobilising private facilities [89]. Germany and Sweden were able to cope with the intense loads of potentially infected waste from the COVID-19 pandemic due to their well-developed and distributed incineration (waste-to-energy) facilities, relying on landfilling only to bury the ashes (<1%).

Encourage plastic waste recycling (even during a pandemic)

Recycling companies worldwide were already facing an economic crisis due to the low cost of virgin plastics production compared with recycled plastics. However, this situation was severely aggravated during COVID-19, incited by the fear of virus transmission. The life-span of the virus varies for different surfaces, remaining active for more extended periods on smooth surfaces [90,91]. However, several disinfectants are used to eliminate disease vectors while handling the waste (see [28] for more details). The application of such disinfection approaches can then allow safe recycling, which should be encouraged. A successful example comes from Hong Kong, whose government introduced two bonus schemes to encourage waste recycling: (i) a One-off Rental Support Scheme that allowed recycling facilities to pay 50% of their rent (or up to HKD$25,000); (ii) a One-off Recycling Industry Anti-Epidemic Scheme that supports the operational costs of recycling facilities at a rate of HKD$20,000 per month [92].

Landfilled plastics: technological approaches for mitigation purposes

The concept of sustainable landfills relies on implementing optimal practices that allow the safe assimilation of wastes into the surrounding environment in a short time (i.e., within the lifetime of that generation) [93]. Polymer degradation under landfill conditions can be responsible for the release of greenhouse gases (GHG; e.g., long-term degradation of 1 kg of PE generating 3 kg of CO₂), monomers and additives (e.g., styrene from polystyrene), and can contribute to acidification (e.g., HCl as a degradation product of PVC) [94]. With thousands of tonnes of plastic waste (mainly PPE and SUP packaging) being landfilled daily, particularly in developing countries, there is an urgent need to upgrade such facilities. Several (bio)technological approaches can be prioritised to reduce and treat plastic waste in landfills and to control, treat, and monitor landfill emissions to mitigate their negative environmental consequences.

Reduction and/or pre-treatment of plastic waste before landfilling

The implementation of a biorefinery located on landfill sites (or near them) will help reduce plastic waste on-site and, indirectly, the costs of waste-to-energy plants (consequently lowering logistical and supply chain costs related to waste transportation and lowering operating and capital costs by using existing infrastructure) [95]. Plastics shredding followed by thermal processing [95], Fenton oxidation processing [96], or biological pre-treatment (e.g., Pseudomonas sp., Bacillus cereus, Bacillus pumilus, and Arthrobacter sp.; [97]) is also relevant to increase the life expectancy of the site. Thermal processing allows energy recovery, whereas Fenton processing and biological pre-treatment will facilitate plastic waste biodecomposition after landfilling.
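The figure of roughly 3 kg of CO₂ per kg of PE quoted earlier in this section is consistent with simple stoichiometry if complete aerobic mineralisation of the (CH₂)n backbone is assumed; the quick check below is illustrative only and ignores carbon that ends up as CH₄ or remains in the polymer.

```python
# Stoichiometric upper bound: (CH2)n + 1.5n O2 -> n CO2 + n H2O
m_ch2 = 12.011 + 2 * 1.008       # g/mol of one CH2 repeat unit of PE
m_co2 = 12.011 + 2 * 15.999      # g/mol of CO2

co2_per_kg_pe = m_co2 / m_ch2    # kg CO2 released per kg PE, assuming complete mineralisation
print(f"{co2_per_kg_pe:.2f} kg CO2 per kg PE")   # ~3.1, close to the ~3 kg cited in [94]
```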
Another strategy to reduce plastics going to landfills (here, only the waste volume) involves plastic compactors. Such technology melts plastic waste into a disk, reducing water and, consequently, the surface area available for biodegradation, adsorption of contaminants, and leaching of monomers and additives [98]. All the previously mentioned approaches require, however, the separation of plastic waste from mixed wastes. This process might also require prior decontamination (e.g., the microwave technique implemented in the Sterilwave SW440 mobile facility used in China) to avoid the spread of infectious diseases (such as COVID-19).

Acceleration of microbial degradation of landfilled plastics (including PPE)

Bioreactor technology is already a reality in several modern landfills; it uses enhanced microbiological processes to transform and stabilise MSW constituents within 5-10 years, significantly increasing organic waste decomposition, conversion rates, and process effectiveness compared with conventional landfills [99]. Bioreactors can operate under aerobic, aerobic-anaerobic or anaerobic conditions, where waste (including plastics) is converted to gas with energy recovery. Anaerobic landfill bioreactors allow faster degradation, and the biogas formed has a high methane concentration, but they also produce hydrogen sulphide and higher ammonia levels compared with aerobic bioreactors [100]. Nevertheless, and independently of the bioreactor type, this technology extends the useful life of landfills by reducing, for instance, the need to site new facilities as biodegradation occurs. Such technology can be further improved for plastics degradation with the help of key microorganisms. Different actinomycetes, algae, bacteria, and fungi have been proven to degrade persistent plastics by converting them into environmentally friendly carbon compounds, with key enzymes identified (see recent reviews such as [101-103]). So far, plastics degradation has proven to be more efficient in the presence of a microbial consortium, such as Actinobacteria with Firmicutes (which are already present in anaerobic digesters [64]), Bacillus sp. and Pseudomonas sp., and Brevibacillus agri (2 strains), Brevibacillus brevis, and Aneurinibacillus aneurinilyticus (with high potential for aerobic bioreactors), where some bacteria use monomers and excrete by-products that become substrates for others to grow [101-103]. Key enzymes involved in the biodegradation process include laccase, manganese-dependent peroxidase and hydrolases (urease, protease, lipase). The degradation rate of plastics by naturally occurring microbes remains relatively slow, as it depends on several factors (e.g., polymer characteristics and environmental factors). A potential solution is modifying key enzymes through protein engineering to design microbial strains with better degradation efficiency. However, this approach requires in-depth knowledge of the biochemical and structural properties of the vital enzymes involved in plastics biodegradation, which remains so far poorly covered. In addition, pre-treatments and additives (e.g., nanoparticles) also seem to play a role in improving microbial performance towards plastics degradation, which also needs special attention [103].

Control and treatment of emissions (landfill gas and leachates)

With MSW receiving more plastic waste (PPE related), it is expected that biogas formation will be affected.
For instance, the presence of HDPE, PP, and PS in food waste inhibited biogas production in anaerobic digesters [104]. Yet, the structure of the plastic includes a carbon backbone; thus, its biodegradation in landfills (which occurs at a lower rate) will increase the share of CH₄, H₂, and ultimately CO₂ in the longer run, along with other potential volatile compounds (e.g., added as additives). In parallel, persistent plastics under landfill conditions will fragment (before being microbiologically degraded), originating microplastics (i.e., plastic debris < 5 mm in size), hazardous chemicals from additives, and non-intentionally added substances that will enter the leachate constitution. These particles have already been reported in landfill leachates from northern European countries and from China [105,106], which raises concerns, as these small particles are known vectors of hazardous contaminants and pathogens [62]. By law, sanitary landfills should control and treat biogas and leachates, but the technological approaches are mostly dependent on the infrastructures' financial support. The first approach to control leachate formation and, to some extent, biogas release relies on the selection of landfill covers and multilayer liners. For example, novel technological applications in the construction of multilayer liners involve combinations of waste (e.g., compacted plastics and fibre material [107]) and geosynthetic materials (e.g., geosynthetic clay, granular bentonite, geotextiles [108]), providing improvements in the barrier function and reducing the costs of the operation [109].

[Table 2 about here. Caption: Overview of the main (bio)technological approaches that are/can be implemented on landfills for biogas purification for further use, leachate microplastics removal, and ex-situ/in-situ plastic-waste bioremediation. Columns: landfill gas purification (as reviewed by [94,107]); landfill leachates (as reviewed by [61,62]); ex-situ/in-situ bioremediation on landfills (as reviewed by [95,96]). Traditional approaches include physical absorption (e.g., high-pressure water scrubbing) and chemical absorption (e.g., amine swing absorption).]

As topsoil covers, the application of biochar (a carbon-rich solid derived from the thermal decomposition of biomass) [110], biocovers (which consist of a compost cover, highly rich in methylotrophic and methanotrophic microorganisms) and phytocovers (mostly suitable vegetation) [111] promotes soil remediation by increasing fertility, plant growth and soil bacterial community diversity, and by immobilising contaminants. Such covers also allow carbon sequestration and slope engineering (e.g., through friction and cohesion) while significantly reducing the amount of greenhouse gases (GHG) released and the leachate formed in landfills. However, their implementation can be complex due to the high surface area, as it often involves a very extensive gas distribution system, which raises maintenance costs. This can be overcome by the application of bio-windows (i.e., gas drainage systems) and biofilters outside or beside the landfill area itself (gas capture system to be treated) [112]. Several (bio)technologies have proven efficient in collecting and processing/treating biogas and leachates, and their efficiencies and drawbacks in cleaning and upgrading steps have been scrutinised in recent critical reviews (e.g., [113]).
For biogas, upgrading technologies for the purification and concentration processes required for its further use in numerous applications (e.g., electricity, liquid gas, fuel) include water scrubbing, cryogenic separation, physical absorption, chemical absorption, pressure swing adsorption, membrane technology, and biological upgrading methods (Table 2). Assuming that the presence of PPE (whose composition is mainly PP and PE) will likely contribute to increased CO₂, CH₄, and heavy metal emissions [58,114], the most efficient technological approach highlighted in the literature to treat biogas enriched with such gases is chemical absorption scrubbing [113]. Such an approach achieves the highest purity for biomethane (CH₄; >99%) with low losses (<0.1%) and high carbon dioxide (CO₂) elimination, all without the need for pressurisation. Yet, it requires high investment and heat demand for regeneration, and it often undergoes corrosion and salt precipitation [113]. Cryogenic separation also allows a high purity for CH₄, and CO₂ is obtained as a by-product [113]. However, it implies high capital and operating costs, and it is still under development for implementation at a larger scale (such as landfills). For low-income countries, the cheapest (and easiest to use) technology is adsorption (e.g., granular activated carbon, zeolites, metal-organic frameworks), which allows adsorbing relatively high quantities of CO₂ (the dominant gas in aerobic landfill gas) and, to a greater extent under anaerobic conditions, CH₄ [100]. Nevertheless, the success of this application relies on low/absent moisture conditions. Several treatments are also available for leachates, with the potential to remove microplastics, including photochemical and chemical processes, coagulation, reverse osmosis, dynamic membrane filtration, bioreactors/biological degradation, and sequencing batch reactors, among others (Table 2). Among them, the sequencing batch reactor proved highly efficient (100%) in removing microplastics > 50 µm in size from landfill leachates [105]. Yet, such a technological approach has low efficiency in removing pathogens, requires skilled personnel and depends on an uninterrupted power supply (high maintenance). Other possibilities include photocatalytic degradation of microplastics (e.g., with zinc oxide), which stands out as a viable and energy-efficient method (which also removes plastics at the nanoscale) [115]. However, some end products from photocatalytic degradation may impose a risk to both animal and human health. A solution may involve the application of highly efficient sources of U.V. radiation and the use of catalysts that absorb radiation from the visible spectrum [116]. Fenton's oxidation (another catalytic process) combined with biological treatment seems to be the best compromise (so far) between microplastics removal, effectiveness in treating hazardous chemicals, and cost/benefit ratio [116]. However, the implementation of any innovative technology (for the treatment of biogas and leachates) is site-specific and case-sensitive, depending on the utilisation requirements and local specifications.

Implementation of integrative monitoring programs

Along with the in-situ monitoring studies (mandatory in most countries; e.g., see [117]) to assess the quantity and composition of landfill biogas and leachates, it is also crucial to address their potential environmental risk.
For this purpose, the implementation of frequent aerial, geomorphological/geodetic and/or geoelectrical surveys can provide essential insights into the impacts of landfills on their surrounding environments [118], along with the spatial displacement of landfill areas [119] and the spread of contaminated plumes [120]. These studies can then be allied with integration software (e.g., RES2DINV ERT and Oasis Montaj modelling software), serving as a proficient metric for delineating landfills' impact on humans, ecosystems, and water-bearing structures, both at the ground surface and in underground features [120]. Such risk assessment studies should be coupled with other metrics, such as the Plastic Waste Footprint (i.e., a metric that encompasses the impact of plastic on natural resources and its contribution to greenhouse emissions, plastic pollution, and climate change), and used as a tool for decision making/policy creation and public engagement, as it provides a numerical form of environmental burdens for use by non-specialists [121].

Final remarks

The COVID-19 pandemic has led to significant disruption in plastic waste management, with severe environmental challenges. Landfills have been the most recurring disposal technology to deal with COVID-19 pandemic plastic waste that goes along with MSW, particularly in developing countries. This is of particular concern when forecasting future pandemic scenarios, as pandemics have proven to be recurrent. It is time to rethink plastics (prioritising bioplastics) and current plastic waste management strategies while improving waste collection and treatment facilities and implementing strong and effective plastic policies towards a circular bioeconomy and environmental sustainability. The phasing-out of landfills is still a long road ahead, especially for developing countries, as they have limited financial support to implement and prioritise recycling and waste-to-energy options. Thus, it is crucial for such countries to enforce and provide good policies and guidelines on MSW management, particularly during pandemic scenarios, to avoid overloading such facilities and illegal dumping. Although South Korea is among the high-income countries, its success in biomedical waste management relied on the implementation of extraordinary and tightened measures for safe waste disposal and management (previously applied against MERS), put in place against COVID-19 even before it was considered a pandemic (i.e., January 28, 2020). Through a volume-based waste fee system (VBWFS) for MSW, South Koreans could purchase standard coloured bags for each type of waste (e.g., yellow for food waste; blue for general waste). During COVID-19, they had garbage bags labelled "waste for incineration" (here including PPE) and "waste bag for landfill", still through the VBWFS. This helped manage which waste went to landfills while encouraging correct public behaviour. Hong Kong, Korea, and Japan introduced bonus schemes to encourage waste recycling. In Wuhan, China, and Bangkok, Thailand, specific bins "for facemasks only" (as implemented in Canada and Portugal) were introduced to collect masks for a correct end-of-life, including safe repurposing. A similar strategy implemented worldwide, allied with a significant engagement of ordinary citizens and with basic infrastructure establishment and capacity improvement of the newly proposed design of medical contaminated waste treatment, would reduce the amount of PPE going to landfills.
In addition, governmental actions should include the reinforcement of 3R (reduce, reuse, recycle) policies by implementing incentive/reward programs; engagement of the general public in recycling activities, including PPE, by providing specific bins for new recycling streams for such equipment; reorganisation of municipal solid waste collection and handling strategies to promote recycling and accommodate the new PPE recycling streams; improvement of waste management facilities (priority should be given to flexible and decentralised approaches) through effective financial mechanisms; and promotion of a sustainability assessment of technologies (SAT) for the Best Available Technology (BAT) for waste treatment/management, considering their technical, social, and economic aspects along with their environmental performance. For instance, innovative and effective (bio)technologies and computational tools to improve landfills are already available and will continue to advance exponentially, but they must be prioritised in forthcoming financial programs. Alongside this, monitoring and risk assessment of the on-site impacts of landfills on their surrounding environments are recommended, along with the implementation of frequent aerial and geomorphological surveys, to develop strict guidelines, limits and contingency plans. Synergy between academia, governments and stakeholders is also fundamental to develop sustainable alternatives and implement active mitigation and remediation measures. Equal importance is given to the general population's involvement in education and science dissemination programs to support sustainable behaviour (e.g., preference for biobased products, prioritising recycling to close the loop), elucidate the environmental issues related to plastic pollution, and help phase out landfills by entailing a circular bioeconomy.

Declaration of Competing Interest

The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.
A Study in Sorption of Cu²⁺, Fe²⁺ and I₂ onto Graft Copolymers of Cellulose with N-Vinyl Pyrrolidone and Butyl Acrylate, and Their Functionalized Derivatives

Cellulose is of special interest due to its abundance in nature, and it provides excellent materials for membranes in both native and derivatized forms. Graft copolymers of cellulose, when used in separation and enrichment technologies, have advantages over conventional ones due to chemical resistance, radiation stability and low cost of preparation. Cellulose-based graft copolymers and hydrogels offer a large hydrophilic area despite being insoluble in water, and enrich or separate metal ions by binding, adsorption, chelation and ion-exchange processes. The potential of cellulose as a sorbent can be improved by radiation and chemical grafting, crosslinking and polymer-analogous reactions on some graft copolymers. Grafting of suitable monomers with hydrophobic and hydrophilic/ionic moieties combines a high degree of selectivity, permeability and longer stability, and enables complexation with low-molecular-weight species. Incorporation of functional groups such as nitrile, hydrazino, hydroxamic acid and phosphate by derivatization and post-polymerization reactions enhances the metal ion sorption capacity of cellulosics. In the present study an attempt has been made to study the sorption of Fe²⁺, Cu²⁺ and I₂ on select graft copolymers of cellulose with N-vinylpyrrolidone (1-vinyl-2-pyrrolidone, N-VP) and butyl acrylate (BuAc) onto cellulose and some of their functionalized derivatives, on the basis of lowest to highest percent grafting (Pg). An attempt has also been made to investigate selectivity in metal ion sorption and the effects of structural aspects of functionalized graft copolymers to find their end-uses as cost-effective and eco-friendly polymeric materials for waste water management technologies.

Extracted cellulose is a new backbone polymer. Chauhan and co-workers have used it for the first time to synthesize a large variety of functional polymers by grafting single monomers or binary monomer mixtures. They have also developed cellulose-based hydrogels, reported as supports for enzyme immobilization, flocculants and metal ion sorption [1-8]. The graft copolymers of cellulose with N-vinyl pyrrolidone, hereafter called Cell-g-poly(N-VP), and those with butyl acrylate, called Cell-g-poly(BuAc), synthesized and reported earlier by Chauhan and the present author Suresh [9-12], were used for the present study. Cellulose derivatives and their graft copolymers, such as cellulose phosphate (Cell-PO₄, Cell-PO₄-g-poly(N-VP), Cell-PO₄-g-poly(BuAc)), deoxyhydrazinocellulose (Cell-NHNH₂, Cell-NHNH₂-g-poly(N-VP), Cell-NHNH₂-g-poly(BuAc)) and some of the graft copolymers of cellulose with butyl acrylate functionalized to hydroxamic acid moieties (Cell-g-poly(CONHOH)), were selected and subjected to sorption of Fe²⁺, Cu²⁺ and I₂. Results are presented and discussed to define end-uses of these polymers in water-based technologies.

Experimental

2.1. Materials

Polymer networks of cellulose and derivatives, as synthesized earlier with different monomers, and a few of their functionalized polymers were selected on the basis of considerable values of percent grafting (Pg). Copper sulphate, iodine and ferrous sulphate (analytical grade, CDH, Mumbai, India) were used as received.
2.2. Methods

Sorption studies were carried out as reported earlier by Chauhan [4-6] by immersion of polymer samples for 24 h in 50.00 mL solutions of known strength. Filtrates of the solutions were analysed for the concentration of rejected ions on a DR20210 spectrophotometer (Hach Co., US) using its standard pillow reagents. With this method, the maximum ion strengths that can be studied are 5.0 and 3.0 mg/L of solution for Cu²⁺ and Fe²⁺ ions, respectively; thus the residual filtrate was diluted to reach this range. All weights were taken on a Denver TR-203 balance with a maximum readability of 1.0 mg. Different relationships were used to express the sorption behaviour, as reported earlier [6,9-12]; the expressions themselves are not reproduced here (see the calculation sketch after this section).

Results and Discussion

Interaction of metal ions with the polymer occurs by way of binding, adsorption and ion-exchange processes. Sorption is a common term used to express the nature of metal ion uptake by adsorption on anchor groups, by ion exchange, and also within the bulk of the polymer hydrogel pores. Metal ions are effectively partitioned between the polymer and the liquid phase. The retention capacity of a polymer can be effectively enhanced and is affected mainly by the hydrophilic-hydrophobic balance and by the nature of the monomer and backbone. Ligand function also dictates the reactivity, complexation ability and efficiency of the polymer support. The effect of structural aspects of the different groups of graft copolymers on metal ion sorption is discussed below.

3.1 Sorption of Metal Ions by Poly(N-VP)-Based Copolymers

In the present case, N-VP is a good complexing agent. For Cell-g-poly(N-VP), sorption of Cu²⁺ and Fe²⁺ ions is significant. Metal ion sorption decreases as Pg increases, meaning that the frequency of grafting may be higher but the grafted chain length is shorter in the lower graft copolymer, which provides a larger surface area for metal ion sorption. Graft copolymerization onto cellulose derivatives such as Cell-PO₄, Cell-NHNH₂ and CEC results in an increase in metal ion sorption (both Cu²⁺ and Fe²⁺) compared to native cellulose with maximum Pg, as these have extra active groups for attachment of metal ions. Sorption of Fe²⁺ is substantially higher than that of Cu²⁺ (Table 1; weight of dry polymer = 50 mg, Cu²⁺ and Fe²⁺ feed = 10 mg/L, iodine feed = 11.25 mg/L).

3.2 Sorption of Iodine by Poly(N-VP)-Based Copolymers

Poly(N-VP) has been reported to adsorb substantial amounts of iodine, polyiodide anion, sodium dodecyl sulphate, azo dyes and methyl orange. Cell-g-poly(N-VP) also exhibits this property. The binding force between adsorbate and adsorbent is the formation of a charge transfer (CT) complex. The lactam (tertiary cyclic amide) tautomerises to the lactim form, which is responsible for the formation of the CT complex. Adsorption of I₂ is also explained on this basis. The lactim form of poly(N-VP) is unstable and can be converted back to the lactam on desorption, which means the adsorption of I₂ is a reversible process. In the present case of Cell-g-poly(N-VP), the adsorption decreases with Pg (Table 1). Since the lactim form is susceptible to hydrogen bonding as well as hydrolysis, with an increase in Pg polymeric association increases and binding sites are blocked. However, during the derivatization process the structure of the backbone polymer becomes more open and the surface area for sorption increases, resulting in higher adsorption of iodine by cellulose derivatives such as Cell-PO₄, Cell-NHNH₂ and CEC compared to cellulose.
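The sorption relationships cited in the Methods above are not reproduced in this text. As an illustration only, a minimal sketch of the percent ion uptake calculation commonly used for such batch experiments is given below; the function, its name, and the dilution-correction step are assumptions for illustration, not necessarily the authors' exact expressions.

```python
def percent_uptake(feed_mg_per_L, residual_mg_per_L, dilution_factor=1.0):
    """Percent ion uptake from a batch sorption experiment.

    feed_mg_per_L     : ion concentration in the feed solution (e.g., 10 mg/L)
    residual_mg_per_L : concentration measured in the (possibly diluted) filtrate
    dilution_factor   : factor used to bring the filtrate into the instrument range
    """
    residual = residual_mg_per_L * dilution_factor           # back-calculate the undiluted filtrate
    return (feed_mg_per_L - residual) / feed_mg_per_L * 100

# Hypothetical example: 10 mg/L Fe2+ feed, filtrate diluted 4x and measured at 1.2 mg/L
print(percent_uptake(10.0, 1.2, dilution_factor=4.0))        # 52.0 % uptake
```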
3.3 Sorption of Metal Ions by Poly(BuAc)-Based Copolymers

Different graft levels of Cell-g-poly(BuAc) and their respective functionalized polymers, Cell-g-poly(CONHOH), were selected for metal ion sorption. Results of metal ion uptake are presented in Table 2. In the case of Cell-g-poly(BuAc), sorption of Fe²⁺ is less than that of cellulose and decreases with decreasing Pg, but the change is not very significant. The hydrophobicity of BuAc may be the major factor influencing the partitioning of ions between the polymer and solution phases. It has been observed that Fe²⁺ ion sorption increases significantly in the cellulose derivatives as well as in functionalized Cell-g-poly(CONHOH) compared to the parent graft polymer. This is along expected lines, as both derivatization and functionalization open up the backbone polymer and expose the active groups to metal ions. It can also be concluded that the increase in sorption is almost constant and is independent of Pg. The trend in sorption of Fe²⁺ by cellulose and its derivatives follows the order: Cell-NHNH₂ > Cell-PO₄ > CEC > cellulose, whereas their graft copolymers follow the order: Cell-PO₄-g-poly(BuAc) > Cell-NHNH₂-g-poly(BuAc) > CEC-g-poly(BuAc) > cellulose. The functionalized polymers established the following trend: Cell-NHNH₂-g-poly(CONHOH) > Cell-PO₄-g-poly(CONHOH) > CEC-g-poly(CONHOH). Except for Cell-NHNH₂-g-poly(CONHOH), the sorption of Fe²⁺ decreases with decreasing Pg, implying that sorption is a linear function of the extent of polymer conversion. In the case of Cu²⁺ ion sorption, a reverse trend is observed. Sorption of Cu²⁺ ions by Cell-g-poly(BuAc) is higher compared to cellulose and decreases with increasing Pg, whereas for the functionalized Cell-g-poly(CONHOH) it is again comparatively less than (almost half of) the parent polymers but decreases again with Pg. Thus, selectivity is observed in the sorption of these two ions. The order of Cu²⁺ ion sorption is: cellulose < Cell-g-poly(BuAc) > Cell-g-poly(CONHOH). Trends in sorption of Cu²⁺ ions in the cellulose derivatives are: Cell-NHNH₂ > CEC > Cell-PO₄, and in their graft copolymers the order is: CEC-g-poly(BuAc) > Cell-PO₄-g-poly(BuAc) > Cell-NHNH₂-g-poly(BuAc), without any definite relationship with Pg. Similarly, the trend in Cell-g-poly(CONHOH) was observed as: Cell-NHNH₂-g-poly(CONHOH) > Cell-PO₄-g-poly(CONHOH) > CEC-g-poly(CONHOH), similar to the order followed by Fe²⁺ ions. Results are presented in Table 2 (weight of dry polymer = 50 mg, Cu²⁺ and Fe²⁺ feed = 10 mg/L).

Conclusions

The structure of the polymer backbone and its modification by grafting and by functionalization through post-polymerization reactions affected the metal ion sorption behaviour appreciably, as can be seen from the percent ion uptake of these polymers. It has been observed that sorption of Fe²⁺ ions is higher than that of Cu²⁺ ions and that graft copolymers based on poly(N-VP) are better sorbents than those based on poly(BuAc); these can be successfully used in water technologies.
Clinicopathological Profile of 66 Patients with Carcinoma Stomach in the North-East Part of Bangladesh

Background: There is wide variation in the prevalence of carcinoma stomach throughout the world. In some parts of the world it is decreasing, whereas in other parts it is increasing. There are also variations in the risk factors of the disease. Objective: To see the clinicopathological profile of patients with carcinoma stomach in the North-East part of Bangladesh. Materials and Methods: Consecutive new patients diagnosed with carcinoma of the stomach were interviewed and data were recorded in a data sheet. Results: A total of 66 cases, age varying from 26−77 (mean 52) years, male 45 (68.2%) and female 21 (31.8%), were enrolled. Of them, 44 (66.7%) were above 45 years. People from the lower economic group (51, 78.8%) and rural areas (39, 59.1%) were predominantly affected. The commonest presenting symptoms were weight loss, abdominal pain and vomiting. The common site of the lesion was the antrum, and the common histopathological type was adenocarcinoma (64, 97%). Conclusion: Carcinoma stomach is a disease of the older age group, and males are predominantly affected. Smoking, tobacco and betel nut chewing are risk factors. The lower income group and rural people are more affected.

Introduction

In 2012, the estimated worldwide incidence of carcinoma stomach was 951,594, causing the death of 723,073 individuals [1]. Overall, it is the second most common cause of death in some Asian countries [2,3]. Worldwide, the incidence of new cases of gastric cancer in 2002 was 934,000, of which 56% were from Asia. The incidence of gastric carcinoma is decreasing in Western countries, whereas it is increasing in the rest of the world [4]. There is a wide variation in the prevalence of gastric cancer throughout the world. It has been hypothesized that the incidence of gastric cancer is determined by environmental factors rather than gastric factors [5]. Recognized dietary risk factors for gastric cancer are smoked foods, salted fish and meat, and pickled vegetables. Smoking is another important risk factor [6-9]. Gastric cancer is three times more common among males [9]. Persons working in coal mines, nickel refineries, rubber and timber processing industries, and those exposed to asbestos fibres are more affected [10]. Against this background, this cross-sectional study was designed to see the profile of patients with carcinoma of the stomach, as well as the associated risk factors, in the North-East part of Bangladesh.

Materials and Methods

All consecutive patients with carcinoma of the stomach newly diagnosed by endoscopy of the UGIT and histopathology in North East Medical College & Hospital, Sylhet were enrolled in this study. History, clinical examination and laboratory findings, along with demographic features, were recorded in a predesigned data sheet. Statistical analysis was done using SPSS 17. The χ² test was done to see differences, and a p value <0.05 was taken as significant. The sample size was calculated using the Fruchere and Guilford formula; the estimated sample size was 66.

Results

A total of 66 patients, age varying from 26−77 years (mean 52.0, SD 11.30 years), were enrolled in this study. Among them 45 (68.2%) were male and 21 (31.8%) were female, with a male to female ratio of 2.1:1. Among them 44 (66.7%) were in the above-45-years age group, while 22 (33.3%) were in the 26−45 years age group.
Discussion

Smoking, betel nut and tobacco chewing have been found to be potential risk factors [13,14]. In this study, 41 (62.1%) subjects were smokers and 52 (78.8%) were tobacco chewers. This is also consistent with reports from India [15,16]. In this small group of patients, housewives and farmers were mostly affected. Farmers are exposed to chemical fertilizers and insecticides, which may be risk factors, but the higher incidence in housewives could not be explained. This result is not consistent with reports from India [17]. In our study, 39 (59.1%) patients were from the lower economic group, followed by 21 (31.8%) from the middle class. Environment, food habits and nutrition may have a causal role here. This finding is consistent with reports from Addis Ababa [18]. Most gastric cancers occur sporadically [19], and the disease can occasionally occur in families [20]. In the current study, 3 (4.5%) patients had a family history of gastrointestinal malignancy. In this group, 25 (37.9%) had blood group B, followed by group O (21, 31.8%). However, previous reports show that carcinoma of the stomach is more common in people with blood group A [21,22]. This contradiction may be due to the small sample size. Common presenting symptoms in our series were weight loss (74.2%), anaemia (72.7%), abdominal pain (62.1%) and vomiting (51.5%). Features of gastric outlet obstruction were more common in our series than in reports from Western countries [23]. This may be explained by our people seeking medical care late due to economic constraints and lack of awareness. In this series the site of involvement was mostly the antrum (62.1%), which is consistent with reports from other Asian countries (48%). However, the incidence of proximal gastric carcinoma is increasing in Western countries, with a simultaneous decrease in distal lesions [24,25]. The commonest histopathological type in this series was adenocarcinoma (97%), which is similar to another Asian report [26]. The incidence of metastasis was 19.7% in our series, which is consistent with another report [27]. Due to routine screening, gastric cancer is detected earlier in Japan [28], but in our country such screening is absent. In addition, poor socio-economic status and low awareness lead to delayed medical consultation in our country. In this study, features of gastric outlet obstruction and antral lesions were common findings in the age group above 45 years. A further study with a larger sample size may be done in future for proper assessment of demographic features as well as for planning early detection and treatment.

Limitations

The sample size was small. The presence of H. pylori infection was not looked for as a causal factor.
HIV Dementia with a Decreased Cardiac ¹²³I-metaiodobenzylguanidine Uptake Masquerading as Dementia with Lewy Bodies

Cardiac ¹²³I-metaiodobenzylguanidine (MIBG) scintigraphy is a promising biomarker for dementia with Lewy bodies (DLB). However, we experienced a patient with cognitive decline, parkinsonism, and a decreased MIBG uptake who turned out to have HIV dementia. Normal dopamine transporter single-photon emission computed tomography reduced the possibility of comorbid Lewy body pathology causing the patient's parkinsonism. The decreased MIBG uptake was most likely due to postganglionic sympathetic nerve denervation, which can also be caused by HIV. This case further emphasizes the importance of excluding other causes of autonomic neuropathy, including HIV infection, before interpreting MIBG scans.

Introduction

Cardiac ¹²³I-metaiodobenzylguanidine (MIBG) scintigraphy is a promising biomarker for dementia with Lewy bodies (DLB). However, we experienced a patient with cognitive decline, parkinsonism, and a decreased cardiac MIBG uptake who turned out to have HIV dementia.

Case Report

A 58-year-old salesman with a 10-month history of cognitive decline, decreased motivation, bradykinesia, and shuffling gait consulted a neurologist. He made mistakes at work and frequently forgot to take his antihypertensive drugs, but his short-term memory was preserved at the time. His Revised Hasegawa Dementia Scale (HDS-R) score was 27/30. Cardiac ¹²³I-MIBG scintigraphy showed a markedly decreased heart-to-mediastinum ratio (H/M ratio), both in the early and late phases (Fig. 1). The patient had no history of diabetes or cardiac disease and had been taking no medication other than nifedipine. Brain magnetic resonance imaging (MRI) showed T2 hyperintense lesions in the white matter, which were considered indicative of leukoaraiosis at that time (Fig. 2A). No obvious abnormalities were noted in either striatum (Fig. 2B). DLB was initially suspected based on the cognitive decline, parkinsonism, and decreased cardiac MIBG uptake. Despite treatment with levodopa-carbidopa and donepezil hydrochloride, his cognitive decline and gait disturbance rapidly progressed. He became bedridden within nine months of the first visit and was referred to our hospital. Daily fluctuation in his cognition and visual hallucinations also appeared during this course. He frequently slept in the daytime and woke up at midnight, and he stated that he had seen numerous insects in his house. He had no history suggesting rapid eye movement sleep behavior disorder. His HDS-R score at admission was 8/30 (orientation to time -4, serial 7s -1, reverse digit span -2, delayed recall -6, object recall -4, word fluency -5). In addition to cognitive decline, bradykinesia, rigidity, and postural instability, a neurological examination revealed moderate muscle weakness, increased deep tendon reflexes in all extremities, and extensor plantar reflexes, which were atypical for DLB. The patient had constipation and urinary dysfunction with residual urine in the absence of prostate hypertrophy. He was seropositive for HIV type 1, with a viral load of 1.1×10⁵ copies/mL, and his CD4 count was 56/μL. The cerebrospinal fluid showed a normal cell count and cytology with an increased IgG index (1.65). Follow-up MRI showed enlargement of the symmetric T2 hyperintense white matter lesions and diffuse brain atrophy without gadolinium enhancement, which was consistent with HIV dementia (Fig. 2C and D).
Dopamine transporter (DAT) SPECT using ¹²³I-ioflupane showed a preserved uptake in both striata (Fig. 3), which was inconsistent with a diagnosis of DLB presenting with parkinsonism. The patient also had a massive left pleural effusion that was attributed to malignant lymphoma associated with HIV. Contrast-enhanced computed tomography showed that the lymphoma was restricted to the left pleura, and spinal MRI showed no abnormalities of the spinal cord. The patient received highly active antiretroviral therapy. Although his cognition and gait partially improved, he remained fully care-dependent and died of pneumonia one year after the final diagnosis.

Discussion

The rapid progression of symptoms and the white matter lesions revealed by brain MRI suggested that the patient's symptoms were a manifestation of HIV infection. However, the decreased cardiac MIBG uptake made it difficult to establish an early diagnosis. HIV is a known cause of dementia and parkinsonism (1) but is also frequently associated with autonomic neuropathy (2). Although the mechanism underlying the autonomic neuropathy in HIV-infected patients is unclear, HIV antigens and inflammatory cells have been detected in the sympathetic ganglia in autopsy studies (3,4). These reports suggest that HIV can cause sympathetic ganglionitis, which may have contributed to the decreased cardiac MIBG uptake in our patient, although the possibility that coexisting incidental Lewy body disease caused the decreased cardiac MIBG uptake cannot be completely ruled out without an autopsy. Cardiac MIBG scintigraphy reflects postganglionic sympathetic nerve innervation and is useful for differentiating Lewy body diseases from other causes of parkinsonism or dementia (5,6). It is widely performed in Japan, and since the revised diagnostic criteria for DLB have further emphasized its role (6), the test should be performed in suspected cases of DLB. However, caution should be exercised, as other conditions that cause autonomic neuropathy or cardiac diseases may also be associated with abnormal results (5,6). Due to the low prevalence of HIV infection in Japan, it has not been widely recognized that patients with HIV may also present with a decreased cardiac MIBG uptake. HIV infection likely underlies the T2 hyperintense white matter lesions observed at presentation. These lesions, however, were initially difficult to differentiate from leukoaraiosis. Parkinsonism has been reported in 5-50% of HIV-infected patients and is considered to be associated with atrophy or hypometabolism of the basal ganglia, dopaminergic dysfunction, and subcortical lesions (1). Our patient showed a preserved DAT uptake in both striata, suggesting that subcortical lesions or post-synaptic dysfunction mainly contributed to the parkinsonism. In conclusion, although cardiac MIBG scintigraphy is a useful biomarker for DLB, our case emphasizes that other causes of autonomic neuropathy, including HIV infection, which is potentially treatable, should always be taken into account in clinical practice.
Effect of Carrier Oil and Co-Solvent on the Formation of Clove Oil Nanoemulsion by Phase Inversion Technique

Development of nanoemulsions is gaining considerable attention for use in delivering hydrophobic constituents such as clove oil in food and agriculture systems. The small size of the oil droplets in the nanoemulsion system offers many advantages such as high stability, optical clarity, and improved water solubility and bioactivity. This research was aimed at investigating the effect of incorporation of carrier oil and co-solvent on the formation of clove nanoemulsion. Clove oil-loaded nanoemulsions were prepared by a low-energy phase inversion technique involving a carrier oil (medium chain triglyceride, MCT) at different ratios to the clove oil (1:2, 1:1, 2:1), a co-solvent (glycerol) at ratios of 0 and 1:1 to the mixture of clove oil and MCT, and a non-ionic surfactant (Tween 80) at a ratio of 1:1, with two concentration levels of the mixture of clove oil and MCT (5% and 10%). The formation and characteristics of the nanoemulsions were evaluated, including particle size, polydispersity index, zeta potential, and freeze-thaw stability, as well as their possible mechanisms of destabilization. Particle sizes ranged from 45.98 to 220 nm with narrow ranges of polydispersity index (0.072-0.286) and zeta potential (-12.8 to -22.6 mV). Incorporation of carrier oil at low proportions gave smaller oil droplets, and the presence of co-solvent enhanced nanoemulsion stabilization. Creaming accompanied by oiling-off was found upon destabilization of the nanoemulsions, with rate and appearance influenced by nanoemulsion composition. This study provides important information about the stabilization of nanoemulsions by incorporating carrier oil and co-solvent suitable for food and agrochemical formulations.

Introduction

The use of essential oils in the development of natural food ingredients has gained considerable interest due to their aromatic properties as well as other functionalities such as antimicrobial [1-5] and antioxidant [3,6] activities. Most essential oils are insoluble in water, and it is difficult to incorporate them into water-based food formulations. Transforming the oil into a colloidal delivery system such as an oil-in-water (o/w) nanoemulsion is an interesting approach to overcome their water insolubility problems. In a nanoemulsion system, oil droplets are sized down to below 100 nm [7], 20-200 nm [8], or ranges below 300 nm [9], so that the tiny droplet size generates unique properties providing potential advantages such as high physical stability, enhanced bioactivity and transparent appearance [10,11]. Fabricating nanoemulsions using low-energy approaches has attracted extensive interest because of their practical application with low cost and high energy efficiency [12]. Many studies have been done to produce stable nanoemulsions using low-energy approaches, but only a few use an oil model suitable for the food system. In this study, clove oil was used as an essential oil model for the formation of nanoemulsion using the phase inversion point technique. Clove oil was shown to have the largest inhibition against microbiota mainly found in food [13], and clove oil nanoemulsions have been formulated using low-energy approaches incorporating non-ionic surfactants, with different physical and stability properties [13,14].
As a non-equilibrium system, a nanoemulsion tends to undergo separation over time through many mechanisms, such as Ostwald ripening, creaming/sedimentation, flocculation and coalescence [15,16]. Many efforts have been made to produce stable nanoemulsions by incorporating a co-solvent and a carrier oil. Incorporation of the carrier oil facilitates small droplet formation as well as improving stability. Co-solvents are used because they can modify the bulk properties of aqueous solutions (such as viscosity, density, refractive index, interfacial tension, and solubility) and also the structural properties of surfactant solutions (such as optimum curvature, critical micelle concentration, and phase behaviour). In general, nanoemulsion formation can be affected by differences in oil phase viscosity, interfacial tension, and phase behaviour, while nanoemulsion stability can be influenced by differences in polarity and water solubility of the oil molecules. Although small initial particle diameters can be achieved using carrier oils and co-solvents, the resulting nanoemulsions are often highly unstable to droplet growth during storage. It is important that the combination of surfactant and oil components used is able to form a microemulsion at an appropriate ratio that will break down and produce fine oil droplets. This present study aimed at investigating the effect of the addition of carrier oil and co-solvent on the formation of clove nanoemulsion and determining the formulation, in terms of surfactant and oil composition, that provides a stable nanoemulsion.

Materials

Clove oil was extracted by hydro-distillation of the leaves of Syzygium aromaticum obtained from Manoko Experimental Station, West Jawa. The oil was used as received without further processing. The non-ionic surfactant polyoxyethylene sorbitan oleate (Tween 80) and glycerol were of analytical grade (Merck, Germany). Medium chain triglyceride (Miglyol 812N) was purchased from Cremer Oleo GmbH Co. KG (Hamburg, Germany). Water used for emulsion preparation and particle size analysis was purified by a Direct-Q® 3 UV-R Water Purification System (Merck, Germany).

Nanoemulsion was prepared using a low-energy phase inversion technique. This technique involves the formation of a water-in-oil emulsion and its subsequent phase inversion into an oil-in-water emulsion. Clove oil was placed into a beaker and mixed with MCT and Tween 80 under magnetic stirring at 750 rpm for 30 min to form the organic phase. Separately, the water phase was prepared by mixing water and glycerol under the same stirring conditions as for the organic phase. The nanoemulsion was then formed by adding the water phase into the organic phase under stirring for 60 min to reach a final emulsion weight of 50 g. The nanoemulsion formed was placed into sample bottles for further analysis. The compositions of the nanoemulsions are described in Table 1. Experiments were done in two replications.

Particle Size and Zeta Potential Analysis

Emulsion droplet size was measured using a Malvern ZetaSizer (Nano-ZS, Malvern Instruments, Malvern, UK). An emulsion sample (2 drops) was placed into a cuvette and diluted in water (2 mL) for particle size measurement. The sample for zeta potential measurement (25 µL) was placed into a capillary cell and diluted in water (2 mL). The particle size was measured as the Z-average, applying the Stokes-Einstein relation, with its corresponding polydispersity index (PDI). Measurement of particle size and zeta potential of each sample was taken three times.
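For context on the Z-average sizing mentioned above: dynamic light scattering instruments such as the ZetaSizer convert the measured translational diffusion coefficient into a hydrodynamic diameter via the Stokes-Einstein relation. The sketch below is illustrative only; the diffusion coefficient and the water-at-25 °C viscosity are assumed values, not data from this study.

```python
import math

def hydrodynamic_diameter(D, T=298.15, eta=0.00089):
    """Stokes-Einstein relation: d_H = k_B * T / (3 * pi * eta * D).

    D   : translational diffusion coefficient (m^2/s), obtained from the DLS correlogram
    T   : absolute temperature (K)
    eta : dispersant viscosity (Pa.s); ~0.00089 for water at 25 C
    """
    k_B = 1.380649e-23                      # Boltzmann constant, J/K
    return k_B * T / (3 * math.pi * eta * D)

# Illustrative value: D = 9.8e-12 m^2/s corresponds to a diameter of roughly 50 nm
print(hydrodynamic_diameter(9.8e-12) * 1e9, "nm")
```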
Freeze-Thaw Stability

The nanoemulsion was placed into reaction tubes and tested for stability against freeze-thaw cycles by keeping it at -18°C for 20 hours and then at 40°C for 2 hours. The stability was determined by evaluating the phase separation upon freeze-thaw cycles.

Oil Droplet Size

Oil droplet size varied with the differences in the concentration of the mixture of oil and MCT, the oil to MCT ratio, the concentration of Tween 80 and the concentration of glycerol, ranging from 46 to 220.2 nm. The smallest droplet sizes were observed with the lowest proportion of MCT (oil to MCT ratio of 2:1) (Figure 1). In the presence of glycerol, the use of lower concentrations of Tween 80 and of the oil-MCT mixture resulted in larger particle sizes. Conversely, in the absence of glycerol, lower concentrations of Tween 80 and of the oil-MCT mixture produced smaller particle sizes. In the clove oil-loaded nanoemulsion system, in which the density of the oil (1.06 g/cm³) is higher than that of water, the presence of MCT, with a density of 0.94 g/cm³, acts as a lightening agent that reduces the density difference [17] and prevents gravitational separation. This might facilitate the formation of ultrafine droplet sizes at a certain surfactant-to-oil ratio. Addition of MCT in large proportions (oil to MCT ratios of 1:1 and 1:2) produced nanoemulsions with larger droplet sizes. At the higher MCT proportions, the larger density differences between the oil and water phases might result in the formation of larger oil droplet sizes. Similar results were also reported for mixtures of MCT with capsanthin and vitamin E [18], which used relatively low ratios of MCT to produce nanoemulsions with small droplet sizes. The presence of MCT might also modify the oil viscosity and solubility, which determine the formation of small droplets in the nanoemulsion [16,19]. The addition of MCT, with a viscosity of 27-33 mPa·s, could increase the low, water-like viscosity of clove oil. At the lowest proportion of MCT (clove oil to MCT ratio of 2:1), the minimal increase in oil phase viscosity possibly enabled the effective mass transport of Tween 80 through the oil and into the aqueous phase [20] and rapid droplet disruption during nanoemulsion preparation. The incorporation of glycerol into the aqueous phase modifies many physicochemical and molecular properties such as density, viscosity and refractive index that may affect the droplet size [18]. It has been reported that the addition of glycerol at a high level decreased the oil-water interfacial tension, increased the critical micelle concentration and decreased the HLB number, thus facilitating the formation of small droplets. In this present work, glycerol was added at a relatively low level (a maximum of 10%) and resulted in a slight decrease in droplet size. Lower levels of glycerol addition (5%) were found to increase the droplet size. Similar results were also observed with the use of a low concentration of glycerol in the preparation of vitamin E nanoemulsion [21]. The explanation can be found in the formation of a highly viscous liquid crystalline phase that complicates the breakup of the oil-water interface for the formation of small droplets. The concentrations of oil (the mixture of clove oil and MCT) and Tween 80 also played roles in determining the oil droplet size. The use of a high concentration of oil produced nanoemulsions with larger oil droplet sizes, particularly at oil to MCT ratios of 1:2 and 2:1.
Although the use of a high concentration of oil was also accompanied by the addition of a high concentration of Tween 80, larger droplet sizes were observed. These results suggest that the physicochemical properties of the bulk phase, together with the phase behaviour of the specific surfactant-oil-water system, are important in the formation of the nanoemulsion.

Polydispersity

The polydispersity index (PDI) varied between 0.045 and 0.285, and followed trends different from those of the droplet size. The lowest PDI was found with the use of oil and MCT at the same ratio (1:1) (Figure 2). Incorporation of a lower or higher proportion of MCT (oil to MCT ratios of 2:1 and 1:2, respectively) showed higher PDI. The presence of glycerol also exhibited increases in PDI, at both low and high concentration. These could be associated with the physicochemical changes of the bulk phase with the variation of composition, which altered the oil-water interface properties. The larger PDI observed in the presence of MCT at a high proportion (oil to MCT ratio 1:2) might be attributed to the higher viscosity of the oil and the expected slow droplet disruption during nanoemulsion preparation, as discussed in the previous section. Furthermore, under specific compositions of oil and surfactant, destabilization processes such as coalescence or Ostwald ripening might occur. A broad size distribution can be a result of destabilization through the coalescence mechanism, while a narrow distribution can be attributed to Ostwald ripening [22,23]. Coalescence takes place when the surfactant film covering the droplet surfaces ruptures as droplets contact each other [23]. In Ostwald ripening, the larger droplets grow at the expense of smaller droplets [16,24-26]. The high PDI upon the addition of glycerol can be explained by the interactive effect of surfactant concentration. It was interesting to note that a small PDI was not associated with a correspondingly small droplet size (Figure 3). This might be related to the rate of droplet disruption during mixing and the rate of destabilization after droplet formation. Often, ultrafine droplets produced at the beginning grow rapidly during storage, which may be due to the occurrence of coalescence or Ostwald ripening.

Figure 2. Polydispersity index of nanoemulsion as influenced by the oil to MCT ratio and the organic phase and glycerol concentration.

Zeta Potential

The zeta potential of the clove oil nanoemulsions varied with composition, ranging from -12.80 to -22.60 mV. The addition of glycerol at a high level showed low values of zeta potential for all combinations of oil concentration, Tween 80 and oil to MCT ratio, while the incorporation of glycerol at a low concentration gave higher values of zeta potential at certain oil to MCT ratios (1:2 and 1:1). The presence of MCT at the same proportion as clove oil, together with the addition of glycerol and the oil mixture at low concentration, produced droplets with the highest value of zeta potential. Although a non-ionic surfactant was used in this experiment, negative values of zeta potential were observed in all samples. This phenomenon was also found with the use of non-ionic surfactants in the preparation of some nanoemulsions [28] and might be associated with the interaction of the oxyethylene groups of Tween 80 and water molecules in the presence of hydroxyl ions at the oil-water interface.
The addition of glycerol has been reported to alter the solubility of the surfactant monomer and to partially dehydrate the hydrophilic head-group. These might also change the interaction between the oxyethylene groups of Tween 80 and water molecules, altering the electrostatic forces surrounding the particles. It was observed that the zeta potential decreased with the incorporation of glycerol at high concentration. This effect was more pronounced with the use of MCT at medium to high concentrations. All samples had relatively low absolute values of zeta potential (a maximum of 22.60 mV). Generally, absolute values of zeta potential lower than 30 mV indicate low stability against aggregation, because the electrostatic attractive forces between droplets are not sufficiently counteracted. However, a nanoemulsion stable against the freeze-thaw test was produced within a specific formulation, indicating the role of stabilization mechanisms other than repulsive forces.

Freeze-thaw Stability

The freeze-thaw test provides an explanation of the instability mechanism of nanoemulsions upon chilling and cold storage. During freezing and thawing, some physicochemical changes take place, including fat crystallization, ice formation, interfacial phase transition, interfacial layer conformation, and chemical or electrostatic interaction, which can cause instability. The nanoemulsion samples had different freeze-thaw stabilities, varying between 1 and 6 cycles. Samples containing a high concentration of oil tended to undergo separation at the first or second cycle. Large phase separations were observed in nanoemulsions with a high oil to MCT ratio. Only one sample showed separation at cycle 5 and, finally, only one sample was stable against separation after 6 cycles. The most stable nanoemulsion was observed with the use of a high ratio of MCT with low concentrations of oil, Tween 80 and glycerol. Upon freezing, the formation of ice pushes the oil droplets into close proximity. When the repulsive forces on the surface of the droplets are not strong enough, the droplets contact each other and coalescence occurs. A small-molecule surfactant such as Tween 80 forms a thin membrane layer on the surface of the oil droplets and so gives less protection against coalescence during freezing [16]. However, since the concentration of oil droplets used in this study was low (5%), they were less prone to coalescence during freezing. Droplet concentration is an important factor determining freeze-thaw stability, as the local droplet concentration might increase upon freezing, resulting in coalescence instability.

Conclusion

Nanoemulsions of clove oil can be produced by a low-energy phase inversion technique. The concentration of total oil (clove oil and MCT) and the incorporation of MCT as a carrier oil and glycerol as a co-solvent modified the characteristics of the nanoemulsion. The use of carrier oil at low concentration, with or without glycerol, produced small droplet sizes but was susceptible to destabilization in the freeze-thaw test. The use of MCT at the same ratio as the clove oil, both with and without glycerol, gave the lowest polydispersity index. The use of a low concentration of total oil together with the presence of MCT at high concentration and the addition of glycerol at low concentration produced larger droplet sizes but was stable against 6 cycles of the freeze-thaw test, indicating the role of MCT and glycerol in inhibiting destabilization. Further research is required to evaluate the possible mechanisms of destabilization.
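The density argument made in the droplet-size discussion (clove oil at 1.06 g/cm³ versus MCT at 0.94 g/cm³) can be made concrete by comparing Stokes velocities for the blend ratios studied. The following is a sketch under stated assumptions: ideal mixing (additive specific volumes), a hypothetical 100 nm droplet, and water viscosity at 25 °C; it illustrates the reasoning and is not a calculation from the paper.

```python
def blend_density(rho_clove=1060.0, rho_mct=940.0, w_clove=2/3):
    """Density (kg/m^3) of a clove oil / MCT blend, assuming ideal mixing
    (specific volumes additive); w_clove is the clove oil mass fraction,
    e.g. 2/3 for the 2:1 clove oil to MCT ratio."""
    return 1.0 / (w_clove / rho_clove + (1.0 - w_clove) / rho_mct)

def stokes_velocity(r, rho_droplet, rho_water=997.0, eta=0.89e-3, g=9.81):
    """Stokes velocity (m/s) of a droplet of radius r (m); positive means
    sedimentation (droplet denser than water), negative means creaming."""
    return 2.0 * r**2 * (rho_droplet - rho_water) * g / (9.0 * eta)

r = 50e-9  # hypothetical 100 nm diameter droplet
for ratio, w in [("2:1", 2/3), ("1:1", 1/2), ("1:2", 1/3)]:
    rho = blend_density(w_clove=w)
    print(f"clove:MCT {ratio}: {rho:6.1f} kg/m^3, v = {stokes_velocity(r, rho):.2e} m/s")
```

For a droplet this small, all three blends give velocities many orders of magnitude below those of coarse emulsions, which is consistent with the observed resistance to gravitational separation.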
Strategy for investments from Zipf law(s)

We have applied the Zipf method to extract the $\zeta'$ exponent for seven financial indices (DAX, FTSE; DJIA, NASDAQ, S&P500; Hang-Seng and Nikkei 225), after having translated the signals into a text based on two letters. We follow considerations based on the signal Hurst exponent and the notion of a time dependent Zipf law and exponent in order to implement two simple investment strategies for such indices. We show the time dependence of the returns.

Introduction

Usually analysts recommend investment strategies based e.g. on "moving averages", "momentum indicators", and the like techniques [1,2]. As soon as econophysicists discovered scaling laws in financial data, it was of interest to search for some predictive value from the laws through some extrapolated evolution. E.g. a technique known as detrended fluctuation analysis (DFA), which measures the deviation of correlated fluctuations from a trend, was developed into a strategy known as the local (or better, instantaneous) DFA in order to predict fluctuations in the exchange rates of various currencies, the Gold price and other financial indices [5,6]. The statistical analysis of data was based on the value of the exponent of the scaling law so found, itself related to the fractal dimension of the signal, or also to the Hurst exponent of the so-called rescaled range analysis. Mathematical extensions, the so-called q-order DFA and multifractals, can be found in the literature, though optimization problems and predictions on the future of fluctuations are apparently not so evident from these methods. A drawback of the DFA is found in the fact that it rather looks at correlations in the sign of fluctuations than at correlations in their amplitude. Another sort of data analysis technique is known as the Zipf technique [3,4], originating in work exploring the statistical nature of languages. The Zipf analysis technique has also been used outside linguistic, financial and economic fields [7]. The technique is based on a Zipf plot which expresses the relationship between the frequency of words (more generally, events) and the rank order of such words (or events) on a log-log diagram; a cumulative histogram can be drawn as well. The slope of the best linear fit on such a plot corresponds to an exponent s describing the frequency P of the (cumulative) occurrence of the words (or events) according to their rank R through, e.g., P ~ R^{-s}. There are many instances in which financial and other economic data can be described through a log-log (Zipf) plot: e.g., the distribution of income (Pareto distribution) [8], the size of companies [9], sociology [10], sometimes after translating the financial data into a text [11,12,13,14,15]. Thus it seems of interest to check whether such a technique can have some predictive value in finance. The present report is in line with such previous investigations. We present results based on considerations that financial data series are similar to fractional Brownian motion-like time series, and usually biased [16]. We examine whether a time dependent Zipf law and exponent exist and can be used in order to implement simple investment strategies. First, it is thus necessary to translate the financial data into a text, based on an alphabet with k characters, and to search for words of length m. There are obviously k^m possible words. They can be ranked according to their frequency on a log(frequency)-log(rank) diagram. A linear fit leads to consider the relationship as a power law.
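As a sketch of the procedure just described (a hedged illustration: the random-walk series below is a synthetic stand-in for an index, and the plain least-squares fit is only one way to extract the slope), the translation into a two-letter text and the rank-frequency fit of the normalized frequencies f/f' can be written as:

```python
import collections
import math
import random

def to_text(prices):
    """Translate a price series into a two-letter text:
    'u' for an upward daily move, 'd' for a downward (or flat) one."""
    return "".join("u" if b > a else "d" for a, b in zip(prices, prices[1:]))

def loglog_slope(values):
    """Least-squares slope of log(value) versus log(rank); values sorted descending."""
    xs = [math.log(r) for r in range(1, len(values) + 1)]
    ys = [math.log(v) for v in values]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    return sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / sum((x - mx) ** 2 for x in xs)

def zipf_prime(text, m=5):
    """Estimate zeta' from f/f' ~ R^{-zeta'}: the observed frequency of each
    m-letter word is divided by its theoretical frequency f' = p_u^(m-n) p_d^n,
    where n is the number of 'd' letters in the word."""
    p_u = text.count("u") / len(text)
    p_d = 1.0 - p_u
    counts = collections.Counter(text[i:i + m] for i in range(len(text) - m + 1))
    total = sum(counts.values())
    normed = sorted(
        ((c / total) / (p_u ** (m - w.count("d")) * p_d ** w.count("d"))
         for w, c in counts.items()),
        reverse=True,
    )
    return -loglog_slope(normed)

# Synthetic stand-in for ~1250 daily closing values
random.seed(0)
prices = [100.0]
for _ in range(1250):
    prices.append(prices[-1] * (1.0 + random.gauss(0.0003, 0.01)))
print(f"zeta'(5,2) ~ {zipf_prime(to_text(prices)):.3f}")
```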
Moreover, in the spirit of the local DFA, a local (or "time" dependent) Zipf law or exponent can be introduced [16]. In this latter reference, we have also considered the effect of a linear trend on the value of the Zipf exponent. Here below we have translated seven financial index signals (DAX, FTSE; DJIA, NASDAQ, S&P500; Hang-Seng and Nikkei 225) each into a text based on two letters u and d. Based on the above considerations we have imagined two simple investment strategies, and report on the results (or "returns"). From the beginning we stress that a restriction to two letters is equivalent to examining only correlations in the fluctuation signs. However, the Zipf method's main interest is surely the capability to consider amplitude fluctuations, by defining various fluctuation ranges.

Data analysis

The daily closing values of the (DAX, FTSE; DJIA, NASDAQ, S&P500; Hang-Seng and Nikkei 225) indices, from Jan. 01, 1997 till Dec. 31, 2001 (Fig. 1), have been obtained from http://finance.yahoo.com/. They contain ca. 1250 data points. After translating the financial time series into a text, one searches for words and ranks them according to their frequency. On log-log paper, the slope of the best line fit is the Zipf exponent. Elsewhere we have already shown that the usual Zipf exponent ζ [3,4] depends on the normalization process used to calculate the ranks. If the frequency f of occurrence is normalized with respect to the theoretical one f', i.e. that expected for pure (stochastic) Brownian processes, one has f/f' ~ R^{-ζ'}. The theoretical frequency expected for a word in a text based on a binary alphabet u, d takes into account the number n of characters, say of type d, in a word of length m. Suppose that in the text the frequency of a d (u) letter is p_d (p_u). Usually, a bias exists, i.e. p_u ≠ p_d. Therefore f' = p_u^{m-n} p_d^{n}. Whether or not the ζ and ζ' exponents depend on the bias has been examined elsewhere [16]. The p_u and p_d values for the seven indices are reported in Table 1, together with the bias defined here as ε = p_u − 0.5. The linear tendency for the time interval is also given in Table 1. We have calculated overall Zipf exponent values, ζ_(m,k), and give the ζ_(5,2) value for the seven indices in Table 1. In the spirit of the so-called local (or better, instantaneous) DFA method, we can consider that a Zipf exponent is time dependent, thus obtaining a local Zipf law and local Zipf exponent. Only the case of words of length m < 8 has been considered; the results are not all shown for lack of space. This m value is so chosen within the financial background having motivated this study, e.g. m = 5 is the number of days in a (bank) week! In general a (one dimensional) financial index can be characterized by a so-called Hurst exponent H, obtained as follows. The time series is divided into boxes of equal size, each containing a variable number of "elements". The local fluctuation at a point in one box is calculated as the deviation from the mean in that box. The cumulative departure up to the j-th point in the box is next calculated in all boxes. The rescaled range function is next calculated from the difference between the maximum and the minimum, i.e. the range in units of the rms deviation in the box. The average of the rescaled range in all boxes with an equal size n is next obtained and denoted by ⟨R/S⟩.
The above computation is then repeated for different box sizes s to provide a relationship between ⟨R/S⟩ and s, which is expected to be a power law ⟨R/S⟩ ≃ s^H if some scaling range and law exist. If H = 1/2 one has the usual Brownian motion. The signal is said to be persistent for H > 1/2, and antipersistent otherwise. We have calculated the Hurst exponent [17] by this rescaled range analysis [18] for the seven financial index signals. Their H values and the corresponding error bars are given in Table 1. The error bars are those resulting from a best linear fit and a root mean square analysis. Tests (not shown here) of the stochasticity (or not) of the data can be based on the surrogate data method [19], in which one randomizes either the sign of the fluctuations or shuffles their amplitude, and finally observes whether the error bars (or confidence intervals) of the raw signal and the surrogate data signal overlap.

Returns and basic Zipf strategy

The method is based on searching for the probability of a character sequence at the end of a word. We consider the case of what can happen the next day after a few (m−1) days only. Consider a word of length m−1, and calculate in all boxes of size τ the probabilities p_u(t) and p_d(t) to have the character sequences (c_{t-m-3}, ..., c_{t-1}, u) and (c_{t-m-3}, ..., c_{t-1}, d) respectively, where c_t represents the character at time t. Since only a k = 2 alphabet is used, it is fair to develop a simple strategy based only on the sign of the fluctuations, thus using a strategy similar to the one implemented in the "instantaneous" DFA, i.e. expecting correlated or anticorrelated fluctuations in u and d. In order to avoid investment activity when the choice probability is low, we have used a strength parameter measuring the relative probabilities, varying between 0 and 1, its value giving the number of shares bought (or sold) at a certain investment time. Results are reported when windows (boxes) of size τ = 500 are moved along the signal. This value corresponds to a 2-year type investment window. Notice that the local exponents are usually larger than the average one, due to finite size effects. In the Zipf 1 (Z1) strategy, we consider that if p_u(t) > p_d(t), a "buy order" is given. A "sell order" is given for p_u(t) < p_d(t). No order is given when both probabilities are equal. Results reported in Table 2 pertain to m = 3, 5, and 7 at the end of the 5-year interval. In the Zipf 2 (Z2) strategy the local linear trend is subtracted before calculating p_u(t) and p_d(t). The time dependent returns for Z1 and Z2 in the cases m = 3, 5, and 7, for k = 2, are given in Fig. 2 for the seven hereby considered financial indices. A return r(t) (given in %) is defined as r(t) = 100 (Bq(t) − Bq(t_0)) / Bq(t_0), where Bq(t) and Bq(t_0) are the amounts of money available at time t and at the beginning t_0 of the investment period respectively, for a share of value q(t) bought at q(t_0) at the starting date.

Conclusions

It appears (Table 2) that there is no immediate simple and general rule or universal optimum strategy. The latter depends on the volatility, i.e. the signal roughness, and the local (m,k)-Zipf exponent value. From the implemented simple strategies, it occurs that "the best returns" are usually for Z1 with m = 5, except for the NASDAQ, for which a fine result arises from Z1 with m = 7 (or Z2 and m = 5), and for the FTSE, with either Z1 or Z2 and m = 7.
This choice of the m value and the Z1 strategy is conjectured to be good for large ζ' and "non-Brownian" (large H) cases. However, for quasi-Brownian signals (and high/low ζ'), it is obvious that one has reduced losses for the NIKKEI when one chooses a Z1 strategy with m = 7; this is a very large ζ' case. This is rather similar to the FTSE case. Increased gains are found for the S&P with Z1 and m = 7, and for the DJIA with Z1 and m = 5, i.e. when ζ' is close to 0.1. On the contrary, for the HS a Z2 and m = 3 strategy should be better, i.e. for ζ' << 0.1. The situation is rather neutral for the DAX, the choice Z1, m = 5 being favored. Many other cases could be further considered, and theoretical work is suggested: first, one could wonder about signal stationarity. Next, either a non-linear (thus power-law-like) trend or a periodic background could be subtracted from the raw signal, and the Zipf exponent time variation examined. Many other strategies are also available.

Figure 2: The time dependent returns for Z1 and Z2 in the case m=3, 5, and 7, and for k=2 for the seven considered financial indices

In summary, we have translated seven financial index signals each into a text based on two letters u and d, according to the fluctuations as in a corresponding random walk. We have calculated the Zipf exponent(s) giving the relationship between the frequency of occurrence of words of length m < 8 made of such "letters" for a binary alphabet. We have introduced considerations based on the notion of a local (or "time" dependent) Zipf law (and exponent). We have imagined two simple investment strategies, taking into account the linear trend of the biased signal or not, and have reported the time dependence of the returns.
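As a closing illustration of the Z1 rule and the return definition above (a hedged sketch: the unit position size replaces the paper's strength-parameter weighting, and the function names are ours), reusing the to_text translation from the earlier sketch:

```python
def z1_signals(text, m=5, tau=500):
    """Zipf-1 rule: at each day t >= tau, take the last (m-1) letters as the
    context and count, inside the trailing window of length tau, how often that
    context was followed by 'u' versus 'd'. Returns +1 (buy), -1 (sell) or 0."""
    signals = [0] * len(text)
    for t in range(tau, len(text)):
        window = text[t - tau:t]
        context = text[t - (m - 1):t]
        ups = downs = 0
        for i in range(len(window) - (m - 1)):
            if window[i:i + m - 1] == context:
                ups += window[i + m - 1] == "u"
                downs += window[i + m - 1] == "d"
        signals[t] = (ups > downs) - (ups < downs)
    return signals

def pnl_percent(prices, signals):
    """Mark-to-market profit (in % of the initial price) of holding a +/-1
    share position dictated by the signal for the following day's move;
    letter t of the text encodes the move from prices[t] to prices[t+1]."""
    bank = 0.0
    for t in range(len(prices) - 1):
        bank += signals[t] * (prices[t + 1] - prices[t])
    return 100.0 * bank / prices[0]

# Usage with the synthetic series from the earlier sketch:
# signals = z1_signals(to_text(prices)); print(pnl_percent(prices, signals))
```

A Z2 variant would subtract the local linear trend inside each window before translating the series into letters.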
Students' Perception in Using ProProfs Online Quizzes as an Assessment Tool in English Classroom

There have been several ICT platforms used in assessment, one of which is ProProfs. The use of ProProfs, however, has not been widely known compared to other platforms, such as Quizizz and Kahoot!. This research attempted to find out the students' perception towards the use of ProProfs in doing exercises in their Intermediate English class. This was qualitative research, which used questionnaires and interviews to collect the data. The subjects were 127 students of Politeknik Negeri Jember, majoring in Management of Agroindustry. The result showed that most students think that having quizzes using ProProfs was interesting and challenging, which makes ProProfs one of the assessment tools in class that is worth implementing. However, further research still needs to be done to explore the use of ProProfs at all levels of education.

Introduction

As technology has grown rapidly around us, it has slowly become an inseparable part of every field, including education. The use of information and communication technology, especially online platforms, in education has been widely acknowledged. Many teachers have used them in their classroom activity. The use of online platforms in education also varies. Several online platforms are used to help students improve their skills and boost their motivation in class, such as Instagram and Wattpad (Adiningtyas, 2020; Wahyudin & Sari, 2018). Furthermore, some online platforms are used in assessment activity, such as Kahoot! and Quizizz (Darmawan et al., 2020; Şad & Özer, 2019). Whatever the purpose, the use of online platforms in education has been chosen to help both teachers and students in teaching and learning activities.

In English language teaching, several online platforms have been integrated as media to support the teaching and learning activity. Teachers usually use them to teach the four language skills. The use of online platforms is expected to encourage students' motivation in learning. Muslimah and Ardi (2012) have also proven this in their research by successfully decreasing students' boredom and increasing students' understanding through the help of Teaching Template Quiz Maker. In addition, it may also be supported by the fact that students nowadays, as digital natives, prefer more technology usage as part of their learning module (Cilliers, 2017). Furthermore, during the Covid-19 pandemic, teachers mostly relied on the use of ICT and online platforms to do distance learning (Abduh, 2021; Fitri & Putro, 2021; Morrison & Sepulveda-Escobar, 2021; Triana & Nugroho, 2021). They were explored in order to ensure that students were able to grasp the learning materials well and to give them the best learning experience they could get during the pandemic situation. When the pandemic was over and distance learning was no longer carried out, the use of online platforms still continued to help both teachers and students in teaching and learning activities.

In addition to being a medium for teaching language skills, online platforms can also be used as assessment tools to evaluate students' skills. The integration of ICT in an assessment activity is expected to improve students' motivation in dealing with English quizzes and examinations. ProProfs is one of the online platforms which can be utilized as an assessment tool in English language teaching, besides Kahoot!
and Quizizz. ProProfs is an online tool designed to offer quizzes and training. It provides a lot of features which teachers can explore to meet their needs, such as Quiz Maker, Training Maker, Knowledge Base, Collaborate, Project, Brain Games, Flashcard, Polls, and others (Wijayati et al., 2021). Exams, assessments, polls, tests, opinion surveys, scored quizzes, public quizzes, customized quizzes, and other types of quizzes can all be made using the quiz options. There are thousands of ready-made quizzes available on the platform, and it is also the world's largest quiz maker (ProProfs, 2019). Initially, most of the available quizzes were for work purposes, but as educational use increased, the number of relevant quiz options grew as well. Furthermore, an important feature is that it intelligently feeds back the results with analytics, so that teachers can see exactly how a class, group, or individual student is performing based on their quiz responses. Since the platform is open-ended and encourages creativity, it meets the needs of many teachers.

During the Covid-19 pandemic, ProProfs was developed as an online assessment for junior high school students, and it had several advantages in its implementation, some of which were the variety of quizzes that could be given to the students, its practicality, and its ease of development (Mardiana et al., 2021). Also, ProProfs enables teachers to set a minimum score that the students need to achieve. If they successfully pass this score, ProProfs will give them an e-certificate which can be downloaded later from their email (Albab et al., 2021). This surely encourages the students to do better in the quiz. Considering those benefits, this article aimed to investigate the students' perception towards the use of ProProfs, since there was not much research on it yet. The use of ProProfs is also still not widely recognized compared to other online platforms, such as Kahoot! and Quizizz. Thus, the result was expected to give teachers an insight into utilizing ProProfs as an assessment tool in English language teaching.

Method

The research was qualitative research which employed questionnaires and face-to-face interviews to collect the data. Before the questionnaire and face-to-face interviews were given, quizzes using ProProfs about the material learned in the Intermediate English class were given twice to the students. The first one was at their lecturing class (all five classes attended a one-hour class at the same time), and the other was given at a practicum class (the five classes attended a two-hour class separately).

The questionnaire was given to 127 students of Politeknik Negeri Jember majoring in Management of Agroindustry who were in the Intermediate English class. This research involved 127 participants from five classes: 25 students from class A, 26 students from class B, 22 students from class C, 27 students from class D, and 27 students from class E.
The questionnaire was distributed and collected online through GoogleForm. There are seven questions in the questionnaire, which aimed to find the students' perception towards the use of ProProfs for the online quizzes. Five questions were presented in the form of a 4-point Likert scale, while two others were presented in the form of short answers. In the short answer questions, the students were asked to write down their opinions about the advantages and disadvantages of using ProProfs for online quizzes. To triangulate the data obtained from the questionnaire, 4 out of 20 students from each class were chosen randomly to be interviewed face-to-face. During the interview session, the students were asked to confirm and elaborate the answers they chose and wrote in the questionnaire.

Findings and Discussions

The data were taken using GoogleForm, where 127 respondents (consisting of 62.2% female students and 37.8% male students) participated. The respondents were the students of Politeknik Negeri Jember majoring in Agroindustry who took the Intermediate English class during their second semester. The results of the GoogleForm are presented below.

The first item of the questionnaire was released to identify whether the participants prefer English exercises using an online quiz tool rather than paper-and-pencil assessment. From the chart above, it can be seen that 55.1% or 70 students strongly agreed to having English exercises using an online quiz tool rather than paper-and-pencil assessment, while another 46 students (36.2%) agreed. On the other side, 7.9% or 10 students disagreed with having English exercises using an online quiz tool rather than paper-and-pencil assessment, and only one student (0.8%) strongly disagreed. This indicates that the use of online quizzes was in demand by the majority of respondents compared to paper-and-pencil assessment.

The second item of the questionnaire was given to identify whether the online quiz tool makes English exercises more interesting than paper-and-pencil assessment. According to the chart above, 43.3% or 55 students strongly agreed that having English exercises using online quizzes was more interesting than paper-and-pencil assessment, while another 62 students (48.8%) agreed. On the other side, 7.1% or 9 students disagreed that having English exercises using online quizzes was more interesting than paper-and-pencil assessment, and only one student (0.8%) strongly disagreed. This indicates that most of the respondents were more fascinated by online quizzes than paper-and-pencil assessment while having English exercises.

The third item of the questionnaire was released to identify whether the online quiz tool makes English exercises more challenging than paper-and-pencil assessment. The chart above shows that 36.2% or 46 students strongly agreed that having English exercises using online quizzes was more challenging than paper-and-pencil assessment, while another 65 students (51.2%) agreed. On the other hand, 9.4% or 12 students disagreed that having English exercises using online quizzes was more challenging than paper-and-pencil assessment, and four students (3.1%) strongly disagreed. This indicated that the use of online quizzes for English exercises was more challenging and exciting to most of the respondents than paper-and-pencil assessment.

The fourth item of the questionnaire was deployed to identify whether the online quiz tool makes English exercises less scary/less stressful. From the chart
above, it can be concluded that most participants believed that English exercises became less scary/less stressful with the online quiz tool. The details taken from the GoogleForm are as follows: 39.4% or 50 students strongly agreed, 46.5% or 59 students agreed, 11.8% or 15 students disagreed, and 2.4% or 3 students strongly disagreed. This finding strengthened the idea that implementing online quizzes for English exercises was worthy and beneficial.

The fifth item of the questionnaire was released to know the participants' feelings about being able to see the result of their English exercise right after the session was over. From the chart above, it can be stated that most participants were eager to see the outcome of their English exercise immediately after the session was over. There were 42.5% or 54 students who strongly agreed and 48% or 61 students who agreed with this statement. Meanwhile, those who disagreed and strongly disagreed shared the same number, 4.7% or 6 students each. This finding strengthened the idea that applying online quizzes through ProProfs was pleasant and likeable.

The sixth item of the questionnaire was stated to gather the perspective of participants about the advantages of English exercises using the online quiz tool. From the comments submitted by the respondents, the advantages of using ProProfs as an online quiz tool can be summarized as follows: it was not only fun, interesting, and challenging, but it also helped participants be more focused because it was less stressful and less complicated, saved paper and time, provided immediate feedback, offered enjoyable features, and encouraged students' motivation.

Meanwhile, the last question in the questionnaire, question number 7, asked about the disadvantages that the respondents experienced while working with ProProfs. From the compilation, the points submitted by the respondents can be summarized as an unstable and bad internet connection, the inability to revise their answers, and device problems.

After submitting their responses, 5 students from each class were randomly chosen for the interview session as interviewees. During this session, the researcher asked them to elaborate the responses they had submitted in the GoogleForm. From all the respondents chosen, it was revealed that it was their first time doing an online quiz using ProProfs. Previously, they had only used Quizizz among online quiz tools. Hence, ProProfs was a new thing for them.

There were 14 students who were excited to use ProProfs as their assessment tool. They argued that this platform was interesting and gave a fresh ambiance to the assessment activity.

"I prefer online quizzes to conventional ones. The conventional ones, such as paper and pencil tests, seemed flat and dull. Meanwhile, using this online quiz has created a new challenge for me. Also, it enabled us to know which question we answered correctly, so it will make it easier to reflect on our works." (Interview with N, one of the respondents from Class A).

Other respondents also stated that doing online quizzes using ProProfs was more practical and fun, since they did not need to write anything on paper. They also argued that this kind of assessment made them more confident and relaxed in answering the questions, as they did not feel the pressure of doing the assessment the way they had when they experienced assessment by conventional methods. Besides, ProProfs gave out a kind of certificate after they completed their quiz, and it made them feel that their effort was acknowledged.
Even though these 14 respondents chose online quizzes using ProProfs, they also mentioned several things that might be seen as weaknesses of this activity. Some of them argued that the internet connection was one of the obstacles they needed to face. When doing the online assessment, they needed to deal with a slow and unstable internet connection. Next, some also stated that they had problems related to the gadgets they used to do these online quizzes. Some of the gadgets seemed to take a long time to run this platform. Lastly, one student suggested that when executing this online quiz, the teacher needed to arrange the students' seats so as to minimize the potential for cheating among the students. Despite all the mentioned problems, they still thought that using ProProfs for an online quiz was interesting and challenging, as it gave them a new experience in the assessment phase.

However, 6 other respondents preferred conventional testing methods, such as a paper-and-pencil test, to an online quiz using ProProfs.

"I chose the conventional method because I am more familiar with it. For me, in online quizzes, I was still confused because the instructions were not that clear. Also, I thought it was very challenging for me because it felt like the real test. However, sometimes it made me nervous while doing the test." (Interview with G, one of the respondents from class E).

As that was their first experience doing an online quiz, some of these 6 students still did not know how to navigate it. Also, although the previous students said that the internet connection problem was not a big issue, these 6 students found that it was quite significant. Hence, they preferred the conventional method, as it required no internet connection and no gadgets.

From the results obtained from the questionnaire and interviews, it can be inferred that most students found that doing exercises in Intermediate English using ProProfs was interesting. They even pointed out that this online platform motivated them to give their best in the quiz in a less stressful way. It seems that the use of ProProfs as one of the online platforms made the assessment activity less intimidating and even improved their motivation. The Likert scales above showed that aspects such as preference, interest, challenge, comfort, and quiz features are the factors that made ProProfs, the online quiz that was previously introduced to the students, the choice of most of them. The findings were in line with what was found by Segaran and Hashim (2022), namely that the use of various online quiz tools was very effective in enhancing the learning activity in the English classroom. Furthermore, various online quiz platforms have proven their effectiveness. The use of Kahoot as an assessment tool, for example, has been recognized to make learning activity exciting (Widyaningrum, 2019). ProProfs, as one kind of online platform, is no different. When ProProfs was introduced to several teachers teaching German in Malang, they thought that the variety of games provided by ProProfs was an interesting element to implement with their students (Wijayati et al., 2021). Thus, the use of ProProfs as an online platform in learning activity is seen as beneficial and as having a positive impact on the effectiveness of teaching and learning activity.
However, there were still some problems that need to be acknowledged when utilizing this platform. An unstable internet connection was the most frequent problem when the students worked with this platform. The second was the device problem, as some devices sometimes took a very long time to process. These are actually common problems that teachers often encounter when using online platforms in their classes. During the implementation of online platforms in the pandemic era, network problems were among the challenges that the students faced (Aina & Ogegbo, 2021). Moreover, in their research, Şad & Özer (2019) found that a lack of internet connection and proper devices was a factor that demotivated the students in doing learning activities using Kahoot. It is no different today, as the internet connection and mobile devices appeared to be the most common problems that occurred during this research. Although most of the students agreed that the interesting features of ProProfs outweighed the problems that occurred, a few did think that these problems were substantial enough to discourage them from using ProProfs. Thus, teachers need to think about ways to minimize the chance of these problems occurring.

Conclusion

Commonly, the use of online quizzes is a fun variation. However, it must be balanced with supervision from the teacher and also the integrity of the learners. Implementation of ICT in small classes has made teaching and learning activities more effective and efficient. When used proportionally, the use of ProProfs can be a variation of assessment in addition to a number of conventional methods applied in the classroom.

Through questionnaires and interviews, it was found that the use of online quizzes such as ProProfs was well accepted by most participants. Most of them agreed that the use of ProProfs in doing English exercises made the activity itself less scary and stressful. They also thought using ProProfs made their assessment activity more interesting and challenging. This was definitely a good baseline for ProProfs to be used widely in teaching-learning activities in the classroom, especially to collect students' scores more rapidly and in a fun way. ProProfs surely can add variation to the several online platforms which can be used as assessment tools in class. However, further investigation still needs to be done to bring out the maximum potential of ProProfs and how to use it with students at all levels, not only at the college level. There is also a need for several attempts to minimize the problems which may occur during the implementation.

Figure 1. Result of Question Number 1
Figure 2. Result of Question Number 2
Figure 3. Result of Question Number 3
Figure 4. Result of Question Number 4
Figure 5. Result of Question Number 5
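The Likert tallies reported in the Findings can be cross-checked arithmetically against the 127 respondents; the counts below are transcribed from the text above, and the assertion verifies that each question accounts for every participant:

```python
# Counts transcribed from the questionnaire results above (N = 127).
# Tuple order: strongly agree, agree, disagree, strongly disagree.
results = {
    "Q1": (70, 46, 10, 1),
    "Q2": (55, 62, 9, 1),
    "Q3": (46, 65, 12, 4),
    "Q4": (50, 59, 15, 3),
    "Q5": (54, 61, 6, 6),
}
N = 127
for q, counts in results.items():
    assert sum(counts) == N  # every respondent is accounted for
    pct = [round(100 * c / N, 1) for c in counts]
    print(q, counts, pct)  # e.g. Q1 (70, 46, 10, 1) -> [55.1, 36.2, 7.9, 0.8]
```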
Small braids having a big Ultra Summit Set

In [J.Birman, V.Gebhardt, J.Gonzalez-Meneses, Conjugacy in Garside groups I: cyclings, powers and rigidity] the authors asked (open question 2): is the size of the USS of a rigid pseudo-Anosov braid bounded above by some polynomial in the number of strands and the braid length? We answer this question in the negative.

Introduction.

The conjugacy problem in the braid group was solved by F.Garside in 1969. The problem separates into two parts: to decide for two braids whether they are conjugate, and to find by what braid they are conjugate. Advances in the solutions of these two problems are equal, so we do not distinguish them. In papers [2,3,4,5,6,7,9] the algorithm was improved, but a polynomial algorithm is still unknown. Garside's algorithm calculates the Summit Set of a braid. We define it in the Definitions part below. In short, the Summit Set is a finite canonical subset of the conjugacy class of the braid. Later W.Thurston [6] improved the algorithm by introducing a left normal form of the braid, which can be calculated in time polynomial in the number of strands and the braid length. So W.Thurston obtained a polynomial algorithm for the word problem in the braid group. In 1994 E.El-Rifai and H.Morton [5] improved the algorithm by replacing the Summit Set by its subset, the Super Summit Set. And it was shown how one can find at least one element of the Super Summit Set in polynomial time. In 2003 J.González-Meneses and N.Franco [7] showed how one can find the rest of the Super Summit Set in time bounded above by the product of the size of the Super Summit Set and a polynomial in the number of strands and the braid length. However, the size of the Super Summit Set is not bounded above by a polynomial. In 2003 V.Gebhardt [9] replaced the Super Summit Set by its subset, the Ultra Summit Set. The time needed to find at least one braid in the Ultra Summit Set is not known, but one can find the rest of the Ultra Summit Set in time bounded above by the product of the size of the Ultra Summit Set and a polynomial in the number of strands and the braid length. However, the size of the Ultra Summit Set is not bounded above by a polynomial. J.Birman, V.Gebhardt and J.González-Meneses [2] suggested reducing the problem to so-called rigid pseudo-Anosov braids. They asked (open question 2): is the size of the USS of a rigid pseudo-Anosov braid bounded above by some polynomial in the number of strands and the braid length? In this paper we answer this question in the negative.

Theorem 1. The braid α_n := σ_1 σ_2^{-1} σ_3 σ_4^{-1} ... σ_{n-1}^{(-1)^n} on n strands is rigid and the size of its Ultra Summit Set is at least 2^{[(n-2)/2]}.

Theorem 3. The braid α_n is pseudo-Anosov for odd n ≥ 3.

Note. The statement of Theorem 3 holds true for arbitrary n. One can construct an invariant train track (for the notion of train track see [1,10]) of the braid α_n. But for simplicity we present here a direct proof for odd n.

This counter-example was discovered experimentally by I.A.Dynnikov with the help of the program of J.González-Meneses, which implements the last version of the algorithm introduced in [9]. According to the calculations, the size of the Ultra Summit Set of the braid α_n is equal to ... if n = 3, 4, ..., 11.
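For concreteness, the braid word of α_n and the lower bound of Theorem 1 can be generated mechanically; the following sketch uses a signed-integer encoding of generators (+i for σ_i and -i for σ_i^{-1}), which is our illustrative convention and not notation from the paper:

```python
def alpha_word(n):
    """Word of alpha_n = sigma_1 sigma_2^{-1} sigma_3 sigma_4^{-1} ...,
    encoded as signed generator indices (+i for sigma_i, -i for its inverse);
    the exponent of sigma_i alternates with the parity of i."""
    return [i if i % 2 == 1 else -i for i in range(1, n)]

def uss_lower_bound(n):
    """Lower bound 2^[(n-2)/2] on the size of the Ultra Summit Set (Theorem 1)."""
    return 2 ** ((n - 2) // 2)

for n in range(3, 12):
    print(n, alpha_word(n), uss_lower_bound(n))
```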
J.Birman, K.H.Ko, S.J.Lee [4] introduced a new presentation of the braid group with so-called band generators. Using it, they obtained a new solution to the word and conjugacy problems which retains most of the desirable features of the solution considered above (it uses the notions of left normal form and Ultra Summit Set) and at the same time makes certain computational improvements possible. However, the Birman-Ko-Lee version of open question 2 in [2] has the same answer.

Theorem 2. The Ultra Summit Set in the Birman-Ko-Lee presentation of the braid α_n contains at least 2^{(n-1)/2} rigid braids for odd n.

Due to computations of my program, the size of the USS of α_n is equal to ...

Definitions.

The following definitions refer to Theorem 1.

Definition. A braid with positive crossings is a permutation braid if every pair of its strands crosses at most once. A permutation braid is uniquely defined by its permutation of strands, and the number of permutation braids equals n!.

Definition. Let the Garside element be the permutation braid Δ with p(i) = n+1-i as its permutation of strands. For every permutation braid one can find a permutation braid such that their product equals the Garside element.

Theorem-definition. [6] Any braid has a unique representative in left normal form Δ^k · b_1 · b_2 · ... · b_m, where b_i is a permutation braid not equal to Δ and, for i = 1, 2, ..., m-1 and j = 1, 2, ..., n-1, if the jth and the (j+1)th strands of b_{i+1} cross, then the two strands of b_i which end in the jth and the (j+1)th points also cross.

Definition. [5] Let the Super Summit Set of a braid b be the set of braids which are conjugate to b and have maximal power of Δ and minimal number of permutation braids in their left normal form.

Definition. For a braid b = Δ^k · b_1 · ... · b_m in left normal form, set c(b) = Δ^k · b_2 · ... · b_m · τ^{-k}(b_1) and d(b) = Δ^k · τ^k(b_m) · b_1 · ... · b_{m-1}, where τ denotes conjugation by Δ. We call c(b) and d(b) the cycling of b and the decycling of b respectively.

Theorem. [5] Let l be the word length of a braid b. Then a sequence of at most l·n^2 cyclings and decyclings applied to b produces a representative of the Super Summit Set.

Definition. [9] Let the Ultra Summit Set (U_A) be the subset of the Super Summit Set which consists of braids b such that c^d(b) = b for some natural number d.

Definition. If c(b) in the definition of the cycling is already in left normal form, then we call b rigid.

The following definitions refer to Theorem 2. Here the main reference is [4].

Definition. The product of pairwise parallel descending cycles is called a canonical factor.

Theorem-definition. Any braid has a unique representation in left normal form δ^k · b_1 · b_2 · ... · b_m, where b_i is a canonical factor not equal to δ and, for i = 1, 2, ..., m-1 and every generator (j k), the braids b_i · (j k) and (j k)^{-1} · b_{i+1} are not simultaneously canonical factors.

Definitions of cycling, decycling, Super Summit Set and Ultra Summit Set are similar to the Artin-generators case. Denote the Ultra Summit Set in the Birman-Ko-Lee presentation by U_BKL. For definitions of periodic, reducible and pseudo-Anosov braids see [1,2,10].

Main result.

Theorem 1. The braid α_{n+1} on n+1 strands is rigid and the size of its U_A is at least 2^{[(n-1)/2]}.

Permutations of the generators in the word of the braid α_{n+1} produce 2^{n-1} braids (Proposition 2) which are conjugate to α_{n+1} (Proposition 1). We will prove that 2^{[(n-1)/2]} braids among them are rigid. Note that the cycling of a rigid braid is also rigid. So by the theorem [5] mentioned above, a rigid braid belongs to its U_A. So we obtain 2^{[(n-1)/2]} elements of U_A.

Proposition 1. A braid produced by a permutation of the generators in the word of the braid α_{n+1} is conjugate to α_{n+1}.
Proof. Assume that a generator σ_i is on the right of the generator σ_1 in the braid word. Transpose them if i ≠ 2. Do this operation while it is possible. If σ_1 is at the right end of the word, then conjugate the braid by σ_1 and obtain a braid with σ_1 at the left end. Then repeat the mentioned operation until σ_1 meets σ_2^{-1}. Then similarly move σ_1 and σ_2^{-1} together to the right until σ_2^{-1} meets σ_3. So at most n^2 transpositions and conjugations produce α_{n+1}.

Proposition 2. The number of the obtained braids equals 2^{n-1}.

Proof. The relative order of generators with neighbouring indices in the braid word (i.e., whether the generator with index i+1 stands to the left of the generator with index i) determines the braid, due to the commutativity relations. We will prove that this order is determined by the braid.

Definition. We associate to the braid obtained by permuting the generators in the word of α_{n+1} a sequence n_1, m_1, n_2, m_2, ..., n_r, m_r of natural numbers such that σ^{±1}... The (n_1+m_1+...+n_k+m_k+1)th strand ends at the (n_1+m_1+...+n_k+m_k+n_{k+1}+2)th endpoint, and the (n_1+m_1+...+n_k+1)th strand ends at the (n_1+m_1+...+n_k+m_k+2)th endpoint. The number of a strand decreases by one if it is between n_1+m_1+...+n_k+m_k+1 and n_1+m_1+...+n_k+m_k+n_{k+1}+1. All other strands increase their number by one. So the order of generators with neighbouring indices is determined by the sequence n_1, m_1, ..., n_r, m_r, which is determined by our braid permutation. So we have 2^{n-1} braids.

Corollary from the proof. A braid is determined by its sequence n_1, m_1, n_2, m_2, ..., n_r, m_r.

According to computations of J.González-Meneses' program, all these braids are rigid (and therefore belong to the U_A of α_{n+1}) for 2 ≤ n ≤ 5. For simplicity we will consider only those braids among them such that if a generator is on the left of both generators with neighbouring indices, then it has an even index, and if a generator is on the right of both generators with neighbouring indices, then it has an odd index. In the notation of the previous paragraph we require that all n_i are even and all m_i except the last are odd. Also we require that m_r ≠ 0.

Proposition 3. The number of the braids we consider is at least 2^{[(n-1)/2]}.

Proof. Without loss of generality we assume that n is odd. By the corollary from Proposition 2, the number of the braids we consider equals the number of decompositions of n-1 into the sum of 2r numbers n_1, n_2, ..., n_r, m_1, m_2, ..., m_r over all r, where the n_i are even and the m_i except the last are odd. Assume that m_r is odd. Then the quantity of decompositions we have equals the quantity of decompositions of n-1+r into the sum of 2r even numbers over all r, which equals the quantity of decompositions of (n-1+r)/2 into the sum of 2r numbers over all r, which gives the claimed bound.

First assume that r = 1 and n is odd, i.e. we have the braid in the middle of the picture. Note that the crossings of the first strand with the strands of numbers 3, 5, 7, ..., n_1+1 are negative. We can get rid of this: multiply our braid from the left by the permutation braid having (3 5 7 ... n_1+1 1 2 4 ... n_1 n_1+3 n_1+5 ... n_1+m_1+1 n_1+m_1+2 n_1+2 n_1+4 ... n_1+m_1) as its permutation of strands.
In the obtained product the first n_1/2 strands cross the other strands from above, which means that we can move them above the other strands to the top, so that the negative crossings between the first n_1+1 strands disappear and any two of these strands cross at most once (in the picture, the right braid is a product of the braids on the left). Similarly, move below to the bottom the strands of numbers from n_1+2+m_1/2 to n. We obtain a permutation braid having (2 4 ... n_1 n_1+2 1 3 5 ... n_1+1 n_1+4 n_1+6 ... n_1+m_1+2 n_1+1 n_1+3 n_1+5 ... n_1+m_1+1) as its permutation of strands.

Now consider the general case. First reduce our braid by the generators which are on the left of both generators with neighbouring indices: multiply our braid from the left by the product σ_{1+n_1+m_1} σ_{1+n_1+m_1+n_2+m_2} ... σ_{1+n_1+m_1+...+n_{r-1}+m_{r-1}}. If n is odd, also multiply our braid by σ_n from the left. By our assumption these generators have even indices, therefore they will cancel. Denote that product of generators (including σ_n if necessary) by c. After the multiplication our braid divides into a product of braids. Each braid among them has all strands standing still except the strands whose numbers are between 1+n_1+m_1+...+n_k+m_k and n_1+m_1+...+n_{k+1}+m_{k+1}. Note that the case of such a braid was analyzed in the previous paragraph, providing us a permutation braid to multiply from the left, which means that we can multiply our braid from the left by the product d of the corresponding permutation braids and obtain a permutation braid b. Denote the permutation braid d·c by a and our initial braid by β. So we have β = a^{-1}b = Δ^{-1}a^*b, where a^* = Δa^{-1}.

Now we prove that Δ^{-1}a^*b is a left normal form. In the braid b the ith and the (i+1)th strands cross for i = n_1/2+1, n_1+m_1/2+2, n_1+m_1+3+n_2/2, n_1+m_1+4+n_2+m_2/2, ..., n_1+m_1+...+n_{r-1}+m_{r-1}+3+n_r/2, n_1+m_1+...+n_{r-1}+m_{r-1}+4+n_r+m_r/2. But in the braid a, for such values of i, the ith and the (i+1)th strands do not cross. Therefore the strands ending at the ith and the (i+1)th endpoints cross in the braid a^*. So we have a left normal form.

Now we prove that the braid β is rigid. It is sufficient to check that Δ^{-1}b(Δ^{-1}a^*Δ) is a left normal form. The strands ending at the ith and the (i+1)th endpoints do not cross for odd i in the braid a. Note that a(Δ^{-1}a^*Δ) = Δ. Therefore the ith and the (i+1)th strands cross in the braid (Δ^{-1}a^*Δ) for odd i. But in the braid b the strands ending at the ith and the (i+1)th endpoints cross for odd i. So β is rigid.

Theorem 2. The Ultra Summit Set in the Birman-Ko-Lee presentation of the braid α_n contains at least 2^{(n-1)/2} rigid braids for odd n.

Proof. The braid (δ^{-1}sδ)^{-1}β(δ^{-1}sδ) is conjugate to the braid β. So it is sufficient to prove for this braid the conjugacy to α_n.

Step 1. First we prove that tδ^{-1}sδ is a canonical factor. Let S be a subset of the set of band generators (n-i i), let T be its complement, denote by s and t the products of the generators belonging to S and T respectively, and set a_i = δ^{-1}(n-i i)δ. Consider the braids (n-i i) belonging to S. Then the product of the corresponding a_i equals δ^{-1}sδ. Analyse the product (tδ^{-1}sδ). First separate it into commutating multipliers. Assume that ((n+1)/2 (n-1)/2) commutates with the rest of the product, so separate it. If ((n+1)/2+1 (n-1)/2-1) ∈ S and ((n+1)/2+2 (n-1)/2-2) ∈ S, then ((n+1)/2 (n-1)/2)·((n+1)/2+1 (n-1)/2-1) commutates with the rest of the product, so separate it. Then go further in the same way. Now let us formalize our observation. We say that a_i and a_{i+1} belong to the same multiplier if (n-i i) and (n-i-1 i+1) belong to the same subset (S or T).
Indeed, the product splits into commuting factors in this way. Now compute each factor. It equals a_i · a_{i+1} · ... · a_j, and its form depends on whether (n−i i) and (n−j j) belong to S, so we have four cases. Consider two of them; the other cases are similar. Assume that (n−i i) ∈ T and (n−j j) ∈ T. Then, using the two formulas ((r_1 r_2 ... r_p)(r_q r) = (r_1 r_2 ... r_q r r_{q+1} ... r_p) if r_q < r < r_{q+1}, and (r r_q)(r_1 r_2 ... r_p) = (r_1 r_2 ... r_{q−1} r r_q ... r_p) if r_{q−1} < r < r_q) and induction, we obtain that the factor under consideration equals (n−i n−i−2 ... n−j j j+2 ... i). And if (n−i i) ∈ S and (n−j j) ∈ T, then it equals (n−i+1 n−i−1 ... n−j j j+2 ... i+1). In both cases we obtain a descending cycle. Note that the numbers in the cycle form two arithmetic progressions, and that the descending cycles in our product are parallel; so the statement of the first step is proved.

We say that a canonical factor c is divisible by a generator (i j) if (j i)·c is also a canonical factor. Corollary 3.7 of [4]: a canonical factor is divisible by a generator (i j) if and only if one of the descending cycles in its decomposition includes both i and j. By the same corollary, (j i)·c is a canonical factor if and only if c·(j i) is a canonical factor. For every canonical factor a, both δ^{−1}a and aδ^{−1} are canonical factors [4]. Therefore we have: if ab = δ, where a and b are canonical factors, then a·(i j) is a canonical factor if and only if b is divisible by (i j). Recall that ts = (n−1 1)(n−2 2) ... . A descending cycle of tδ^{−1}sδ consists of two arithmetic progressions: the numbers in the first progression are at least (n+1)/2 and those in the second at most (n+1)/2. Therefore, if n−i and i+1 belong to some descending cycle, then these numbers belong to different progressions. But recall that the sum of two numbers from different progressions has the same parity as n. So the statement of the second step is proved.

We now prove that if δ^{−1}tsδ is divisible by (i j), then (i j)·tδ^{−1}sδ is not a canonical factor. By the decomposition of δ^{−1}tsδ into descending cycles and Corollary 3.7 of [4], if δ^{−1}tsδ is divisible by (i j) then (i j) = (n−i i). So denote r := (n−i i)·tδ^{−1}sδ. Assume that (n−i i) ∈ T. Then the braid tδ^{−1}sδ is divisible by (n−i i), and so (n−i i)^{−2}·r is also a canonical factor. Therefore r is not a canonical factor, by Lemma 3.3 in [4]. Assume that (n−i i) ∈ S. Then ((n−i i)(n−i+1 i+1))^{−1}·r is also a canonical factor, so r is not a canonical factor by the same lemma.

Note that the cycling of a rigid braid is again rigid. So, by the theorem of [5] mentioned in the Definitions section, a rigid braid belongs to its U_BKL. Now we can finish the proof of Proposition 2: in Steps 1 and 2 we found a left normal form of the braid (δ^{−1}sδ)^{−1}β(δ^{−1}sδ), and in Steps 3 and 4 we proved that it is rigid and belongs to U_BKL. To finish the proof of Theorem 2, count the braids (δ^{−1}sδ)^{−1}β(δ^{−1}sδ). They are all distinct, since they have distinct left normal forms, and their number equals the number of possible subsets S, i.e. 2^((n−1)/2). Therefore the size of U_BKL is at least 2^((n−1)/2).

Theorem 3. The braid α_n is pseudo-Anosov for odd n ≥ 3.

Proof. Assume that α_n is a periodic braid, i.e. some power α_n^k belongs to the center of B_n, i.e. it equals some power of the braid (n n−1 n−2 ... 1)^n [Chow, 1948].
But the difference between the number of positive generators and the number of negative generators in a braid word is invariant under the braid group relations. For the braid α_n^k this difference equals zero, while for (n n−1 n−2 ... 1)^n it is a positive number. Therefore the braid α_n^k is trivial. But this is impossible, because the braid group is torsion-free. Now assume that α_n is reducible, i.e. α_n is obtained by nontrivially substituting braids for the strands of some nontrivial braid. Denote by γ the braid whose strands are substituted, and assume that the ith strand is substituted by a braid γ_i. Consider first the special case n = 3. Note that γ has two strands. No pair of strands in α_3^3 is linked, therefore γ^3 is trivial. Therefore each γ_i is trivial, which means that α_3^3 is trivial. Contradiction. Now consider the general case. Let k, l, m be natural numbers with 1 ≤ k < l < m ≤ n such that l−k and m−l are odd. Delete from α_n^n all strands except the kth, lth and mth; note that we obtain α_3^3. Therefore the kth, lth and mth strands belong either to the same braid γ_i or to three distinct braids γ_i. Assume that the ith and the jth strands belong to the same braid γ_s and that j−i is odd. If i > 1, consider the set i−1, i, j and obtain that the (i−1)th strand belongs to γ_s. If j < n, consider the set i, j, j+1 and obtain that the (j+1)th strand belongs to γ_s. Continuing similarly, we obtain that all strands belong to γ_s. That is a contradiction, and so Theorem 3 is proved.
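The counting in Proposition 2 rests on the observation that, for a word containing each of σ_1, ..., σ_n exactly once, only the relative order of generators with neighbouring indices matters, since σ_i and σ_j commute whenever |i − j| ≥ 2. The following quick brute-force check (a sketch, not part of the original argument) confirms that the number of distinct order patterns is exactly 2^(n−1) for small n; it only counts the order signatures and says nothing about rigidity or ultra summit sets, which the computations with González-Meneses' program address.

```python
from itertools import permutations

def commutation_classes(n):
    # Words containing each generator index 1..n exactly once; sigma_i and
    # sigma_j commute for |i - j| >= 2, so (as in Proposition 2) a word is
    # pinned down by which of sigma_i, sigma_{i+1} comes first for each i.
    signatures = set()
    for word in permutations(range(1, n + 1)):
        pos = {g: k for k, g in enumerate(word)}
        signatures.add(tuple(pos[i] < pos[i + 1] for i in range(1, n)))
    return len(signatures)

for n in range(2, 9):
    assert commutation_classes(n) == 2 ** (n - 1)
print("signature counts match 2**(n-1) for n = 2..8")
```

Every orientation of the constraints "σ_i before σ_{i+1}" is realizable because any orientation of a path graph is acyclic, which is why all 2^(n−1) signatures occur.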
Quantification of local dislocation density using 3D synchrotron monochromatic X-ray microdiffraction ABSTRACT A novel approach evolved from the classical Wilkens' method has been developed to quantify the local dislocation density based on X-ray radial profiles obtained by 3D synchrotron monochromatic X-ray microdiffraction. A deformed Ni-based superalloy consisting of γ matrix and γ′ precipitates has been employed as model material. The quantitative results show that the local dislocation densities vary with depth along the incident X-ray beam in both phases and are consistently higher in the γ matrix than in the γ′ precipitates. The results from X-ray microdiffraction are in general agreement with the transmission electron microscopy observations. GRAPHICAL ABSTRACT IMPACT STATEMENT A new approach based on 3D synchrotron microdiffraction, showing broad application potential in heterogeneous materials, was developed and applied to quantify local dislocation densities in a fatigued two-phase Ni-based superalloy. Introduction Dislocations are present in all crystalline materials. A quantitative description of the dislocation content, including type, density and spatial distribution, is essential for understanding their origin, dynamics and contribution to the physical and mechanical properties of materials [1,2]. Transmission electron microscopy (TEM) is one of the most frequently employed characterization techniques for such studies [3]. However, when the dislocation density exceeds 10^14 m^-2, it becomes challenging to count the dislocations precisely. Features of their spatial arrangement, such as the mutual screening of their strain fields [4], are hard to obtain in conventional TEM and are only assessable in high-resolution mode [5]. X-ray (and neutron) diffraction is able to provide useful information about the characteristics of dislocations. Based on X-ray diffraction radial line profiles, Wilkens described the screening of the strain fields of dislocations by introducing two parameters, namely the effective outer cut-off radius R_e and the dislocation screening factor M (M = R_e√ρ) [6,7]. A small or large M value implies a strong or weak screening of the strain fields of dislocations, respectively. His method has been applied successfully to quantify the dislocation density in a number of material systems [8-10]. The development of synchrotron sources enables measuring diffraction profiles with high accuracy. With monochromatic synchrotron X-rays and the high-resolution reciprocal-space mapping technique, for example, dislocation structures with dislocation-free regions separated by dislocation walls can be discerned in individual crystalline grains, and the local dislocation density within individual subgrains can be revealed [11,12]. However, the spatial distribution of dislocations and their densities in real space cannot be obtained using this technique, so the effects of local microstructural heterogeneities on the plastic deformation behavior are difficult to study. Another synchrotron technique, 3D Laue microdiffraction, utilizes focused polychromatic X-rays and a differential aperture to resolve the diffraction signals from local micrometer-sized voxels [13,14]. Employing this technique, the dislocation content has been linked with the shape of the Laue peaks [15]. The method has been successfully used to identify the slip systems in deformed materials [16,17] and to quantify the local geometrically necessary dislocation density [18-20].
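As a quick numeric illustration of the screening factor M = R_e√ρ defined above, one can plug in the average γ-matrix values reported later in this article (R_e ≈ 35 nm, ρ ≈ 12.1 × 10^14 m^-2). The snippet below is purely illustrative and uses only those two quoted numbers.

```python
import math

# Illustrative evaluation of Wilkens' screening factor M = R_e * sqrt(rho),
# using the average gamma-matrix values quoted in the Results section.
R_e = 35e-9        # effective outer cut-off radius, m
rho = 12.1e14      # dislocation density, m^-2
M = R_e * math.sqrt(rho)
print(f"M = {M:.2f}")  # ~1.2, i.e. fairly strong mutual screening of strain fields
```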
Most properties, such as strength, are controlled by all dislocations, including the redundant dislocations (those without geometrical consequences). Determination of the total dislocation density by polychromatic X-rays is, however, difficult [21]. Alternatively, the 3D intensity distribution of a diffraction peak in reciprocal space, from a certain volume within a specimen, can be obtained by tuning the monochromatic X-ray energy using a specially designed monochromator [14,22]. From the 3D intensity distribution, radial line profiles can be obtained. So far, such radial line profiles have not been utilized thoroughly for investigations of the dislocation content. The present study aims to accomplish such a quantitative characterization by extending the classical Wilkens' method to radial profiles obtained by microdiffraction with tuned monochromatic X-ray energies. Material and methods A deformed, directionally solidified two-phase Ni-based superalloy, DZ17G, was used as model material to demonstrate the broad applicability of the method. During solidification, dendrites grow with one of their 〈001〉 directions along the temperature gradient. Perpendicular to the growth direction the dendrites are ∼200 μm in width. Boundaries between two adjacent dendrites may be either low- or high-angle boundaries, depending on their mutual misorientation. The resulting grain width (defined by boundary misorientation angles of 15° and above) ranges from 200 μm to 2 mm. This alloy has a structure consisting of coherently oriented cuboidal precipitates of the ordered L1_2 γ′-Ni_3(Al,Ti) phase in a matrix of the face-centered cubic (FCC) γ phase [23,24]. The cuboidal γ′ precipitates have an average size of ∼360 nm, with their edges aligned with the three crystallographic 〈001〉 directions. The cuboids are distributed uniformly in the γ matrix, which appears as interconnected 3D channels separating the cuboids. The average width of the γ channels is ∼35 nm. The volume fraction of the γ′ precipitates is about 70%. An as-cast sample was vibration fatigued using a D-300 vibrating machine of Suzhou SuShi Testing Group Co., Ltd. in China, following the standard of the Ministry of the Aviation Industry of the P.R.C. (HB5277-84). More specimen details are given in section A of the supplementary material. Synchrotron microdiffraction was conducted at beamline 34-ID-E of the Advanced Photon Source in the USA. A polychromatic X-ray beam was focused to a size of ∼0.3 μm using non-dispersive Kirkpatrick-Baez mirrors. The sample was mounted on a holder at an inclination of 45° to the incoming beam. Laue diffraction patterns were recorded using a panel detector mounted in a 90° reflection geometry 513.2 mm above the specimen. The detector position with respect to the incident beam was calibrated using a strain-free silicon single crystal. A sketch of the experimental set-up and an indexed Laue diffraction pattern from the investigated region can be found in section A of the supplementary material. A monochromatic beam was used for mapping the 3D intensity distribution around the 800 diffraction spot in reciprocal space by scanning the X-ray energy in steps of 5 eV from 21.348 keV to 21.688 keV. For each energy step, a Pt knife-edge scanning along the sample surface at a distance of 250 μm was used as a differential aperture for resolving the diffraction signal from different depths illuminated by the microbeam, with a resolution of 5 μm.
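For orientation, each energy step of such a monochromatic scan samples one point on the radial axis in reciprocal space via Q = 4π sin θ/λ (introduced below), with λ[Å] ≈ 12.398/E[keV]. The sketch below assumes a Bragg angle of 45°, loosely motivated by the 90° reflection geometry; that angle is an assumption for illustration, not a calibrated value from the experiment.

```python
import numpy as np

# Map the scanned photon energies to radial positions Q = 4*pi*sin(theta)/lambda.
# theta = 45 degrees is an assumed placeholder, not a measured Bragg angle.
hc = 12.39842                                        # keV * Angstrom
energies = np.arange(21.348, 21.688 + 1e-6, 0.005)   # 5 eV steps, in keV
theta = np.deg2rad(45.0)
Q = 4 * np.pi * np.sin(theta) / (hc / energies)      # 1/Angstrom
print(f"Q range: {Q.min():.3f} - {Q.max():.3f} 1/Angstrom")
```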
The energy and depth step sizes were chosen to balance the accuracy of the resulting radial line profiles against the measurement time. Four examples of depth-resolved diffraction patterns obtained at different X-ray energies from a small volume along the beam, at a depth of 5-10 μm below the sample surface, are shown in Figure 1(a). Based on the diffraction geometry and the energy of the X-rays, the diffraction vector Q_i for each pixel in the diffraction patterns was determined. The intensity distribution as a function of the length of the diffraction vector, Q = 4π sin θ/λ (λ is the X-ray wavelength and θ is the Bragg angle), was determined for each energy and each depth. By collecting the intensity distributions of all individual energy steps for each voxel, an X-ray radial line profile from a local volume at each depth was determined. Results and discussion An example of the radial line profile for a depth of 5-10 μm is shown as black circles in Figure 1(b). It is seen that the radial profile is asymmetrical, with a longer tail at lower Q than at higher Q. This is mainly due to the presence of the γ and γ′ phases, which have slightly different lattice constants. To reveal the diffraction signal originating from each phase, the radial line profile is separated into two subprofiles using a mirroring method described in section B of the supplementary material. For this separation, the ratio of the integrated intensities of the γ and γ′ subprofiles at each depth is assumed equal to the ratio of the macroscopic volume fractions of the γ and γ′ phases, i.e. 30:70. Considering that the size of the probed volume is much larger than the sizes of the γ channels and γ′ cuboids, this is a reasonable assumption. Also, section E of the supplementary material shows that the influence of a different integrated intensity ratio is insignificant. An example of separated subprofiles is shown in Figure 1(b), where the orange and olive curves are from the γ and γ′ phases, respectively. The full width at half maximum (FWHM) of the separated radial subprofiles, δ_E, for the local volumes at different depths was determined and is shown in Figure 2(a). It is seen that δ_E varies with depth for both phases, and that the values for the γ′ phase are in general smaller than those for the γ phase. The lattice constant a determined from the maximum intensity of each subprofile is smaller for the γ′ precipitates than for the γ phase (Figure 2(b)), leading to a negative γ′/γ lattice misfit, defined as 2(a_γ′ − a_γ)/(a_γ′ + a_γ), in the range between −0.05% and −0.11% (Figure 2(c)). Based on the separated line profiles, the dislocation screening factor M and the dislocation density ρ are determined for each phase using Wilkens' method [6,7]. A brief summary of Wilkens' description of the X-ray line profile of restrictedly random distributed dislocations (with equal amounts of dislocations of opposite Burgers vector sign) is given here; more details can be found in section C of the supplementary material. Series of radial profiles normalized by the dislocation density were determined numerically for different screening parameters M* in the range 0.5-10, based on Wilkens' theory [6,7]. The asterisk indicates that the normalized profiles were calculated by taking into account restrictedly random distributions of solely screw dislocations.
The series of calculated radial profiles was then compared to the experimental radial profile by comparing the ratios between the full widths at several different intensity levels and the FWHM. The calculated profile with the best shape match to the experimental one (in terms of these ratios) is identified, and its apparent dislocation screening factor M* and its FWHM, denoted δ_M*, are determined. The apparent dislocation density ρ* is then given by Equation (1). The actual dislocation density ρ can be determined using Equation (2), which also takes edge dislocations into account, where C̄ is a geometrical contrast factor depending on the angles between Q, b (Burgers vector) and l (line vector) of the involved dislocations. The contrast factor is 0.1667 for screw dislocations (i.e. C*) and 0.1889 for edge dislocations [6]; the average value C̄ lies between these two values. The small difference between C* and C̄ results only in small differences between ρ* and ρ. For simplicity, and to avoid further assumptions about the involved dislocations, the apparent parameters ρ* and M* are used directly in this article. Last but not least, for the present analysis, peak broadening from size and instrumental effects has little influence on the dislocation densities (about 10%, see section D of the supplementary material) and is therefore omitted in the calculations. The apparent parameters M* and ρ* for each phase and each depth are displayed in Figure 2(d,e), respectively. The results show that the apparent dislocation screening factors M* for the γ channels are in general smaller than those for the γ′ precipitates (Figure 2(d)), which implies that δ_M* is also smaller for the γ phase (see Fig. S3b). According to Equation (1), a smaller δ_M* and a larger δ_E lead to a larger ρ* for the γ phase (Figure 2(e)). ρ* in general decreases from the surface to the interior for the γ phase, while no clear pattern is seen for the γ′ phase. The volume-weighted average of ρ* over the two phases at different depths is shown in Figure 2(f). The average ρ* is generally higher in the region close to the surface (depths below 20 μm) than in the deeper region. The average apparent dislocation density over the entire characterized volume is ∼12.1 × 10^14 m^-2 in the γ phase and ∼5.7 × 10^14 m^-2 in the γ′ phase. Considering the volume ratio between the two phases, this result suggests that the numbers of dislocations within the two phases are similar. The apparent effective outer cut-off radius R_e* = M*/√ρ* of the dislocations (not shown here) varies from 12 nm to 78 nm with an average of 35 nm in the γ phase, and from 53 nm to 141 nm with an average of 76 nm in the γ′ phase. To confirm the quantitative results, the microstructure of the sample was characterized using TEM (see section F of the supplementary material). An example TEM micrograph is shown in Figure 3. The dislocations are heterogeneously distributed between the two phases. Darker regions mark the majority of the γ channels, suggesting a high dislocation density there, while no dislocations are seen in some of the γ channels (see e.g. the one marked by the orange arrows in Figure 3). The average dislocation density in the γ phase is determined to be about 8.5 × 10^14 m^-2 (for details see section F). A large number of dislocations are also apparent in some γ′ cuboids, while no dislocations are seen in others. The majority of these dislocations are inclined approximately 45° to the cuboid axes and appear in pairs.
It remains uncertain whether these dislocations reside within the γ′ cuboids; they may actually lie in γ channels parallel to the TEM foil [25]. The dislocation density in the γ′ phase can therefore not be quantified from TEM, but it is obviously much smaller than in the γ phase. Nevertheless, the level of the average dislocation density in the γ phase determined from TEM is comparable with that determined from the radial line profiles, suggesting that the calculation based on the local microdiffraction data is reliable. The fact that the dislocations in the γ phase are confined to the channels, and that the γ′ precipitates are isolated by the channels, is likely the reason for the small values of R_e* between 12 and 141 nm determined from the microdiffraction data. To understand the variation seen in Figure 2, pseudo-white-beam diffraction patterns were obtained by adding all diffraction patterns collected through the series of energies for each depth. The result is shown in Figure 4. Peak splitting is seen in several of the patterns, indicating the presence of a subgrain boundary in the corresponding local volume, i.e. between depths of 15 and 35 μm. Hence, the region closer to the surface belongs to a different subgrain than the region probed at larger depth. The higher average ρ* (Figure 2(f)) for the subgrain close to the surface than for that deeper in the volume suggests that the crystallographic orientation of different subgrains plays a role in their plastic deformation and dislocation accumulation. (The presence of a subgrain boundary at a depth of 20-25 μm is likely also the reason for the small lattice misfit seen in Figure 2(c).) The misorientation angle between the two subgrains is at most around 0.9° (seen between the depths of 15-20 μm and 20-25 μm). The (geometrically necessary) dislocation density estimated using the Read-Shockley formula ρ = θ/(bx) [26], based on the misorientation angle θ between the subgrains and assuming a spacing x of 5 μm, is ∼2.5 × 10^13 m^-2. This density is about one order of magnitude less than the total dislocation density determined from the radial line profiles. The overwhelming contribution to the dislocation density in the sample thus comes from redundant dislocations (with opposing signs of their Burgers vectors), generated as a result of the fatigue deformation of the Ni-based superalloy [27,28]. Conclusions In the present study, intragranular dislocation densities in a vibration-fatigued Ni-based superalloy, DZ17G, have been quantified based on 3D synchrotron monochromatic microdiffraction data using the classical Wilkens' radial line profile method. In this manner, the redundant dislocation density is revealed locally on a small length scale of 5 μm, which has not been accessible before by X-ray diffraction: neither by conventional line profile analysis, which cannot capture heterogeneities on micrometer length scales, nor by local polychromatic X-ray investigations, which have revealed only the geometrically necessary dislocation content. Our results show that a large amount of redundant dislocations (on the order of 10^14 m^-2) is generated during the vibration fatigue test. The dislocation densities resolved in the γ channels of the superalloy are about twice those in the cuboidal γ′ precipitates. Local variations in the dislocation density are seen for the γ and γ′ phases at different depths along the incident X-ray beam. Significant differences are detected between two subgrains with a misorientation of ∼1°.
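The Read-Shockley estimate quoted above is easy to reproduce numerically. In the sketch below the Burgers vector magnitude is an assumed value for the FCC Ni matrix (b = a/√2 with a ≈ 0.352 nm), not a number taken from the article; it confirms the order of magnitude rather than the exact figure.

```python
import numpy as np

# Read-Shockley estimate rho = theta / (b * x) for the GND content of the
# subgrain boundary; b is an assumed Burgers vector magnitude for FCC Ni.
theta = np.deg2rad(0.9)          # subgrain misorientation, rad
b = 0.352e-9 / np.sqrt(2)        # assumed Burgers vector, m
x = 5e-6                         # spacing, m (here: the depth resolution)
rho_gnd = theta / (b * x)
print(f"rho_GND ~ {rho_gnd:.1e} m^-2")  # order 1e13, i.e. roughly an order of
                                        # magnitude below the total density
```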
With continued upgrades and developments of synchrotron sources, achieving orders of magnitude higher brilliance and smaller beam sizes, the approach introduced in this article will allow resolving intragranular dislocation structures with a spatial resolution better than 100 nm in a broad range of materials.
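To make the shape-matching step of the analysis concrete, the sketch below implements a generic width-ratio comparison between a measured radial profile and a library of calculated profiles indexed by M*. It mirrors the described procedure in spirit only: the profile library itself (from Wilkens' theory) is not reproduced here, and the intensity fractions used for the ratios are arbitrary illustrative choices.

```python
import numpy as np

def _crossing(level, i0, i1, q, inten):
    # q-position where the intensity crosses `level` between samples i0, i1
    t = (level - inten[i0]) / (inten[i1] - inten[i0])
    return q[i0] + t * (q[i1] - q[i0])

def full_width_at(q, inten, frac):
    # Full width of a single peak at `frac` of its maximum (frac=0.5 -> FWHM)
    level = frac * inten.max()
    above = np.where(inten >= level)[0]
    lo, hi = above[0], above[-1]
    q_lo = _crossing(level, lo - 1, lo, q, inten) if lo > 0 else q[0]
    q_hi = _crossing(level, hi, hi + 1, q, inten) if hi + 1 < len(q) else q[-1]
    return q_hi - q_lo

def best_matching_M(q, inten, library, fracs=(0.1, 0.25, 0.75)):
    # `library` maps M* -> (q_calc, I_calc); return the M* whose width ratios
    # (width at each frac divided by the FWHM) best match the measured profile.
    fwhm = full_width_at(q, inten, 0.5)
    target = np.array([full_width_at(q, inten, f) / fwhm for f in fracs])
    def mismatch(item):
        _, (qc, ic) = item
        fw = full_width_at(qc, ic, 0.5)
        ratios = np.array([full_width_at(qc, ic, f) / fw for f in fracs])
        return float(np.sum((ratios - target) ** 2))
    return min(library.items(), key=mismatch)[0]
```

Because only width ratios enter the comparison, the matching is insensitive to the overall intensity scale of the measured profile, which is the property that makes shape matching preferable to a direct least-squares fit here.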
Deflection of light by black holes and massless wormholes in massive gravity Weak gravitational lensing by black holes and wormholes in the context of massive gravity (Bebronne and Tinyakov, JHEP 0904:100, 2009) theory is studied. The particular solution examined is characterized by two integration constants, the mass M and an extra parameter S, namely the 'scalar charge'. These black holes reduce to the standard Schwarzschild black hole solution when the scalar charge is zero and the mass is positive. In addition, a parameter λ in the metric characterizes so-called 'hair'. The geodesic equations are used to examine the behavior of the deflection angle in four relevant cases of the parameter λ. Then, by introducing a simple coordinate transformation r^λ = S + v^2 into the black hole metric, we were able to find a massless wormhole solution of Einstein-Rosen (ER) type (Einstein and Rosen, Phys Rev 43:73, 1935) with scalar charge S. The programme is then repeated in terms of the Gauss-Bonnet theorem in the weak field limit, after a method is established to deal with the angle of deflection using different domains of integration depending on the parameter λ. In particular, we have found new analytical results corresponding to four special cases which generalize the well-known deflection angles reported in the literature. Finally, we have formulated the time delay problem in the spacetimes of the black holes and wormholes, respectively. Introduction At present, independent observations have confirmed that the universe is currently undergoing a phase of accelerated expansion. The observed late-time acceleration has been confirmed by data from type Ia supernovae [3,4], the anisotropy of the cosmic microwave background radiation [5] and the SDSS [6,7]. To describe the present expansion scenario, several models have been proposed so far. Two broad approaches have emerged to account for the observed accelerated expansion. The first is the dark energy proposal, with the assumption that nearly 70% of the total energy density in the universe may be in the form of a negative-pressure fluid, with an associated density parameter Ω_DE of the order of Ω_DE ∼ 0.70. One of the simplest candidates generating the dark energy is the cosmological constant, but its characterization has two well-known problems, i.e., fine-tuning and cosmic coincidence. Moreover, there is a severe discrepancy between the observed value of the cosmological constant and the value predicted by quantum cosmology. Ellis et al.
[8,9] proposed the use of the trace-free Einstein equations, which effectively treat the cosmological constant as a mere constant of integration. This idea was first proposed by Weinberg [10] and has also gone by the name of unimodular gravity [11-13]. Several alternative models have been suggested to address the cosmological constant problems, namely quintessence [14], the tachyon field [15], the phantom model [16] and k-essence [17], amongst others, which also predict cosmic expansion. A second approach is that of modified gravity, as an alternative to appealing to exotic matter distributions such as dark energy or dark matter. Generalizations of general relativity (GR) appear to avoid introducing matter with nonstandard physical properties and to solve the singularity problem. Modified or extended theories of gravity often require higher-dimensional spacetimes. This in itself is no shortcoming, as historically a number of higher-dimensional theories have appeared, such as Kaluza-Klein theory and the brane-world concept. It is debatable whether gravitational interactions are necessarily four-dimensional. Indeed, if string theory or its generalization M-theory for quantum effects is to be consistent with a theory of gravitation, then higher dimensions are necessary. The Einstein-Hilbert action may be modified to include non-linear geometric terms. One of these proposals is f(R) theory [18-22], a simple modification of the Einstein-Hilbert Lagrangian density by a general function of the Ricci scalar R. While f(R) theory does have the capacity to explain the late-time expansion of the universe, it does possess some difficulties, in that ghost terms are manifest in the presence of fourth-order derivatives. Of late, f(R) theory has been shown to be equivalent to the Brans-Dicke scalar-tensor theory. A more natural generalization of general relativity is the Lovelock [23,24] Lagrangian postulate, in which the action is composed of terms quadratic in the Ricci scalar, the Ricci tensor and the Riemann tensor. Remarkably, this higher-curvature theory generates only up to second-order derivative terms in the equations of motion and is accordingly ghost-free. To zeroth order the Lovelock polynomial is identical to the cosmological constant, to first order the Einstein action is regained, while to second order the action is known as the Gauss-Bonnet action. In this paper, we consider massive gravity as a modification of GR. Such theories include massive gravitons and have attracted much attention recently. In particular, the theory incorporates massive spin-2 particles, which carry five degrees of freedom. This theory has a rich phenomenology, such as explaining the accelerated expansion of the universe without invoking dark energy. Additionally, the resolution of the hierarchy problem and brane-world gravity scenarios also generate arguments for the existence of massive modes; hence massive gravity, as in Refs. [25,26], emerged. In this direction the pioneering work was done by Fierz and Pauli [27] in the context of the linear theory. It is worthwhile to mention that the original theory suffered from the vDVZ (van Dam-Veltman-Zakharov) discontinuity. Later, Vainshtein introduced a well-known mechanism [30-32] to resolve the long-standing problem of the vDVZ discontinuity by considering a nonlinear framework, but this raised another problem of the Fierz-Pauli theory, known as the Boulware-Deser (BD) [28,29] ghost instability at the non-linear level.
In order to avoid such an instability, de Rham, Gabadadze and Tolley (dRGT) [33,34] proposed a new massive gravity theory as an extension of the Fierz-Pauli theory. Recently, other versions of massive gravity have been proposed, namely new massive gravity [35] and bi-gravity [36]. Massive gravity theories have also been studied in the astrophysical context. Black hole solutions and their thermodynamical properties have been analyzed in dRGT massive gravity [37-40]. Katsuragawa et al. [41] devised a neutron star model demonstrating that massive gravity dynamics deviates only slightly from GR. It was recently shown by Bebronne and Tinyakov [2] that vacuum spherically symmetric solutions do exist in massive gravity. The black hole solution depends on the mass M and an extra parameter S, which is referred to as the 'scalar charge'. Additionally, in Ref. [42] the validity of the laws of thermodynamics in massive gravity has been checked for the same black hole solutions. A number of articles on black holes in massive gravity have appeared recently; some solutions have been reported in [43-47]. It is important to understand the deflection of light in the presence of a mass distribution, as it is an effective tool for probing a number of interesting phenomena. As early as 1919, Eddington [48] studied the weak gravitational lensing of the Schwarzschild spacetime. This seminal work initiated the study of gravitational lensing (GL) theory [49-52]. It is also known that electromagnetic radiation is generated in the vicinity of massive compact objects (such as neutron stars or black holes). The importance of examining light deflection in the weak field limit lies in the ability to probe large-scale structures, as well as exotic matter, wormholes, naked singularities, etc. (the reader is referred to the more detailed reviews in [53-57]). It is thus imperative to investigate the GL effect of black holes in massive gravity and to search for their possible observational signatures in the weak field limit. In contrast to the lensing situations already studied in the literature, we apply the Gauss-Bonnet theorem (GBT) [86] to calculate the deflection angle. It is well known that the deflection of light (i.e., gravitational lensing) is now one of the most useful tools to search not only for dark and massive objects but also for wormholes. In the recent past, several attempts were made by Virbhadra and Ellis [58,59] to calculate the relevant elliptic integrals. Soon after, Eiroa et al. studied Reissner-Nordström black hole lensing in the strong gravitational field regime [60]. Black hole gravitational lenses have been widely studied in [61-72]. In addition, after the pioneering work by Kim and Cho [79], gravitational lensing by a negative Arnowitt-Deser-Misner (ADM) mass was studied in [80-85]. As a consequence, several forms of the deflection angle by the Ellis wormhole (a particular example of the Morris-Thorne traversable wormhole) have been studied in the strong field limit [73-78]. The computation of the deflection angle in the weak field limit for spherically symmetric static spacetimes may be accomplished through a simple algorithm. Very recently, Werner [87] extended and applied the optical geometry to the case of stationary black holes.
Further, under some physically realistic assumptions, the GBT was used in studies of various astrophysical objects, such as Ellis wormholes by Jusufi [88], wormholes in Einstein-Maxwell-dilaton theory [89-91], black holes with topological defects, and the deflection angle at finite distance by Ishihara et al. [88,98-103]. In Ref. [105], the authors studied the strong deflection limit for black holes and explored the role of the scalar charge in massive gravity. In the present work, we aim to investigate the deflection angle by black holes and charged wormholes in massive gravity in the weak limit approximation, using the optical geometry as well as the geodesic method. This paper is structured as follows. In Sect. 2 we review the black hole solution in massive gravity. In Sect. 3 we consider the geodesic equations in massive gravity theory and analyse the deflection angle in four special cases. In Sect. 4 we consider the same problem viewed in terms of the Gauss-Bonnet theorem. In Sect. 5 the time delay problem is considered. In Sect. 6 we consider the deflection of light by wormholes: applying the GBT of gravitational lensing theory to the optical geometry, we calculate the deflection angle produced by a charged, massless wormhole in massive gravity. In Sect. 7 we consider the time delay problem in the context of wormholes. Finally, in Sect. 8 we comment on our results. Black hole solution in massive gravity We commence with a brief discussion of black holes in massive gravity. The action of the four-dimensional massive gravity model used in this paper is given by ... where R is, as usual, the scalar curvature and F is a function of the scalar fields φ^i and φ^0, which are minimally coupled to gravity. These scalar fields play the crucial role of spontaneously breaking Lorentz symmetry. This action of massive gravity can be treated as a low-energy effective theory below the ultraviolet cutoff Λ, whose value is of the order of √(m M_pl), where m is the graviton mass and M_pl is the Planck mass. The function F depends on two particular combinations of the derivatives of the Goldstone fields, X and W^{ij}, defined as ... where the constant Λ has the dimension of mass. From this one arrives at a new type of black hole solution, the massive gravity black hole (a detailed derivation can be found in [2]). The ansatz for the static spherically symmetric black hole solutions can be written in the following form: ... where the metric function and the scalar fields are assumed in the following form ... with ... where M accounts for the gravitational mass of the body and λ is a parameter of the model which depends on the scalar charge S. The presence of the scalar charge represents a modification of Einstein's gravitational theory. When S = 0 the usual Schwarzschild potential is regained. At large distances, the solution (2) with positive M has an attractive behavior, whereas with negative M the Newtonian potential is repulsive at large distances and attractive near the horizon. Our goal is to study the case M > 0 and S > 0, so that the black hole has an attractive gravitational potential at all distances and the size of the event horizon is larger than 2M. Another reason for considering such a solution is that the asymptotic behaviour of the gravitational potential is Newtonian with finite total energy, featuring an asymptotic correction slower than 1/r and generically of the form 1/r^λ.
Therefore, the attraction exhibited by the modified black hole solution is stronger than that of the usual Schwarzschild black hole, due to the presence of the 'hair' λ. Geodesic equations Let us turn our attention to the problem of the deflection angle in massive gravity theory in the framework of the geodesic equations. Recently, a new black hole solution was found in the context of massive gravity theory [2]: ... This solution does not describe an asymptotically flat space in the case λ < 0. For λ = −2 the metric coincides with the familiar Schwarzschild-de Sitter spacetime, sourced by a constant stress-energy tensor in the form of a (positive) cosmological constant [106]. In the present paper we shall focus on the case λ ≥ 1. It may immediately be recognized that the case λ = 2 corresponds to the Reissner-Nordström solution for the exterior of a charged perfect fluid sphere. Applying the variational principle to the metric (6), we find the Lagrangian ... It is worth noting that L equals +1, 0, and −1 for timelike, null, and spacelike geodesics, respectively. Taking the equatorial plane θ = π/2, the spacetime symmetries imply two constants of motion, namely l and E, given as follows: ... To proceed further we introduce a new variable u(ϕ), given in terms of the radial coordinate as r = 1/u(ϕ), which yields the identity ... After some algebraic manipulations one can show that the following relation is recovered: ... On the other hand, from Eqs. (8) and (9) we find ... Hence we can recast Eq. (11) in terms of the impact parameter b as follows: ... We proceed by considering four special cases for different values of the parameter λ in the metric (6). Case λ = 1. To begin, we take the affine parameter along the light rays such that E = 1; one then finds the condition u_max = 1/r_0, where r_0 is the distance of closest approach. Next, we evaluate the constant l from Eq. (14) to leading order as ... This leads us to the following differential equation for (du/dϕ)^2: ... where ... From the above equation we find ... where ... It is well known that the solution of the above equation in the weak limit can be written as follows [107]: ... where α̂ is the deflection angle to be calculated. Moreover, from the above equation the deflection angle can be calculated as follows [107]: ... Using this relation, from Eq. (19) the deflection angle is found to be ... Furthermore, if we let S = 0, we find the Schwarzschild deflection angle with second-order correction terms, in perfect agreement with [104]. Case λ = 2. Going through the same procedure as in the previous example, the constant l is found to be ... We obtain the following differential equation: ... From the above equation we get ... Consequently the deflection angle has the form ... As a special case we can find the charged black hole deflection angle by simply letting S = −Q^2; in that case we find the RN deflection angle ... In a similar way, letting λ = 3, we find ... The differential equation takes the form (du/dϕ)^2 = ... From the above equation we find ... The deflection angle is given by ... For λ = 4 we find the following differential equation (du/dϕ)^2 = ... From the above equation we obtain ... Expanding in a Taylor series and integrating, we derive the expression ... In this subsection we consider null geodesics deflected by a black hole in massive gravity models.
We start from the optical metric derived from the spacetime metric (6) by choosing ... For the following considerations it is convenient to introduce a radial Regge-Wheeler tortoise coordinate r*, with a new function f(r*), as follows: ... This prescription allows us to write the line element of the optical metric in the form ... Using this static coordinate system, it is now clear that the equatorial plane of the optical metric is a surface of revolution when embedded in R^3. We use the following formula to calculate the Gaussian curvature K of the optical surface: ... With the help of Eq. (54), the optical Gaussian curvature may be expressed as ... (for further details see [86]). Deflection angle. Theorem. Let S_R be a non-singular region with boundary ∂S_R = γ_{g^op} ∪ γ_R, and let K and κ be the Gaussian optical curvature and the geodesic curvature, respectively. Then the GBT reads [86] ... in which the θ_i are the exterior angles at the ith vertex. In our setup the Euler characteristic is χ(S_R) = 1, due to the fact that we consider a non-singular domain outside of the light ray; for a singular domain one would have χ(S_R) = 0. Furthermore, to compute the deflection angle of light we first need to compute the geodesic curvature from the relation ... In doing so we take into account the unit-speed condition g^op(γ̇, γ̇) = 1, with γ̈ being the unit acceleration vector. Next, if we let R → ∞, one can show that the two jump angles (θ_O, θ_S) tend to π/2; put differently, the total sum of the jump angles at S and O satisfies θ_O + θ_S → π [86]. It follows from the geometry that κ(γ_{g^op}) = 0, due to the fact that γ_{g^op} is a geodesic. Hence we are left with the relation ... in which γ_R := r(ϕ) = R = constant. In this way, one is left with the following non-zero radial part: ... note that Γ̃^r_{ϕϕ} is the Christoffel symbol associated with the optical metric. While it is clear that the first term in this equation must vanish, we can calculate the second term via the condition g̃_{ϕϕ} γ̇^ϕ_R γ̇^ϕ_R = 1. Finally we find ... But for very large radial distances, Eq. (53) suggests that ... provided that λ > 0. From the GBT we find ... where the surface element is given by dA = √(det g^op) dr dϕ. It is now clear that we should integrate over the domain S_∞ to find the deflection angle. Thus the deflection angle is found to be ... One can now compute the deflection angle by choosing the light ray as r(ϕ) = b/sin ϕ. However, this equation corresponds to the straight-line approximation and gives the correct result only for the linear terms of the deflection angle. In this paper we will instead use the following choice for the light ray, which is a solution of our geodesic equation (13): ... Let us now elaborate on the following special cases. λ = 1. Let us first calculate the Gaussian optical curvature from Eq. (58) in the case λ = 1. One easily finds ... Substituting into Eq.
(66) generates the value of the deflection angle in terms of the integral ... In order to evaluate this integral, note that √(det g^op) dr = r dr (...). Using this result, for the deflection angle we find ... On the other hand, we can use the relation (15) to express the last result, given in terms of the impact parameter, in terms of the minimal distance r_0. Consequently the deflection angle takes the form ... Thus we have shown that, upon modifying the integration domain, our result is in perfect agreement up to second order in M, and agrees in the linear term in S. In order to find the exact result including the second-order terms in S, we would have to modify the equation for the light ray (65); however, this goes beyond the scope of this paper. λ = 2. Substituting this case into Eq. (66), we find that the deflection angle is given in terms of the following integral: ... The deflection angle in terms of the impact parameter is found to be ... As already noted, the disagreement in the last two terms is to be expected due to the integration domain. Finally, neglecting these terms and letting S = −Q^2, if we expand (25) in a series we recover Eq. (34) from the last result, up to second-order terms in M and Q. λ = 3. Substituting this case into Eq. (66), we find that the deflection angle is given in terms of the following integral: ... where √(det g^op) dr = r dr (1 + 3M/r + 15M^2/(2r^2) + 3S/(2r^3) + ...). The deflection angle has the form ... Hence, in a similar way, using Eq. (35) we recover Eq. (43) up to second order in M, but to leading order in S. λ = 4. Substituting this case into Eq. (66), we find that the deflection angle is given in terms of the following integral: ... The deflection angle is given by ... Or, after we use Eq. (44), the deflection angle in terms of the distance of closest approach reads ... Time delay We analyze here the time delay due to the massive gravitational field of the black hole solution. Suppose that two photons are emitted at the same time but follow different paths to reach the observer; they will take two different times to arrive, and this time difference is called the time delay. It is important to discuss the time delays between multiple lensed images, which are directly related to determining the Hubble constant H_0, as was first pointed out by Refsdal [108]. We consider light propagation in a static spherically symmetric spacetime given by the line element ... The time delay of a light signal passing through the gravitational field of this configuration is expressed as ... where r_1 and r_2 are the distances of the observer and the source from the configuration, and r_0 is the closest approach to the configuration. With the help of this algorithm we will calculate the time delay due to the massive gravitational field of the black hole. Let r_e and r_s be the distances of the observer (Earth) and the source from the black hole, respectively, and let r_0 be the closest approach to the black hole. Then the total time required for a light signal passing through the gravitational field of the black hole to go from the observer (Earth) to the source and back, after reflection from the source, is given by the following equation [107]: ... where ... for our metric, given in Eq. (6). Using the approximations r_e, r_s, r_0 >> 2M, the integrands of these expressions assume the form ... So we can express Eq.
(85) as ... In the absence of a gravitational field (M = S = 0) the time is ... The delay in time is then expressed by the following equation: ... Finally, we can estimate the time delay due to the gravitational field of the black hole as ... and we may proceed to calculate the time delay for the cases corresponding to the values λ = 1, 2, 3 and 4, respectively. Case λ = 1. The required time delay is ... Case λ = 3. The required time delay is ... Case λ = 4. The required time delay is ... Light deflection by charged and massless wormholes in massive gravity Let us set the mass to zero, i.e. M = 0, and introduce the coordinate transformation r^λ = S + v^2 into the metric (6); in that case we find the wormhole solution in Einstein-Rosen (ER) bridge form ... The throat of the wormhole is located at v = 0, with radius R_throat = S^(1/λ). This metric represents a massless wormhole with scalar charge S and, as far as we know, it is a new metric. One can check that by setting λ = 2 and S = −Q^2 the above metric takes the form of the usual charged ER wormhole. From now on we shall write v = r; in this way, from the metric (104), the Lagrangian yields ... Going through the same procedure and introducing the new variable r = 1/u as in the black hole case, we find the following equation: ... On the other hand, the wormhole optical metric reads ... with ... The Gaussian optical curvature is found to be ... We shall now compute the deflection angle for the spacetime metric (104) in terms of the GB method. Case λ = 1. The Gaussian optical curvature from Eq. (109) in the case λ = 1 reads ... Substituting this result into Eq. (66) generates the value of the deflection angle in terms of the integral ... In order to evaluate this integral we need the equation for the light ray, which can be found from Eq. (106): ... If we linearize Eq. (113) in S and then consider the equation corresponding to the straight-line approximation, we are left with the following equation: ... Solving this differential equation and using the conditions u(0) = 0 and u(π/2) = 1/b, we find ... Finally, the light ray equation in terms of the old coordinate gives ... The deflection angle is found to be ... In the case λ = 2, the Gaussian optical curvature yields ... Substituting this into the deflection angle leads to the following integral: ... Considering a series expansion in S of Eq. (106) and then taking only the straight-line approximation leads to the following differential equation: ... Solving this equation, we find the light ray equation ... Using the above result, for the deflection angle we find ... The Gaussian optical curvature in the case λ = 3 is found to be ... From the GBT we find ... On the other hand, the light ray equation in this case reduces to a nonlinear differential equation; however, we can approximate it from Eq. (106) as follows: ... Solving this equation one finds ... Using the above result, for the deflection angle we find ... 6.4 λ = 4. We start by calculating the Gaussian optical curvature for λ = 4: ... With the help of the GBT this result gives ... From Eq. (106) we find the following equation for the light ray: ... Using the above result, for the deflection angle we find ... Thus we have shown that the deflection angle increases with the parameter λ for a constant value of the scalar charge S, as shown in Fig. 3: for a fixed value of S = 0.5, the deflection angle increases with increasing λ.
It is a straightforward calculation to show and check these results in terms of the geodesic approach (Fig. 3). Time delay due to a massless wormhole in massive gravity Here we focus on estimating the time delay due to the massless wormholes in massive gravity. Using the same technique as above, we calculate the time delay for the cases corresponding to the values λ = 1, 2, 3 and 4, respectively. (Fig. 3: the deflection angle as a function of the impact parameter b, with S = 0.5 chosen; the deflection angle increases with increasing λ.) Case λ = 1. Here we find the time delay as ... 7.2 Case λ = 2. Here S + v^2 = r^2, hence S + v_e^2 = r_e^2 and S + v_0^2 = r_0^2. In this case we obtain the time delay as ... 7.3 Case λ = 3. For λ = 3 the time delay is found as ... (S + v_s^2)^(1/3). 7.4 Case λ = 4. In this case we calculate the time delay as ... Conclusions In this paper we have studied weak gravitational lensing by a black hole and a wormhole in massive gravity. The black hole solution is governed by a parameter λ, together with the mass M and the scalar charge S. In the case of vanishing S, the results of the standard Schwarzschild geometry are recovered. By deforming the black hole solution via the coordinate transformation r^λ = S + v^2, we constructed a wormhole solution of ER bridge type which is regular in the interval −∞ < v < ∞. The deflection angle was then computed for four different values of the parameter λ. The extension of this work via the Gauss-Bonnet theorem is nontrivial: we first derived a result showing how the Gaussian optical curvature and the deflection angle are to be computed, with the analysis aided by Taylor series expansions. The time delay function was also established and computed for each of the four cases of λ of interest in this investigation. Graphical plots indicate that for a fixed value of the mass and positive scalar charge, the deflection angle decreases with increasing λ, while for negative scalar charge the deflection angle increases with an increase in λ. In the wormhole case, by contrast, we found that the deflection angle increases with the parameter λ for a constant value of the scalar charge S, provided S > 0.
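As an independent numerical cross-check of the geodesic results for the black hole, the bending angle can be evaluated directly for a metric function of the form f(r) = 1 − 2M/r − S/r^λ, which is consistent with Eq. (6) in the sense that λ = 2 with S = −Q^2 reproduces Reissner-Nordström and S = 0 reproduces Schwarzschild; that form is our reading of the solution, not a quotation of it. The sketch below evaluates the standard exact integral α̂ = 2∫_0^{u0} du/√(1/b^2 − u^2 f(1/u)) − π with u = 1/r; the substitution u = u0(1 − s^2) removes the inverse-square-root singularity at the turning point. It checks orders of magnitude only, not the series expansions derived in the text.

```python
import numpy as np
from scipy.integrate import quad
from scipy.optimize import brentq

def deflection_angle(M, S, lam, b):
    # Exact bending-angle integral for f(r) = 1 - 2M/r - S/r**lam (assumed form),
    # with u = 1/r and turning point u0 = 1/r0; bracket valid in the weak field.
    def g(u):
        return 1.0 / b**2 - u**2 * (1.0 - 2.0 * M * u - S * u**lam)
    u0 = brentq(g, 0.5 / b, 2.0 / b)
    # Substitute u = u0*(1 - s^2): the integrand becomes smooth on [0, 1].
    integrand = lambda s: 2.0 * u0 * s / np.sqrt(g(u0 * (1.0 - s * s)))
    val, _ = quad(integrand, 0.0, 1.0)
    return 2.0 * val - np.pi

# Sanity check against Schwarzschild, alpha ~ 4M/b for b >> M:
print(deflection_angle(M=1.0, S=0.0, lam=1, b=1e3), 4.0 / 1e3)
# Effect of the hair term for a fixed impact parameter:
for lam in (1, 2, 3, 4):
    print(lam, deflection_angle(M=1.0, S=0.5, lam=lam, b=1e3))
```

For positive S the hair contribution falls off as 1/b^λ, so at large b it is only appreciable for small λ, which is consistent with the λ-dependence discussed in the conclusions.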
A rare case of necrotising fasciitis after spinal anaesthesia Necrotizing fasciitis is a progressive, lethal and often polymicrobial bacterial infection of the fascia and surrounding soft tissue. The risk of infection during regional anaesthesia is very low. We present a case of necrotising fasciitis caused by E. coli complicating a spinal anaesthesia injection given in the operating room. A 27-year-old female was admitted to the emergency department with severe pain, swelling, erythema and blackening involving nearly the whole of the back and parts of the anterior abdomen and gluteal regions, accompanied by fever and chills. The patient had a history of caesarean section 20 days earlier, performed under spinal anaesthesia for non-progression of labour with fetal distress, in a hospital near her residence. There was no history of diabetes mellitus, chronic infections, intake of immunosuppressive medications, or leukemias or lymphomas. She had received Inj. diclofenac sodium intravenously on the 1st post-operative day and thereafter switched to oral tablets. On examination, the patient's general condition was very poor. Her temperature was 40°C, pulse rate 130/minute, blood pressure 84/52 mmHg and respiratory rate 24/minute. There was extensive deep necrotizing fasciitis of the whole of the back and parts of the anterior abdomen and gluteal regions, with gangrene and foul-smelling exudates [Figure 1]. Early goal-directed therapy (EGDT) for septic shock was initiated immediately. Central venous cannulation was performed and fluid resuscitation started. Blood samples for cultures were sent. Foley catheterisation was performed and intravenous antibiotics were administered. The CVP was measured to be 9 cm H2O. The patient remained hypotensive, tachycardic and oliguric. Inj. noradrenaline (2 μg/min) and Inj. dopamine (16 μg/kg/min) were started, and the MAP slowly rose above 65 mmHg. Laboratory findings showed a TLC of 24,000/mm3 with increased polymorphs, and a hemoglobin of 6.7 g/dl. Serum creatinine was 3.2 mg/dl and blood urea nitrogen was 98 mg/dl. Serum electrolytes showed hyponatremia, hypocalcemia and hyperkalemia. Arterial blood gas analysis revealed metabolic acidosis with pH 7.2. Other laboratory results were normal. Central venous saturation (ScvO2) was 60%. ECG showed sinus tachycardia at a rate of 120/minute. The chest radiograph was normal. Computed tomography, Doppler ultrasonography and magnetic resonance imaging were not advised, as the patient was critically ill. The patient was shifted to the operation theatre, and surgical debridement of the devitalised tissues was done under total intravenous anaesthesia (TIVA) with Inj. ketamine and Inj. midazolam, with spontaneous mask ventilation maintained on oxygen. One unit of blood and two units of fresh frozen plasma were transfused intraoperatively. The operative findings revealed diffuse necrosis of the skin, fascia and muscles. A wound swab culture and tissue biopsy were sent. The patient was shifted to the intensive care unit in the postoperative period. After 12 hours, the patient became drowsy, hypotensive and tachypneic, with a Glasgow Coma Scale score of 7/15. Tracheal intubation was done and ventilatory support provided. Inj. dobutamine (5 μg/kg/min) was started and the doses of noradrenaline and dopamine were increased. The patient became anuric and hemodynamically unstable. Despite all aggressive medical and surgical interventions, the patient died on the third day of admission in the ICU due to sepsis-induced multiorgan failure. Blood and swab culture reports revealed luxuriant growth of E. coli. Histopathological examination confirmed the diagnosis of necrotizing fasciitis with myonecrosis [Figure 2]. Necrotising fasciitis is associated with high mortality and long-term morbidity. [1] In this case, the use of NSAIDs for post-operative pain would have masked the early symptoms. [2] The development of pain and erythema first at the lumbar region indicates that the route of entry was through the spinal injection. Erythema, blisters, discharge, necrosis and hemorrhagic bullae may be present. Viral-like symptoms may be present in the form of chills, fever, myalgia and diarrhea. Late stages may end in multiorgan failure and disseminated intravascular coagulation. [3] Laboratory tests, tissue biopsies and cultures, along with appropriate imaging studies, may facilitate the diagnosis of necrotizing fasciitis. [4] Treatment includes broad-spectrum antibiotics, aggressive debridement of suspected deep-seated infection, and supportive measures for the management of septic shock and multiorgan failure. Hyperbaric oxygen therapy and intravenous immunoglobulins have been shown to reduce mortality. [5,6] In this case, unfortunately, no exact records were available regarding the aseptic techniques followed in the operating room while giving spinal anaesthesia. Other potential sources of infection could be a contaminated anaesthetic solution or syringes. [7] A portal of entry from the patient's skin or from the oropharyngeal cavity of the operating room personnel has been suspected in previous studies. [8] Wearing a facemask before entering the operating room and allowing time to ensure the effective antibacterial action of antiseptics have been recommended for the practice of regional anaesthesia. Delay in diagnosis and surgical treatment probably contributed to the mortality in this case. Strict adherence to the principles of asepsis is the foundation of the prevention of regional anaesthesia-related infections.
2018-04-03T02:05:30.677Z
2013-05-01T00:00:00.000
{ "year": 2013, "sha1": "2113947f143302eb5fed7108ee09a1eb641c1e54", "oa_license": "CCBYNCSA", "oa_url": "https://doi.org/10.4103/0019-5049.115594", "oa_status": "GOLD", "pdf_src": "ScienceParsePlus", "pdf_hash": "1b5ab4fa339196957ddaad2159e9e99e5751aaba", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
247599238
pes2o/s2orc
v3-fos-license
Comparative plastome analysis of Musaceae and new insights into phylogenetic relationships

Background: Musaceae is an economically important family consisting of 70-80 species. Elucidation of the interspecific relationships of this family is essential for a more efficient conservation and utilization of genetic resources for banana improvement. However, the scarcity of herbarium specimens and quality molecular markers has limited our understanding of the phylogenetic relationships among wild species of Musaceae. Aiming at improving the phylogenetic resolution of Musaceae, we analyzed a comprehensive set of 49 plastomes for 48 species/subspecies representing all three genera of this family.

Results: Musaceae plastomes have a relatively well-conserved genomic size and gene content, with a full length ranging from 166,782 bp to 172,514 bp. Variations in the IR borders were found to show phylogenetic signals to a certain extent in Musa. Codon usage bias analysis showed different preferences for the same codon between species and among the three genera, and a common preference for A/T-ending codons. Of the two genes detected under positive selection (dN/dS > 1), ycf2 was indicated to be under intensive positive selection. The divergent hotspot analysis allowed the identification of four regions (ndhF-trnL, ndhF, matK-rps16, and accD) as specific DNA barcodes for Musaceae species. Bayesian and maximum likelihood phylogenetic analyses using the full plastome resulted in nearly identical tree topologies with highly supported relationships between species. The monospecific genus Musella is sister to Ensete, and the genus Musa was divided into two large clades, which corresponded well to the basic chromosome numbers of n = x = 11 and n = x = 10/9/7, respectively. Four subclades were recognized within the genus Musa. A dating analysis covering the whole Zingiberales indicated that the Musaceae family originated in the Palaeocene (59.19 Ma), and that the genus Musa diverged into two clades in the Eocene (50.70 Ma) and then started to diversify from the late Oligocene (29.92 Ma) to the late Miocene. Two lineages (Rhodochlamys and Australimusa) radiated recently in the Pliocene/Pleistocene periods.

Conclusions: The plastome sequences performed well in resolving the phylogenetic relationships of Musaceae and generated new insights into its evolution. Plastome sequences provide valuable resources for population genetics and phylogenetics at lower taxonomic levels.

Supplementary Information: The online version contains supplementary material available at 10.1186/s12864-022-08454-3.

The genus Musa was established by Carolus Linnaeus in 1753 [7]. Cheesman [8] divided the genus into four sections: Australimusa and Callimusa with n = 10, and Eumusa and Rhodochlamys with n = 11 chromosomes. Later, Argent [9] established the Musa sect. Ingentimusa based on M. ingens. Sections Rhodochlamys and Eumusa are closely related, having bracts that are generally sulcate, glaucous and that become revolute on fading [8]. This contrasts with species of sections Australimusa and Callimusa, which have bracts that are smooth, polished on the outside, and that do not become revolute on fading. In contrast with the pendent inflorescences with dull-colored bracts and large plants (3 m or taller) in Eumusa, species of sect. Rhodochlamys are generally smaller in stature (less than 3 m) and have erect inflorescences with brightly colored bracts. Species of sect. Callimusa are separated from those of sect.
Australimusa by their unique seeds, which are cylindrical or barrel-shaped and possess a large apical chamber. Seeds of species of sect. Australimusa are subglobose or dorsiventrally compressed and possess a small apical chamber. These five sections proved to be very useful and have been widely accepted [8][9][10][11]. Since molecular markers were applied in plant systematics, many related studies have assessed the sections of Musa. For example, Wong et al. [12] used AFLP to validate this classification system. Several phylogenetic studies have been published for the Musaceae; however, none of these five sections was recovered as monophyletic [5,6,[12][13][14][15][16][17][18]. Only two infrageneric clades corresponded well to the basic chromosome numbers (one clade with n = x = 11, the other with n = x = 10/9/7) [6,17]. Häkkinen [2] reappraised the five-section system by integrating molecular phylogenetic studies and proposed a classification with two infrageneric clades: sect. Musa and sect. Callimusa (referred to as sect. Callimusa Cheesman emend Häkkinen). Sect. Rhodochlamys was synonymized with sect. Musa, while sect. Australimusa and sect. Ingentimusa were treated as synonyms of sect. Callimusa [2]. Most edible banana cultivars derive from hybridization between different subspecies of Musa acuminata Colla or between M. acuminata and M. balbisiana Colla [3], and these two species both belong to sect. Musa [2]. A well-resolved phylogeny of Musaceae is critical for the germplasm conservation of cultivated banana ancestors and their wild relatives. However, a well-resolved phylogeny of Musaceae has still been missing. The lack of herbarium specimens and quality molecular markers has limited our understanding of the phylogenetic relationships of Musaceae species. Studies with broad taxonomic coverage usually employed limited gene fragments and reconstructed phylogenies containing polytomies and low-support branches [5,6,[17][18][19]. For instance, using plastid atpB-rbcL, rps16, trnL-F and nuclear ribosomal ITS, Li et al. [6] generated a phylogenetic tree with many polytomies even though this study covered 36 species. Recently, Burgos-Hernandez et al. [18] used ITS, trnL-trnF and atpB-rbcL to conduct a biogeographic analysis of Musaceae and covered 37 species. Their resulting phylogeny also encompassed multiple low-support branches. In contrast, studies using multiple low-copy nuclear genes or even whole-genome sequences for Musaceae phylogeny have in-depth gene coverage and strong internal support, but their taxonomic coverage was often sparse [20][21][22][23], since the number of sampled species did not exceed 20. Thus, it is worthwhile to investigate the phylogenetic relationships of Musaceae in more detail with both expanded taxonomic coverage and gene sampling. Genome skimming, an approach in which samples are sequenced at shallow depth, is usually used to acquire the high-copy genomic fraction, such as the plastome [24]. Many studies have shown that the plastome significantly resolves phylogenetic relationships at lower taxonomic levels [25][26][27][28][29]. The plastome is maternally inherited without recombination in Musaceae [30]. Plastomes are generally composed of four regions, namely the large single copy (LSC), the small single copy (SSC), and two inverted repeats (IRs, IRa and IRb) [31]. Some highly variable regions in the plastome have been identified as "hotspots" and employed as useful molecular markers for phylogenetic studies [32,33].
In recent years, although some plastome sequences of Musaceae have been reported [23,[34][35][36], most studies concentrated on a few wild bananas cultivated in botanical gardens and did not provide a comprehensive plastome analysis for the Musaceae family. In this study, we used the genome skimming approach for the assembly of the plastomes of a large panel of Musaceae species. We analyzed their plastomes (1) to investigate the plastome structure variations; (2) to identify highly variable regions; (3) to reconstruct the phylogeny of the Musaceae; and (4) to assess the divergence time of the main clades.

Plastome features
We analysed the structure of 49 full plastomes covering 48 species/subspecies in the Musaceae (including 45 new plastome assemblies generated for this study) (Table 1). The full-length variation of Musaceae and the genus Musa plastomes is approximately 5.7 kb (plastome length: 166,782-172,514 bp), with small variation in Ensete plastomes (163 bp, plastome length: 168,248-168,411 bp). All sequenced plastomes exhibited the typical quadripartite structure, composed of one LSC, one SSC, and two IRs (IRa and IRb) (Fig. 2). The overall GC content was nearly identical (36.5-37.1%) (Table 1). Each plastome was annotated and then manually checked, resulting in a total of 113 genes, including 79 protein-coding genes, 30 transfer RNA (tRNA) genes, and four ribosomal RNA (rRNA) genes (Fig. 2, Table S2). Among these 113 genes, 21 have two copies (within the IR regions) and the remaining 92 have a single copy. Sixteen genes contain a single intron, two contain two introns, and the remaining 95 genes have no intron (Table S2). The complete plastome alignment for the 48 Musaceae species illustrated that there was no genomic rearrangement (Fig. S1).

IR boundary comparative analysis
The IR/LSC and IR/SSC junctions of the 49 Musaceae species were compared to explore the IR expansion/contraction (Fig. S2). No noticeable expansion or contraction was found within the four Ensete species. Compared to Ensete species, the JLA and JLB of Musella lasiocarpa extended into the gene rps19. Apparent differences in IR boundaries were observed among Musa species. The JSB of Musa gracilis withdrew to the spacer of ndhA1 and ndhF compared to other species from sect. Callimusa Cheesman emend Häkkinen, whose JSBs resided in ndhF (Fig. S2). On the contrary, the JSB of Musa balbisiana extended into the ndhF gene compared to other species in sect. Musa. All species from sect. Callimusa Cheesman emend Häkkinen had only one copy of the rps19 gene. In contrast, species from sect. Musa had one more copy of rps19, except Musa velutina. The four junctions between LSC/IRs and SSC/IRs were confirmed with PCR-based sequencing. The assembly of the PCR product was mapped against the plastome that we generated previously, and the mapping result is shown in Fig. S3. All of the IR borders matched the assemblies of the PCR-based sequences.

Codon usage preference
Among the 49 Musaceae plastomes, the total number of codons (including stop codons) ranged from 28,770 in M. itinerans to 29,521 in M. yunnanensis (Table S3). The codon frequency was relatively similar across Musaceae species (Table S4). Only methionine (Met) and tryptophan (Trp) were encoded by a single codon among all 20 amino acids encoded by the 64 codons (Fig. 3). The three most frequent codons were GAA-Glu, AUU-Ile, and AAA-Lys (Table S4). Musa species exhibited higher usages of UUG, GUG, GAA, CGU, AGA, GGU, and GGA (Table S5).
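Codon usage bias of this kind is typically quantified with relative synonymous codon usage (RSCU): the observed count of a codon divided by the count expected if all synonymous codons for the same amino acid were used equally. The short Python sketch below shows that calculation; it is an illustrative stand-in for the DnaSP analysis used in the study, and the codon table shown is deliberately abbreviated to a few amino acids.

from collections import Counter

# Abbreviated standard genetic code: codon -> amino acid (one-letter code).
# A full 64-entry table is assumed in practice.
CODON_TABLE = {
    "GAA": "E", "GAG": "E",                                              # Glu
    "AAA": "K", "AAG": "K",                                              # Lys
    "UUA": "L", "UUG": "L", "CUU": "L", "CUC": "L", "CUA": "L", "CUG": "L",  # Leu
    "AGA": "R", "AGG": "R", "CGU": "R", "CGC": "R", "CGA": "R", "CGG": "R",  # Arg
}

def rscu(coding_seq: str) -> dict:
    """Relative synonymous codon usage: observed codon count divided by the
    mean count of all synonymous codons for the same amino acid."""
    seq = coding_seq.upper().replace("T", "U")
    codons = [seq[i:i + 3] for i in range(0, len(seq) - 2, 3)]
    counts = Counter(c for c in codons if c in CODON_TABLE)

    # Group synonymous codons by amino acid.
    by_aa = {}
    for codon, aa in CODON_TABLE.items():
        by_aa.setdefault(aa, []).append(codon)

    values = {}
    for aa, syn in by_aa.items():
        total = sum(counts[c] for c in syn)
        if total == 0:
            continue
        expected = total / len(syn)           # equal-usage expectation
        for c in syn:
            values[c] = counts[c] / expected  # >1 means over-represented
    return values

Under this definition, codons with RSCU values above 1 are preferred, which is how the A/T-ending bias reported for the Musaceae plastomes is read off the tables.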
The most and least abundant amino acids were leucine (Leu) and cysteine (Cys), encoded by about 10% and 1% of codons, respectively (Table S4). The relative synonymous codon usage (RSCU) values of the same codon were very similar between all plastomes of Musaceae (Table S4). The two codons with the highest RSCU values were AGA-Arg and UUA-Leu. Codons ending in T or A had RSCU > 1. In contrast, codons with C or G in the third position mostly had RSCU < 1, indicating a significant preference for codons ending with T and A, which is generally observed in angiosperm plastomes [37,38]. The GC3 value is significantly lower than the GC2 in all Musaceae species, which supports this preference pattern (Table S3).

Selective pressure analysis
Synonymous (dS) and nonsynonymous (dN) substitution rates, as well as dN/dS, were determined for the 79 coding sequences to estimate the selective pressure acting on them (Fig. S5, Table S11). The dN and dS ranged from 0 to 0.16 and 0 to 0.59, respectively. Among the 79 CDSs, ndhF and rpl32 showed relatively higher dS values (> 0.4), while accD and matK exhibited relatively higher dN values (> 0.1; Fig. S5, Table S11). For most genes (89.87%), dS was significantly greater than dN, resulting in a dN/dS value of less than 0.5 and suggesting purifying selection. Two genes with relatively higher dN/dS values were identified (dN/dS > 1): ycf1 and ycf2, with values of 1.16 and 4.44, respectively. The null model (dN/dS = 1) was tested for ycf1 and ycf2. The P value of the Chi-square test for ycf2 was less than 0.05, indicating intensive positive selection. The P value for ycf1 was 0.4335, suggesting that ycf1 may not be under positive selection (Table 2).

Sequence variability and divergent hotspot identification
The nucleotide diversity (Pi) of the 49 Musaceae plastomes ranged from 0 to 0.03282, with an average of 0.00698 (Fig. S6, Table S12). Among the LSC, SSC, and IR regions, the SSC and IR regions exhibited the highest and lowest Pi values of 0.01671 and 0.00389, respectively (Table S12). The ten most variable regions with peak Pi values > 0.020 and alignment lengths over 600 bp were identified as divergent hotspots (Fig. S6, Table S12). The ndhF-trnL sequence had the highest Pi value (0.02470), followed by ndhF, matK-rps16, and accD (Table S12). These four hypervariable markers had more haplotypes (45 vs. 34) and higher resolution than the three universal DNA barcodes (matK, rbcL, and trnH-psbA) based on the ML tree (Fig. S7, Table S12). Moreover, based on the combination of the four most variable markers, many indel sites could be found within those species pairs with the lowest K2P distances (Table S13). These indels increased the species identification rate for closely related species.

Phylogenetic relationships
Our maximum likelihood (ML) and Bayesian inference (BI) analyses generated consistent phylogenetic trees supporting the same topological structure. The CDS and complete plastome datasets produced trees with similar topologies, with only one discordance on the relationship between five species in sect. Callimusa (M. borneensis, M. barioensis, M. gracilis, M. salaccensis, M. lokok) (Fig. S8, Fig. S9). The full plastome dataset provided a better-supported phylogeny than the CDSs dataset because it possessed fewer branches with bootstrap support values of less than 90%. The monospecific genus Musella is sister to Ensete (Fig. 5).
The genus Musa was subdivided into two large clades, which corresponded to the Callimusa and Musa sections, respectively.

Phylogenetic relationships of Musaceae
Compared to previous phylogenetic studies on Musaceae [5,6,17], this study is the first to analyze Musaceae phylogenetic relationships with dense sampling using plastome-scale sequences. The resulting tree is fully resolved, with substantially increased support values for several branches across the Musaceae tree (Fig. 5). The sister relationship between the genus Musella and Ensete is reaffirmed. The genus Musa is well supported as two clades, corresponding to Häkkinen's two-section reappraisal as sect. Musa and sect. Callimusa Cheesman emend Häkkinen [2], which delineated the basic chromosome numbers of n = x = 11 and n = x = 10/9/7, respectively. For the infrageneric classification in Musa, Cheesman [8] indicated that "the groups have deliberately been called sections rather than subgenera in an attempt to avoid the implication that they are of equal rank". Although there are significant differences in morphological characters and chromosome numbers between the two clades, following the suggestion of Cheesman [8], Häkkinen [2] classified both clades as sect. Musa and sect. Callimusa, respectively (Fig. 5). x = 11 is the most reasonable original basic number in Zingiberales [39], with x = 10, 9, splendida and M. lutea [41], but concentrating only on their morphological description. For this study, we could not access the material, but it would likely help refine species delimitation and the phylogenetic relationships within the subclade and between the two subclades. Subclade II (support value: 100/1.0) is distributed in the Malayan Peninsula/Sumatra, Borneo, and Papua New Guinea, with its species diversity center in Borneo. Notably, it includes M. beccarii (2n = 18) and the physically largest wild banana, M. ingens (2n = 14), whose chromosome numbers differ from the other species in sect. Callimusa (2n = 20) (Fig. 5). M. ingens, the only species in sect. Ingentimusa, was treated as a separate section by Argent in 1976 [9] due to its seven pairs of chromosomes. M. ingens is distributed in the tropical montane forests of New Guinea, Indonesia. Our study sampled more Australimusa species than earlier phylogenetic studies [6,17,18,23]; these species are sympatric with other species in subclade II and are phylogenetically nested within it. Therefore, in agreement with previous studies [6,17], we support the treatment of Häkkinen [2] that sect. Ingentimusa and sect. Australimusa should be reduced to synonyms of sect. Callimusa. Sect. Musa is also subdivided into two subclades (subclades III and IV, both with support value: 100/1.0), with the species diversity center in Indo-Burma (Fig. 5). Subclade III includes banana wild relatives that share interesting features for crop improvement, such as M. balbisiana, which is resistant to harsh environments; M. itinerans, which is immune to Foc 4 [42]; and M. basjoo, the most cold-tolerant wild banana. M. balbisiana is one of the ancestors of the interspecific cultivated bananas; no obvious close relatives were reported earlier [43]. Both Li et al. [6] and Janssens et al. [17]. These species are distributed from the eastern Himalayas region to South China, and grow from seasonal tropical forest to temperate forest, with drought and cold tolerance. Natural crossing between them is a relatively common event [44]. Therefore, these species can represent valuable genetic resources for banana breeding.
However, as banana wild relatives, they have often been neglected, while more conservation and characterization are needed. M. acuminata, the main wild ancestor of cultivated banana, is included in the sister subclade (subclade IV, Fig. 5). M. acuminata is an extremely variable species with a wide geographical distribution from Burma through Malaysia to New Guinea, Queensland, Samoa and the Philippines [44]. Among the M. acuminata subspecies, M. a. ssp. burmannica is the earliest to have diversified, consistent with previous studies covering four M. acuminata subspecies based on whole genomes [22] and 72 M. acuminata accessions using restriction-site-associated DNA sequencing data [45]. Consistent with previous studies [5,6,17], we found that M. acuminata clustered closely with four species from sect. Rhodochlamys, namely M. rubra, M. laterita, M. siamensis, and M. rosea. However, contrary to Janssens et al. [17], M. siamensis is not nested within the M. acuminata subspecies, and is clustered with M. rubra. This result reinforces recent studies that claimed M. laterita and M. siamensis to be synonyms of M. rubra [46,47]. Moreover, it is worth noting that M. rubra and M. rosea were described based on vouchers cultivated in botanical gardens, without evidence of their occurrence in the wild. The only wild population of M. rubra was reported in Manipur and Mizoram, NE India [46]. M. rosea, collected only in the Angkor ruins in Cambodia, has long been a "lost species" [48]. The high plastome identity between these species and M. acuminata suggests that M. acuminata provided their maternal material during hybridization. Various Eumusa × Rhodochlamys hybrids have been observed, which gave rise to considerable taxonomic confusion in the poorly understood Rhodochlamys [44]. We therefore speculate that both species (M. rubra and M. rosea) are hybrids between Musa acuminata and species from sect. Rhodochlamys, but more studies are needed to verify their origin and species status. Excluding Musa rubra, M. laterita, M. rosea, and M. siamensis, the other species from sect. Rhodochlamys formed one well-supported clade (support value: 100/1.0) sharing a common ancestor with M. acuminata. Although Rhodochlamys was morphologically characterized by erect inflorescences and colorful bracts, this phylogenetic relationship suggests that the separation of sect. Rhodochlamys from Eumusa was not clear-cut. Neither Li et al. [6] nor Janssens et al. [17] recovered its monophyly, owing to the low resolution of the few genes used. This lineage experienced a recent (ca. 10.97 Ma) and rapid speciation (Figs. 5 and 6). Sect. Rhodochlamys species are concentrated in the eastern Himalayas region, especially in the Assam-Burma mountain region. Reproductive isolation between Rhodochlamys species is slight [44]. Due to the difficulty of access for field investigation and the rapid speciation, extending the sampling and employing more nuclear genes would provide further evidence for the evolutionary history of Rhodochlamys species.

Divergence time estimation
Correct phylogeny and divergence-time estimation are essential for studying evolutionary history. With a complete chloroplast gene set, we can choose suitable genes to facilitate and optimize divergence-time estimation. The crown node age of Musaceae (59.19 Ma, Fig. 6) [50]. Our study used denser taxon sampling and more nucleotide data to increase the accuracy of the divergence-time estimation.
Among those studies for divergence-time estimation of Musaceae [17,18,20,49], two fossils (Spirematospermum chandlerae and Ensete oregonense) were often used: Ensete oregonense, confirmed to be part of Musaceae [51] and Spirematospermum chandlerae Friis is the oldest known fossil of the Zingiberales. This study selected one more fossil (Zingiberopsis attenuate) and one secondary calibration point compared to other related studies [17,18,20,49]. Our analyses suggest that main lineages within Musa diversified from the late Oligocene and accelerated at the late Miocene, and two lineages (Australimusa and most Rhodochlamys species) radiated very recently in the Pliocene /Pleistocene periods. As discussed in Burgos-Hernandez et al. [18], this time frame is consistent with the collision of India with Eurasia and the uplifts of the Qinghai-Tibetan Plateau (QTP). With the uplift of the QTP, the Asian monsoon was initiated in the late Oligocene, followed by several periods of strengthening in the Miocene (e.g., ~15 Ma & ~8 Ma) and a putative abrupt strengthening in the Pliocene/Pleistocene periods (~3 Ma) [52,53]. The intensification of amount and seasonality of precipitation in South East Asia may have produced higher rates of diversification for various biotic lineages [54], which may have led to the evolutionary diversification of Musa, as demonstrated in other species from the lower altitudes of SE Asia, i.e., Lepisorus [54], Pogostemon [55], and Primulina [56]. The recent diversification of Australimusa species in the Pliocene and Pleistocene coincides with rapid orogenesis in New Guinea [57]. The orogenesis of the Central Range in New Guinea was initiated in the late Miocene, but most of the mountain uplift probably occurred since 5 Ma [54]. As found in the sect. Petermannia in the genus Begonia [58], the recent radiation in the Australimusa may be jointly triggered by orogenesis and associated microallopatry. Divergent IR borders and selective pressure analysis Due to possessing many repetitive sequences, the size of IR regions could be variable, and their boundaries are in random dynamics in most plants [59,60]. The contraction/expansion of IR region could bring about gene loss/ addition [61,62]. This study found that the contraction/ expansion of IR region mainly existed in the boundaries of IR regions and LSC region, namely, JLA and JLB (Fig. S2). The IR borders variation showed phylogenetic signal in Musa to a certain extent. According to these two boundaries, the genus Musa can be roughly divided into two groups, i.e., sect. Musa and sect. Callimusa Cheesman emend Häkkinen. The divergences of IR borders also led to the variation of gene composition in the genus Musa. Specifically, within sect. Musa, except for Musa velutina with a single copy of gene rps19, the remaining species contain two copies of gene rps19. Whereas all species of sect. Callimusa Cheesman emend Häkkinen harbors only one copy of rps19, reducing the gene content to 135 (Table 1, Table S2). In addition, M. coccinea lost one copy of the trnH gene. This result is congruent with previous investigations [23]. The different copy numbers of trnH and rps19 genes may hint at their gene substitution on nuclear and/or functional redundancy in the plastid [63]. Generally, variations in the synonymous mutation rate (dS) are likely to be affected by potential factors that could change the mutation rate, e.g., DNA repair. 
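The positive-selection conclusion for ycf2 rests on a standard likelihood ratio test: a codeml run with dN/dS free is compared against a null run with dN/dS fixed at 1, and the doubled difference in log-likelihoods is referred to a Chi-square distribution. The Python sketch below shows how the P value is obtained from the two log-likelihoods; the numbers used are hypothetical placeholders, not values from this study.

from scipy.stats import chi2

def lrt_positive_selection(lnl_alt: float, lnl_null: float, df: int = 1) -> float:
    """Likelihood ratio test between a codeml run with omega (dN/dS) free and a
    null run with omega fixed at 1; 2*(lnL_alt - lnL_null) is asymptotically
    chi-square distributed with df equal to the difference in free parameters."""
    stat = 2.0 * (lnl_alt - lnl_null)
    return chi2.sf(max(stat, 0.0), df)

# Hypothetical log-likelihoods, for illustration only:
p = lrt_positive_selection(lnl_alt=-12345.6, lnl_null=-12351.9)
print(f"P = {p:.4f}")  # P < 0.05 would indicate positive selection, as reported for ycf2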
Nevertheless, the value of nonsynonymous mutation rate (dN) and dN/dS are impacted by the varied mutation rate and driven by selection regimes [64]. In our study, ycf2 and ycf1 were found with dN/dS value greater than 1 (Fig. S5, Table S11). The gene ycf2 was indicated under intensive positive selection. Huang et al. [65] suggested that ycf2 could be a useful DNA marker for estimating sequence variation and evolution in plants. Ycf2 is one of the largest genes encoding putative membrane protein [66,67] and was found to rapidly evolve in Fagopyrum [68], Ipomoea [69], Ophrys [70], Chrysosplenium [71], and Mimosoideae [72]. The extremely high dN/dS value (4.44) of ycf2 indicated that this gene is a valuable marker for the adaptive evolution study of Musaceae. Divergent hotspots identification and molecular markers for Musaceae species The mutations in the plastome are not universally randomly distributed along the sequence and are concentrated in certain regions referred to as the "hotspots" [73]. The highly variable hotspot regions could be used as markers to distinguish closely related species [74] and act as the taxon-specific DNA barcode. In this study, we identified ten highly variable regions (Fig. S6, Table S12). Among them, ycf1 has been recommended as the most promising chloroplast DNA barcodes for land plants [75] and was found to harbor the greatest number of informative sites in this study. The compound region ndhF-trnL, which proved to have the highest Pi value here, has been considered to be the best marker for molecular studies at a low taxonomic level [76][77][78]. However, both ycf1 and ndhF-trnL were less discriminatory when used alone since they could not provide enough haplotypes. The species identification analyses showed the better discriminatory power of the four most variable regions combined (ndhF-trnL, ndhF, matK-rps16, and accD) (Fig. S7). Therefore, we recommend these four regions to be the specific DNA barcodes for Musaceae species. Conclusions This study employed the genome-skimming approach and assembled the complete plastomes of 44 Musaceae species/subspecies, providing valuable genomic resources for this family. Based on the complete plastome analysis, the relationship within Musaceae was resolved with high branch support. In addition, the comparative analysis of plastomes revealed variable regions, which could be used as Musaceae-specific DNA markers. All the obtained genomic resources will contribute to future studies in species identification, population genetics, and germplasm conservation of Musaceae. Taxon sampling, DNA extraction, and sequencing The taxon sampling contains 49 accessions of Musaceae species/subspecies, representing four Ensete species (four accessions), 43 Musa species/subspecies (44 accessions), and one Musella species (one accession) (Table S14). Among these 49 Musaceae plastomes, 45 plastomes of 44 species/subspecies representing two genera (Musa and Ensete) were generated by the current study. Due to the sample collection challenges, 22 of 37 species from sect. Callimusa Cheesman emend Häkkinen could not be included in this study. Fifteen plastomes from other eight families were downloaded from NCBI for analysis. Sixty-four plastomes were used in the current study (Table S14). For data quality consistency, we dropped the plastome of Musa textilis, which presents a distinct short plastome compared to other Musa species (GenBank accession number: NC_022926.1, length 161,347 bp). 
Total genomic DNA was extracted from silica-dried materials using the CTAB protocol [79]. The quality and concentrations of the DNA were assessed using agarose gel electrophoresis and a Qubit 3.0 Fluorometer (Life Technologies). We constructed sequencing libraries using the TruePrep DNA Library Prep Kit V2 for Illumina (Vazyme, TD501). Library lengths were evaluated with the High Sensitivity NGS Fragment Analysis Kit (Advanced Analytical Technologies, Ankeny, IA) on the Fragment Analyzer (Advanced Analytical Technologies). Lengths of all libraries ranged from 300 to 450 bp, and libraries were pooled together at equimolar ratios. Libraries were subjected to 150 bp paired-end sequencing on an Illumina X Ten platform (BGI, Wuhan, China). On average, approximately 3 Gb of clean NGS data were obtained for each sample. All raw read data were submitted to the Sequence Read Archive (SRA) under BioProject PRJNA530661.

Plastome assembly and annotation
Raw reads were trimmed, and adaptors were removed using Trimmomatic v. 0.36 [80]. The quality of the filtered reads was assessed using FastQC (http://www.bioinformatics.babraham.ac.uk/projects/fastqc) to ensure that adaptors and bases below PHRED 30 were removed. We employed NOVOPlasty v. 4.2.1 [81] for the assembly of plastomes by providing Musa balbisiana as the reference (GenBank accession number NC_028439), and all parameters were kept at default settings (see https://github.com/ndierckx/NOVOPlasty). To confirm the reliability of the assembly results, we also used the toolkit GetOrganelle [82] to assemble the plastomes, and the parameter settings followed the online manual (see https://github.com/Kinggerm/GetOrganelle). In rare cases, when NOVOPlasty and GetOrganelle failed to obtain a complete plastome, reads were mapped against the non-overlapping contigs from NOVOPlasty in Geneious to extend their ends and close the gap, using medium-low sensitivity for 100 iterations. Two independent approaches were applied to annotate these 45 plastomes. Firstly, the annotation of the plastome sequences was performed with GeSeq [83], choosing the plastome of Musa acuminata ssp. malaccensis (HF677508) as the reference genome. In the meantime, ARAGORN was selected as a third-party tool to annotate tRNAs. Secondly, we used MAFFT v. 7.388 [84,85] to align and annotate these plastome sequences using the "Annotation Transfer" option with Musa itinerans (NC_035723) as a reference in Geneious. The annotation results from GeSeq and Geneious were subsequently compared and manually integrated. The plastome maps were drawn using OGDRAW [86]. Newly generated plastomes were submitted to GenBank (see Table S14 for accession numbers).

Comparative plastome analyses for 49 Musaceae plastomes
The boundaries between the four plastome regions, i.e., LSC/IRb (JLB), SSC/IRb (JSB), SSC/IRa (JSA), and LSC/IRa (JLA), were inspected with the online program IRscope [87]. According to the phylogeny generated in this study (Fig. 5), we chose 17 representative species for confirming the IR region expansion/contraction. The four junctions between LSC/IRs and SSC/IRs of the 17 species were confirmed with PCR-based product sequencing. Target DNA regions were amplified in 25 µl reactions containing 10 ng (1 µl) template DNA, dNTP mixture 2 µl, 10 × LA PCR Buffer 2.5 µl, 0.5 µl of each primer, and 18.5 µl ddH2O. The primer pairs designed and used for PCR in this study are listed in Table S15. PCR products were bi-directionally sequenced by GENEWIZ Biotechnology Co., Ltd.
(Suzhou, China). The sequences were submitted to the Science DB (available at https://doi.org/10.11922/sciencedb.01436), and the accession numbers are listed in Table S16. Codon usage analysis for protein-coding genes (PCGs) was conducted in DnaSP v. 6.12.03 [88]. PCGs were extracted and concatenated in Geneious before being imported into DnaSP for analysis. The relative synonymous codon usage (RSCU) values were calculated to measure the usage bias of synonymous codons. Three other indices, including the effective number of codons (ENC), the codon bias index (CBI), and the GC content of the synonymous second (GC2) and third (GC3) codon positions, were also computed to assess the extent of the codon usage bias. The online program REPuter [89] was used to detect short dispersed repeats (SDRs), with the parameters set as follows: (1) Hamming distance of 3; (2) maximum computed repeats of 500; (3) minimum repeat size of 30 bp. Besides, tandem repeats (≥ 10 bp) were calculated with the online program Tandem Repeats Finder (http://tandem.bu.edu/trf/trf.html). Three alignment parameters, i.e., match, mismatch, and indel, were kept as two, seven, and seven. The minimum alignment score was set to 80 and the maximum period size to 500. Simple sequence repeats (SSRs) were identified in MISA-web [90]. The minimum number of repetitions was set to 10, 5, 4, 3, 3, and 3 for mono-, di-, tri-, tetra-, penta- and hexa-nucleotide repeats, respectively. The maximum length of sequence between two SSRs to register as a compound SSR was set to 0. Mauve v1.1.1 [91], a plugin within Geneious, was applied to detect genome rearrangements and inversions among the 49 Musaceae plastomes.

Nucleotide substitution rate analysis
Seventy-nine coding sequences (CDSs) were individually extracted from the 49 Musaceae plastomes and separately aligned using the "Translation Align" tool in Geneious. Nonsynonymous (dN) and synonymous (dS) substitution rates and the ratio of nonsynonymous to synonymous rates (dN/dS) were calculated using the CODEML option in PAML v.4.9 [92]. The phylogeny generated from the CDSs dataset was used as the constraint tree. The parameters in the CODEML control file were set as follows: (1) F3 × 4 model for codon frequencies; (2) "model = 0", assuming a single dN/dS value across all branches; (3) "cleandata = 1" to remove gaps; (4) default settings for other parameters (as the alternative model, "fix_omega = 0" and "omega = 2") [64]. For the potential positive selection gene, a null model (set "fix_omega = 1" and "omega = 1" in the control file) was additionally performed following Xiong et al. [93]. An LRT was used to test model fit, and a Chi-square test was conducted to calculate the P value.

Sequence divergence analysis
A sliding window analysis was conducted in DnaSP v. 6.12.03 [88] to locate genomic regions with a high frequency of variation. The alignment of the 49 Musaceae plastomes was generated in MAFFT (with default settings) and used as the input file. The window length and step size were set to 600 bp and 200 bp, respectively. Those regions with nucleotide diversity (Pi) values higher than 0.020 and alignment lengths longer than 600 bp were extracted from the alignment and analyzed individually to estimate their characteristics. The pairwise distance was calculated using the Kimura 2-parameter (K2P) distance in MEGA 7 [94]. Indel polymorphism analysis was conducted in DnaSP v. 6.12.03.
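The sliding-window scheme described above (600 bp windows advanced in 200 bp steps) can be illustrated with a small Python sketch. This is a simplified stand-in for the DnaSP calculation, and the gap-handling convention shown (skipping ambiguous or gapped site pairs) is an assumption rather than DnaSP's exact behaviour.

from itertools import combinations

def nucleotide_diversity(window_seqs: list[str]) -> float:
    """Average pairwise proportion of differing sites (pi) for one window of an
    alignment; site pairs containing gaps or ambiguity codes are skipped."""
    pairs = list(combinations(window_seqs, 2))
    if not pairs:
        return 0.0
    total = 0.0
    for a, b in pairs:
        compared = diffs = 0
        for x, y in zip(a, b):
            if x in "ACGT" and y in "ACGT":
                compared += 1
                diffs += x != y
        if compared:
            total += diffs / compared
    return total / len(pairs)

def sliding_pi(alignment: list[str], window: int = 600, step: int = 200):
    """Yield (start, pi) tuples over the alignment, mirroring the 600 bp / 200 bp
    window scheme used in the study to locate divergent hotspots."""
    length = len(alignment[0])
    for start in range(0, length - window + 1, step):
        yield start, nucleotide_diversity([s[start:start + window] for s in alignment])

Windows whose Pi exceeds the 0.020 threshold would then be flagged as candidate hotspot regions, as done for ndhF-trnL, ndhF, matK-rps16, and accD.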
Phylogenetic analysis For the phylogenetic analysis of Musaceae, two datasets (coding plastid sequences (CDSs) and the complete plastome sequence) were generated. A total of 49 Musaceae plastomes representing 48 species/subspecies were used, including 45 plastomes generated in this study and four downloaded from NCBI (Table S14). Three Alpinia species with plastome in GenBank were added as outgroup (Table S14). The 79 coding plastid sequences were combined, followed by multiple sequence alignment (MSA). For the complete plastome sequence dataset, the IRa was removed and served as inputs for MSA. All alignments were performed using MAFFT [95] and then manually checked in Geneious. We used Modeltest-NG 0.1.6 [96] to determine an optimal nucleotide substitution model under the corrected Akaike Information Criterion (AICc) for each dataset. All the ML analyses were performed in RAxML v8.2.12 [97] by assigning the GTRGAMMA model, and 1,000 rapid bootstrap replicates were run to evaluate the support values for each node. All the BI analyses were conducted in MrBayes v. 3.2.6 [98], and the best-fit models selected for CDSs dataset and the complete plastome sequence dataset were both GTR+I+G. Two MCMC runs were performed with five million generations and four chains, sampling every 5,000 generations and discarding the 25% as burn-in. For the CDSs dataset, best-fit partitioning scheme (Table S17) was determined by PartitionFinder 2 [99], and an additional ML analyse was performed using IQ-TREE [100] with 1000 ultrafast bootstraps [101]. Molecular clock dating The divergence time of Musaceae was estimated using BEAST v2.6.4 [102]. To incorporate multiple fossil calibration points and reduce the bias imported from a single calibration point, the divergence time was estimated by including the whole Zingiberales. SortaDate [50] was used to choose genes suitable for divergence-time estimation. This package determines which gene trees are clock-like, have the least topological conflict with the species tree, and have informative branch lengths. The ML tree generated from the complete plastome sequence dataset was used as an input species tree. As the result of SortaDate, the final screened genes were ccsA, matK, ndhF, rpoC1, and rpoC2. We selected optimal nucleotide substitution models for each of the five genes using Modeltest-NG 0.1.6 [96] under the AICc. These were identified as GTR+G4 for ccsA, matK, rpoC1, rpoC2, and GTR+I+G4 for ndhF. In BEAST, the newick ML tree of Zingiberales inferred from complete plastome sequences was used as a starting tree due to its more robust phylogenetic resolution. Clock models were linked, while site models were unlinked for each gene. The uncorrelated log-normal distribution relaxed molecular clock model was selected with the Yule model as the tree prior. MCMC run was set to 100 million generations, sampling every 10,000 generations. BEAST 2 output was assessed in Tracer 1.7.2 [103] to evaluate convergence and ensure an effective sample size for all parameters surpassing 200. TreeAnnotator v2.6.4 was used to annotate the maximum clade credibility tree after removing the first 20% of samples as burn-in. Three fossil records and one secondary calibration point were used in this divergence time estimation. Spirematospermum chandlerae [104] was used to calibrate the crown age of order Zingiberales with a mean age of 83.5 Ma. Zingiberopsis attenuate [105] was applied as a mean age of 65 Ma for the crown node of the Zingiberaceae family. 
Then Ensete oregonense [106] was used to calibrate the crown age of Ensete and Musella clade with a mean age 43 Ma. Each fossil calibration point was assumed to follow a normal distribution with a standard deviation of 2 and an offset of 2, resulting in 81.6-89.4, 63.1-70.9, and 41.1-48.9 Ma 95% intervals, respectively. The secondary calibration point was generated based on previous studies on Monocots [107,108]. It was placed on the stem node of Zingiberales with a normal distribution as a mean age of 100 Ma and a broad standard deviation of 5 (95% intervals 90.2 -110 Ma).
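The 95% intervals quoted for the three fossil calibration priors follow directly from a normal distribution whose mean is the fossil age plus the 2 Ma offset, with a standard deviation of 2. The short check below (using SciPy, purely as an illustration of the arithmetic) reproduces the quoted intervals.

from scipy.stats import norm

def prior_interval(fossil_age: float, offset: float = 2.0, sd: float = 2.0):
    """95% interval of a normal calibration prior centred at fossil_age + offset,
    matching the calibration scheme described for the three fossils."""
    mean = fossil_age + offset
    return norm.interval(0.95, loc=mean, scale=sd)

for name, age in [("Spirematospermum chandlerae", 83.5),
                  ("Zingiberopsis attenuate", 65.0),
                  ("Ensete oregonense", 43.0)]:
    lo, hi = prior_interval(age)
    print(f"{name}: {lo:.1f}-{hi:.1f} Ma")
# ~81.6-89.4, 63.1-70.9, and 41.1-48.9 Ma, as quoted in the text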
2022-03-23T01:45:35.026Z
2022-03-21T00:00:00.000
{ "year": 2022, "sha1": "4288bb4389f7347befb2b994b9a05de0ca74a82e", "oa_license": "CCBY", "oa_url": "https://bmcgenomics.biomedcentral.com/track/pdf/10.1186/s12864-022-08454-3", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "b6c902955cfebe8bbc515eda6ceddcacdd4f7b45", "s2fieldsofstudy": [ "Biology" ], "extfieldsofstudy": [ "Medicine" ] }
250175343
pes2o/s2orc
v3-fos-license
Spatial match analysis of multiple factors in the geopolitical environment of the Arctic Passage This study seeks to provide a basic approach to fulfill the spatial visualization of geopolitical environmental factors required for the navigation of vessels in the Arctic. Multi-dimensional geopolitical environmental factors of the Arctic Passage are analyzed and classified into geopolitics, geoeconomics, geo-military, geoculture, and laws and regulations related to geography. Their characteristics are extracted to form an attribute information table matching spatial layers. Based on the information category and basic characteristics, the spatial match method is applied and connected with the spatial layers to examine the spatial point, polyline, and polygon. According to the qualitative description, the study extracted the quantitative indicators for the following spatial–temporal pattern analysis. The standard deviational ellipse is used to analyze the spatial–temporal patterns and trends of the geopolitical environmental indicators of the Arctic Passage in the Arctic and Northeast Asia. The expansion and contraction of geoinformation coexist in the countries surrounding the Arctic Passage. The spatial–temporal changes indicate that the Arctic channel has a great economic impact on the Nordic countries and Northeast Asia, especially the coastal areas of China and Japan. The characteristic extraction and spatial match of the geopolitical environment provide integrated Arctic geoinformation inquiry and services for the diplomatic, administrative, and legal preparations required for Arctic navigation. Therefore, the geospatial analysis conducted provides scientific support and a basis for the geographical distribution and developing trends of visualization and spatial–temporal pattern in Arctic navigation. The results of this research will help decision-makers to make a comprehensive judgment on governance related to the sustainable development of the Arctic Passage. Introduction With the warming of the Arctic Ocean, the ice has been melting at an accelerating speed, making the full opening of the Arctic routes possible [1][2][3][4][5]. Therefore, seeking a suitable method for risk decision-making in Arctic route planning is currently a necessary research topic [6]. The Arctic Passage is closely related to the world geopolitical pattern [7,8]. The Arctic passages include three routes: the Northern Sea Route (NSR), the Northwest Passage (NWP), and the Transpolar Sea Route (TSR). The environmental and socioeconomic impacts of the Transpolar Sea Route could be locally significant because this route is shorter and deeper than the NSR [9]. The huge benefits generated by the Arctic Passage have gradually attracted the attention of many Arctic countries, which have attached great importance to the Arctic routes and Arctic geopolitics for a long time. Non-Arctic countries, though away from the Arctic, also consider the issue of Arctic routes an important agenda. During navigation along Arctic routes, ships are not only bound by the maritime administrative regulations of the countries along the route but are also indirectly affected by various factors, such as the political influence, economic development, military, and the culture of each country. So far, services for ship navigation mainly rely on the global positioning system (GPS). 
Various researchers have conducted a comprehensive analysis of the energy efficiency of ice-going ships to ascertain the most energy-efficient navigation in the Arctic routes for both economic and environmental purposes [10]. A model developed by the researchers estimates how ice conditions change the probability of blockage along the route and play on the economic and environmental attractiveness of the NSR [11]. Some researchers also collected daily information for ships sailing along Russia's NSR to investigate spatial and temporal variations of shipping and learn about the possible drivers of traffic levels and future trends [12]. In addition, geoinformation support tools have been developed to manage natural risks within the NSR area, which are used to build the geospatial contents and allocate the interconnected components of the solution space [13]. The central Arctic Ocean is designated as a Particularly Sensitive Sea Area under international law, which provides a useful mechanism for creating and updating precautionary shipping measures as more information becomes available [14]. The navigability of passages is affected by many factors, such as meteorological and hydrological conditions, extreme events, facilities, water depth, draft restrictions, and local laws and regulations [15]. The most critical elements restricting Arctic NSR development include extreme climate, political considerations, and sea ice conditions based on expert inputs, whose relations are discussed among these factors as well as their policy implications-as they are likely to contribute to the decision-making of shipping companies and the policy focus of the administration [16]. However, these navigation systems do not provide information on whether the vessels have a legal and political basis for free navigation and scientific investigations in different sea areas. Besides, the related jurisdictional agencies and the economic, political, cultural, and military conditions of the countries, as well as the navigation issues reflected by geoinformation, are not directly concerned. It is against this backdrop that the spatial pattern of the Arctic Passage is analyzed in terms of geopolitical environmental factors, which include regional politics, economics, culture, military, and other factors involved, to achieve the matching of geoinformation with the spatial location and to investigate the spatial-temporal pattern. When planning the Arctic route before navigation and changing the route during navigation, a realistic problem needs to be solved urgently: how to intuitively combine the required geographical environment with spatial information? The findings of this research will play a significant role in aiding the planning of the Arctic navigation routes and facilitating the changes of routes of various countries during the navigation process. At the same time, this paper considers the interaction between the Arctic Passage and its geopolitical and geographical environments. On the one hand, it considers the comprehensive influence of various factors on the Arctic Passage and associates them with geospatial information. From the perspective of spatial visualization, the study analyzes the distribution of the geographical factors of various actors in geographical space and changes the form of word representation of geographical information in the past. 
Intuitively and comprehensively considering the geographical information of the waterway is beneficial for Arctic navigators when a trip changes before and during the voyage. On the other hand, given the changes in the influence of the geographical environment due to the opening of the Arctic Passage, the study takes the NSR with more navigation as an example, and adopts the spatial analysis method for the spatial-temporal pattern analysis. This step was taken due to the effects of the geopolitical environment in the form of the indicators of multiple influencing factors, which is advantageous to the comprehensive evaluation of the influence of the geographical environment factors of the Arctic Passage. Study area The Arctic Passage generally refers to a collection of sea routes across the Arctic Ocean that connect the Pacific Ocean with the Atlantic Ocean. As a result of the seasonal changes in sea ice, Arctic Passage routes are not fixed [17]; instead, there are a number of different routes that mainly consist of the Northeast Channel of the Russian coast [18], the Northwest Channel across the Canadian islands, and the North Pole channel at the center of the Arctic Ocean ( Fig 1). The geopolitical environment of the Arctic Passage involves multi-dimensional information about countries or regions during navigation, such as the political, economic, military, cultural, and legal and regulatory issues. Consequently, this paper draws features based on the multi-dimensional geopolitical environment of the Arctic Passage. Geopolitical factors. The geopolitical factors of the Arctic Passage refer to a type of aggregation between countries in the process of geopolitical right competition in the navigation, i.e., a layout wherein each country is assigned to a certain group. The geopolitical factors surrounding the Arctic waterway are of great importance. On the one hand, they reflect the geopolitical pattern of the Arctic, and on the other hand, they are concerned with the issue of free navigation, which is the focus of global maritime rights [19,20]. Considering the strong dynamics and extensibility of the Arctic route, its geopolitics should not be the same as those of a fixed area. As a result, the influence of its geopolitics spreads to the extension of the Arctic route [21]. The geopolitical factors of the Arctic Passage not only revolve around the ownership of the seas in the Arctic Ocean but also the terrestrial attributes of coastal countries. Terrestrial attributes are mainly concerned with administrative countries and international organizations with certain Arctic influences, such as Arctic countries with geographical advantages, the Arctic Council, the Arctic Parliament, and the Northern European Council. Geoeconomic factors. Geoeconomics refers to the economic relationship between countries or regions based on their geographical location, resource endowment, economic structure, etc. The relationship can be an alliance or competition, cooperation, opposition, or even containment [20]. The Arctic Passage has important geoeconomic and commercial value. The economic benefits of the channel are mainly manifested in three aspects. The first is the economic benefit of shipping, i.e., the shipping distance is shortened, which in turn reduces costs. The second is the trade-economic interest; i.e., stakeholders who rely on the Arctic Passage to develop their trade economy have more interest because 90% of their trade is dependent on maritime transportation. 
The third is the resource and economic interest, which is facilitated by the exploitation of Arctic resources, and this greatly mitigates the energy crisis through the Arctic Passage. The shipping economy, trade economy, and resource economy in the Arctic region have distinct geographical characteristics, and their economic interests are classified into three categories: countries with exceptional geographical advantages around the Arctic; areas affected by the extension of the Arctic Passage, such as Europe and North America; and the beneficiary countries of the traditional trade route, whose domestic trade economy and energy exports are greatly affected by the opening of the route [21,22]. Geo-military factors. Geo-military is a combination of the terms "geography" and "military," and it can be understood as a military situation formed under the influence of geographical factors. The opening of the Arctic Passage has facilitated and diversified military delivery and military operations through the Arctic Ocean, making the region a "new battlefield" for strategic games among the Arctic countries. The Arctic "fights" and the trend of militarization in those countries have intensified. For instance, the United States and Russia attach great importance to military control over the Arctic, and they build military bases, conduct exercises, and deploy anti-aircraft guided missile systems and anti-submarine warfare airplanes in this region. North Atlantic Treaty Organization's (NATO) eastward expansion and American strategies to fight Russia are likely to exist for a long time, and Russia is also likely to form strategic counter-measures and deterrence against the United States and European countries [23]. However, at present, almost all Arctic countries' policy statements emphasize that a realistic military threat does not exist in the Arctic. All countries have promised that they will abide by the basic principles of international law to ensure peace and stability in the Arctic. Factors of laws and regulations related to geography. The utilization of the Arctic Passage must comply with corresponding legal rules. The legal system of Arctic navigation comprises two parts: relevant international law and domestic law. The nature and regulations of the two types of laws are inconsistent. However, as their scopes of application, contents of rules, and legal effects are related to the district to a certain extent, it is necessary to match the laws with geographic information to ensure that the Arctic Passage is used accurately and legally. Geocultural factors. Geoculture generally analyzes and predicts the strategic situation of the world or a region and the status of cultural expression concerning the political behavior of a country, i.e., according to the geographical form of various geographical factors and political patterns. The existence of geoculture has given birth to different nations and countries, thus influencing relations between countries. In some instances, it has even fueled national war and regional turmoil. The origin of culture, habitat of nationality, and distribution of aborigines in the Arctic are important factors for the investigation of Arctic geopolitical culture. The "cold culture" of the Arctic natives-Eskimos (Inuit)-is also known as the "white culture," which is largely predominant in the Bering Strait, the Aleutian Islands, Alaska, northern Canada, and Greenland. 
The Lapps, with their reindeer civilization in the Arctic Nordic region, are Nordic people, Urals, a mix of the Mongolian and European races. They are mainly distributed in the Arctic regions of Norway, Sweden, Finland, and Russia. Data sources This paper uses descriptive data, such as interest groups belonging to countries, and quantitative data, such as the Rule of Law Index from the Worldwide Governance Indicators (WGI) project, to measure a country's geopolitical level. The quantitative data includes GDP, per capita GDP, GDP ratio, and economic density. Therefore, this paper relies on the data provided by the World Bank to measure a country's geoeconomic level. The spatial point data is used to measure a country's geo-military level, and it combines the location of the military base with the basic military information. The geocultural level of a country is measured by the employment ratio; the proportion of primary, intermediate, and advanced education; and the Human Development Index (HDI). This data is provided by the World Bank and the United Nations Development Programme. This paper further extracts descriptive data from laws and regulations of various countries and forms an attribute table to explain the laws and regulations. Spatial match and evaluation. Based on the classification of the elements of the geopolitical environment and the analysis of its related influencing factors, this study combines the functions of the Geographic Information System (GIS) to conduct geoinformation matching research in the Arctic Passage [24][25][26][27]. The infrastructure and its matching process are determined on the basis of the fundamental logic and ideas of the geopolitical environmental system of the Arctic Passage, as shown in Fig 2. Flexible and simple functions, such as map settings and map analyses of GIS, are employed in the geopolitical environment matching process to provide quantitative and visual geodata support and services for the international activities involving Chinese vessels in the Arctic Ocean. Matching the geoinformation and geospatial information requires GIS. Supported by computer software and hardware, spatial database, and the theories of systems engineering and information science, the matching process aims to manage and synthesize spatial data as well as data on various topics and provide geoinformation for the planning, decision-making, and management of Arctic activities. 
Specifically, (1) the attribute of geoinformation is categorized into geopolitics, geoeconomics, geo-military, and geoculture; (2) the spatial match method is used to collect and integrate the multi-dimensional attribute of the geoinformation of the countries involved during the navigation process; (3) according to the properties and characteristics of various geoinformation, relevant attribute tables in the GIS are formed by means of data classification, management, and feature extraction; (4) the basic base map is constructed by matching multi-dimensional attribute information with spatial position-in terms of different types of geography features in the form of points, lines, and surfaces-to obtain a geoinformation layer; and (5) according to various geodata layers and buffer areas within the influence of each segment of the Arctic Passage, the match process provides the necessary thematic map productions with spatial analysis services, as well as the geoinformation browsing based on spatial location, and facilitates the downloading of relevant laws and regulations and a feasibility check of the vessel activities. The geopolitical environmental spatial match of the Arctic Passage is a kind of "space standardization" based on classification. It not only creates thematic maps according to existing needs but also facilitates basic map browsing, provides space-attribute information, and plays a crucial role in geospatial analysis. This study provides a comprehensive visual information platform for participating in Arctic affairs and meeting the demands of the Arctic scientific research ship in navigation, such as its geographical location and the query of legal provisions. It also highlights the rules and precautions to be observed and the processing procedures of related events during navigation, as well as geoinformation, such as economy, trade, and laws and regulations for the Arctic scientific expedition team. Geospatial analysis can facilitate an independent analysis of points, lines, and surfaces in the relevant areas of the Arctic Passage. It can also achieve a range of statistics and comparative analyses between different areas. Moreover, spatial statistical analysis and spatial analysis can be performed as required. Therefore, this study aims to determine the range and trends affected by the geoinformation of the channel and provide guidelines for Arctic navigation, scientific research, and participation in Arctic affairs to promote resource development and utilization. Standard deviation ellipse. To ascertain changes in the direction of multiple indicators and measure the spatial distribution of the geographical environment in Arctic countries, which are in close proximity to the Arctic Passage, this paper analyzes the size and centroid of the standard deviational ellipse (SDE) [28]. The SDE describes the spatial distribution characteristics of related elements using the basic parameters of an ellipse, such as the spatial distribution, center, long axis, short axis, and azimuth [29]. The center of the ellipse represents the average center of the spatial distribution of the elements, while the direction of the long axis is the direction of the major trend of the elements. The length reflects the degree of dispersion of the elements in the main trend direction, and the short axis reflects the range of the element's spatial distribution. 
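Before turning to the ellipse parameters, the matching workflow in steps (1)-(5) above can be sketched in code. The Python example below uses GeoPandas to join an attribute table to a country-boundary layer and to select the countries affected by a buffered route segment; the file names, key column, attribute columns, and buffer width are hypothetical assumptions for illustration, not the study's actual data or software.

import geopandas as gpd
import pandas as pd

# Hypothetical inputs: a country polygon layer and an indicator table sharing a key.
countries = gpd.read_file("arctic_countries.shp")            # spatial base map (polygons)
indicators = pd.read_csv("geo_environment_indicators.csv")   # WGI, GDP, HDI, ... per country

# Step (4): match multi-dimensional attribute information with spatial position,
# producing a geoinformation layer.
layer = countries.merge(indicators, on="country_name", how="left")

# Step (5): spatial analysis within the influence of a route segment, e.g. a
# buffer around the NSR polyline used to select the affected countries.
nsr = gpd.read_file("northern_sea_route.shp").to_crs(countries.crs)
buffer_zone = nsr.buffer(200_000)                 # 200 km, assuming a metric projected CRS
affected = layer[layer.intersects(buffer_zone.unary_union)]
print(affected[["country_name", "rule_of_law", "gdp"]])  # hypothetical column names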
It is worth noting that the larger the axial ratio, the stronger the centripetal (clustering) tendency of the data; conversely, the smaller the ratio, the greater the degree of dispersion. The azimuth is the angle rotated clockwise from north to the long axis of the ellipse and represents the direction of the spatial distribution of the elements [30].

Results and discussion

This paper puts forward three forms of expression related to the analysis and extraction of the elements of the geopolitical environment. The first denotes qualitative descriptive attributes; the second pertains to quantitative index representation; the third refers to graphical and intuitive expression. First, we analyze and classify the various factors that influence geopolitics from a qualitative perspective. Second, we extract characteristics by analyzing the classified elements of the geo-environment and their respective spatial features from the qualitative perspective and connecting them with the spatial match. Moreover, exploring the indicator representation from a quantitative perspective, according to the classification factors of the geopolitical environment, is convenient for data extraction for spatial-temporal pattern analysis.

Characteristic extraction of the geopolitical environment

The study synthesized the factors of the geopolitical environment, including geopolitics, geoeconomics, geoculture, geo-military, and laws and regulations related to geography, and extracted the corresponding attributes to establish the link between basic attributes and geographical location. Table 1 reveals that the geopolitical environmental factors of the Arctic Passage feature certain characteristics. The results are presented in the form of layers to illustrate the spatial visualization of multi-dimensional thematic data, such as the spatial match of geodata. The geopolitical characteristics of the Arctic route can be considered from three dimensions, namely the Arctic region; the route and the buffer zone around it; and the areas outside the Arctic Ocean to which the route extends. The Arctic region is a geopolitical combination of the five countries in the Arctic, the eight countries around the Arctic, and Barents Europe. The scope affected by the route and its buffer zone contains the continental shelf and the exclusive economic zone of such countries. However, the demarcation of the buffer zone may be contested by Arctic countries, such as Russia, Norway, Denmark, and Canada, owing to the delimitation disputes in the Arctic; these countries have submitted claims to the Commission on the Limits of the Continental Shelf for a continental shelf beyond 200 nautical miles. Other areas refer to the range that the Arctic Passage can reach outside the compass of the Arctic countries, such as other areas extended by the route. Free navigation is an Arctic geopolitical focus issue that is directly related to the Arctic Passage. Many analyses in geopolitics need to use the quantitative representation of indicators. In this regard, the indicators covering the six dimensions of governance in the WGI are used, namely, voice and accountability, political stability and absence of violence/terrorism, government effectiveness, regulatory quality, rule of law, and control of corruption. The indicators correspond to countries to establish a spatial match. The opening of the Arctic Passage involves not only the countries around the Arctic but also countries whose overseas trade is mainly affected by the route.
Thus, the study considers all the cited countries in terms of geoeconomic factors. Their important ports, foreign trade volume, and resources are factors that need to be considered in geoeconomics. According to the characteristics of geoeconomics in the Arctic Passage (Table 1), the spatial match mainly employs economic information for each country as the basic unit, together with time series of the main economic indicators. Table 1 displays the specific match scheme.

The characteristics of the geo-military elements reflect the regional influence of the actual military strength of each country. The locations of the countries' military bases in the Arctic can be presumed, whereas nuclear submarines under the ice or under the water remain uncertain. Among the geo-military factors, the scale of military bases, the number of equipment and personnel, and military expenditure can be extracted as quantitative indicators, providing data for the subsequent evaluation of the geopolitical environment.

The geoculture of the Arctic Passage includes the relevant countries and regions of the land and sea areas. It has three main characteristics. First, regionalism is formed mainly in terms of national culture; second, the stability of the spatial distribution depicts regional continuity to a certain extent; third, it possesses the characteristics of intercontinental distribution and national boundaries. The attribute properties are mainly extracted from data on the humanistic environment of the related countries and regions of the Arctic Passage. In addition, the extraction of the humanistic environment index is conducted on the basis of the basic units of the Arctic countries and their subregions. Among the geocultural elements, quantitative indicators can be used to represent the level of education, the level of urban and rural development, social protection and labor, the degree of comprehensive development, and other types of information on the Arctic countries and sub-regions near the Arctic channel. In this manner, index data can be provided for the following analysis of the spatial-temporal evolution of the geopolitical environmental pattern. Table 2 presents the results.

International law and domestic law form part of the laws and regulations of the Arctic Passage. International law related to Arctic navigation mainly comprises the 1982 United Nations Convention on the Law of the Sea (hereafter referred to as "the Convention") and the relevant rules of the International Maritime Organization (IMO). First, identifying sea areas with different legal attributes is important to the geographic information of the Arctic Ocean, in accordance with the maritime delimitation rules established by the Convention. Second, navigation rights vary in seas with different legal attributes, from internal waters and the territorial sea to the exclusive economic zone, the high seas, and the "straits used for international navigation"; therefore, we need to identify the corresponding navigation rights for the different legal attributes. Third, the general navigation rules and the special rules of the IMO on polar navigation are the main rules. The characteristics of these international laws are extracted according to the degree of correlation with geographical location, that is, continuous but with necessary discontinuity. Table 3 provides the interpretation of certain articles and feature extractions.
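To make the link between legally differentiated sea areas and vessel position concrete, the following sketch stores zones with different legal attributes as polygons and looks up the rules applicable at a given position. The polygon coordinates and the attribute strings are purely illustrative, and Shapely is assumed as the geometry library; in the actual workflow the zone geometries would come from the surface data described below.

```python
from shapely.geometry import Point, Polygon

# Illustrative (not real) zone polygons with their legal attributes.
zones = [
    {"name": "internal waters (example)", "navigation": "coastal-state consent required",
     "geom": Polygon([(0, 0), (2, 0), (2, 2), (0, 2)])},
    {"name": "territorial sea (example)", "navigation": "innocent passage",
     "geom": Polygon([(2, 0), (6, 0), (6, 4), (2, 4)])},
    {"name": "exclusive economic zone (example)", "navigation": "freedom of navigation",
     "geom": Polygon([(6, 0), (12, 0), (12, 8), (6, 8)])},
]

def rules_at(lon, lat):
    """Return the legal attributes of the zone containing the given position."""
    p = Point(lon, lat)
    for zone in zones:
        if zone["geom"].contains(p):
            return {"zone": zone["name"], "navigation": zone["navigation"]}
    return {"zone": "high seas (example)", "navigation": "freedom of navigation"}

# Example query for a (toy) vessel position.
print(rules_at(4.5, 1.0))    # falls inside the illustrative territorial-sea polygon
```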
The quantitative index of laws and regulatory geography can consider the strength of the legal rights of countries adjacent to the Arctic Passage.

Spatial match of the geopolitical environment

In this study, the spatial match mainly involves geospatial data and attribute data as well as the necessary base map data. Spatial data consist of three forms: point, polyline, and polygon; they are generally constructed from point to line and then to surface. Attribute data are geoinformation characteristic data and include geopolitics, geoeconomics, geoculture, geo-military, and laws and regulations. As the base map data cover much more information, they are managed by establishing a base map vector database, which mainly includes administrative divisions, Arctic stations, coastlines, ice boundaries, rivers, and Arctic Ocean ranges. Using the information on the Arctic scientific research stations and the important landmarks of the coastal countries, the corresponding point-like vector data are drawn on a desktop GIS; based on the information about the baseline and the external boundary of the territorial sea and the relevant connection rules and relationships, the point-like vector data are then transferred into linear data. In terms of the different definitions of the various sea areas and exclusive economic zones, depending on the countries concerned and the boundaries of each region, the relevant surface data are obtained. To ensure consistency of data projection, all the thematic data and base map data are projected using the Polar Stereographic Projection with a standard latitude of 71°N.

Table 3. Characteristics of laws and regulations in the Arctic Passage.
International law / United Nations Convention on the Law of the Sea: navigation rights vary in the seas with different legal attributes, from internal waters, the territorial sea, the exclusive economic zone, and the high seas to the "straits used for international navigation".
International law / International Guidelines for the Navigation of Ships in Polar Waters (Polar Code): stricter standards of environmental protection than MARPOL 73/78 are adopted, requiring ships operating in specific Arctic waters to apply for polar ship certificates.
(further rows truncated in the source)

Spatial match of the geopolitics. According to the extraction of characteristics and its relationship with space, in terms of the geopolitical information of the Arctic Passage, the relationship between the countries or groups of countries involved must be viewed globally, and the geographical information of the political pattern should be interpreted from a global or large regional perspective. Based on the classification of attributes, the geopolitical plate is determined by different classifications of geographical attributes, which mainly include the geopolitical patches in accordance with the interest groups of the Arctic Passage and the interest groups involved in Arctic affairs. When considering geopolitical factors, the state is the basic unit of the terrestrial factors, which can be analyzed based on three major interest groups: the Arctic eight countries (A8), the five coastal countries (A5), and the extraterritorial countries (Fig 3). Some geopolitical characteristics are shown in Table 4, which illustrates the present identity of each country in different international organizations, including whether it has joined the organization and its specific status.
The issue of the delimitation of the continental shelf revolves around the ownership of the sea area, and it is mainly based on the documents approved by the Commission on Limits of the Continental Shelf. In this context, delimitation also refers to the Arctic countries' application program for the continental shelf of the Arctic Ocean, which includes the territorial sea, the adjacent area, the exclusive economic zone, and the continental shelf. Spatial match of the geoeconomics. The actual geographical location, geoeconomic factors related to ports, and the number and tonnage of vessels are mainly united in the form of point, polyline, and polygon to achieve spatial expression in the form of patches. When solving the problems of international correlation, spatial analysis is adopted with economic indicators as the mainline. Therefore, the geoeconomics of the Arctic Passage is characterized by the nation and other subregions, such as counties and provinces, as the basic units. After collecting various economic data and extracting the characteristics, the specific features and index information were recorded in the attribute table (Table 5). The index data were also matched with the spatial map, and GDP growth was used as an example (Fig 4). Spatial match of geo-military. Therefore, this paper organizes the basic military information mainly from two angles: the spot (Fig 5) and the entire area. First, it combines the location of the military base with the basic military information, and second, it focuses on the military forces of various countries and the regional influence of each country. Meanwhile, the geo-military information is mainly based on the spatial expression of the buffer zone under the Arctic countries' military influence, which combines the space and attributes with the military-based location data as the mainline. Simultaneously, the influence of different patches' information on ice and sea is considered. Spatial match of geoculture. According to the geocultural structure of the Arctic, the area is divided in terms of the location of geographical and cultural distribution. Analysis of the geo-cultural information of different countries and regions can help the fleet effectively avoid various ethnic contradictions during the Arctic navigation and to facilitate navigation activities. In the Arctic, or the circumpolar region, the people are indigenous inhabitants of the northernmost regions of the world; therefore, they can represent geoculture to a certain extent. For the most part, they live beyond the climatic limits of agriculture, where climatic gradients determine the effective boundaries of the circumpolar region. Geocultural spatial match is mainly based on the buffer zone in the subregion, ethnic distribution, and the spatial analysis of cultural characteristics and international relations. 3.2.5 Spatial match of Laws and regulations related to geography. All coastal countries have domestic laws concerning Arctic navigation and the laws are applicable within the sea areas under their jurisdiction. Although these laws should be based on the international conventions that specific countries have ratified, differences in geographical location, national strength, and foreign policy have led to the selection of different governance models at the domestic law level [32,33]. For example, because of geographical advantages and historical reasons, Russia and Canada have a strong control tendency over the Northeast Passage and the NWP. 
The governance of the channel is mainly based on the ambiguity principle of the United Nations Convention on the Law of the Sea, which allows the expansion of power through domestic legislation. By contrast, some countries, such as Norway and Iceland, manage the channel by formulating specific implementation regulations based on the relevant international conventions that they have ratified. Information on such domestic law was interpreted, and the feature extraction was carried out according to the description of the geographical location in each article. Where necessary, the segmentation features of the channel were extracted according to the provisions of the laws and regulations, and the data influenced by the laws and regulations were extracted as surface features. Both types of law (i.e., international law and domestic law) need to be interpreted in a way that matches their geographical location, so that the relevant characteristics can be extracted, attribute lists formed, and coordinated with geographical positions. In terms of the characteristics of the laws and regulations of the Arctic Passage, and considering all the factors above and the consistency of attribute and spatial information, this paper interprets the laws and regulations and forms an attribute table. Fig 6 depicts a case of map representation as an example. Based on the specified provisions, the Arctic Passage is segmented, and the range of the buffer zone is determined. The overlapping areas can be illustrated with different layers.

Spatial-temporal pattern analysis

According to the multi-dimensional data of the Arctic Passage obtained above, the spatial pattern is delineated. This study aims to analyze the geopolitics, geoeconomics, and geoculture patterns of the target areas. This paper mainly takes the country as the basic unit, selects corresponding indicators from geopolitics, geoeconomics, geo-military, geoculture, and legal and regulatory efficiency, and performs spatial statistical analysis based on the indicators. Corresponding thematic maps are produced and used as needed. The thematic maps contain a large amount of information in the form of images, symbols, annotations, and colors, which can provide basic geodata for polar scientific research. In addition, this paper clearly represents the temporal and spatial distribution features of various cartographic objects and the connections among them.

Analysis of spatial patterns based on SDE. According to the classification of geopolitical environmental information and the characteristics of the indices, government effectiveness, GDP growth, military expenditure, HDI, and rule of law were selected to represent the different types (S1 Table). Based on the SDE of the respective index, the results are shown in Table 6 and Fig 7. As for government effectiveness, both the SDE axis and area are smaller than the datum ellipsoid, which is a clear indication that the index is shrinking in the Arctic countries. The coordinates of the center move to the southwest and the azimuth gets larger, implying that the government effectiveness of the Nordic countries has a positive and strong effect on stimulating the Arctic. GDP growth represents geoeconomics, and its overall spatial distribution pattern shrinks toward the Nordic countries. Apparently, the proportion of military expenditure increases greatly and the center moves to the northwest, expanding toward Russia, Canada, and the United States.
The expansion trend is quite obvious, indicating that the military forces of Russia and the United States have a stronger spatial layout in the Arctic region, which is also consistent with the density of military base stations. According to the distribution pattern of the HDI, the SDE and the datum ellipsoid have a high degree of consistency. Notably, the focus is slightly inclined to the southwest, and the spatial layout is slightly contracted toward the United States and Canada. The SDE of the Rule of Law expands to the northwest, but the degree of expansion is less than that of military expenditure. This is closely related to the fact that Russia controls the Northeast Channel through its domestic law and Canada controls the Northwest Channel.

Analysis of spatial-temporal evolution based on SDE. The distribution pattern of the import and export indicators in the Arctic region (S2 Table) shows an evolution from west to east and from south to north, that is, a pattern of "west (slightly north) and east (slightly south)", as shown in Fig 8. According to the SDE, from 2005 to 2018, the distribution range of export GDP was reduced from 15,175,261.806 km² to 14,113,814.958 km², which shows a trend of spatial contraction, as illustrated in Fig 8a. The coordinates of the SDE centroid changed from (22.234°W, 73.266°N) to (21.783°W, 72.785°N). The azimuth gets larger and the long axis of the SDE becomes shorter, which is a clear indication that the main contraction of the Arctic export volume is east-west rather than north-south. This also reflects a reduction in Russia's and Canada's exports as a result of the contraction of the ellipse. As shown in Fig 8b and 8c, by 2015 the SDE of the import and export GDP had contracted to the southwest. Comparing the SDE of the import GDP and the export GDP in 2018, as presented in Fig 8d, the long axis of the export volume is longer while its short axis is shorter than that of the import volume. The export volume stretches in the east-west direction but contracts in the north-south direction, which is a clear indication that the export volume extends toward the Nordic countries, Canada, and the United States. The general trend shows that, with the opening of the Arctic Passage, the spatial distribution of Arctic countries' international trade as a share of GDP is contracting toward the Nordic countries, Canada, and the United States. This implies that the Arctic Passage has increased the proportion of foreign trade in the Nordic countries' national economies. According to the SDE of the spatial distribution of imports and exports in Northeast Asia, the distribution pattern is characterized by a northwest-to-southeast direction, as shown in Fig 8. The SDE of export GDP shows a state of expansion and a subsequent contraction, as shown in Fig 8e, where the azimuth gets smaller and the general direction approaches the southeast. The spatial distribution of import GDP in the Northeast Asian countries shows a tendency toward the southeast, as shown in Fig 8f. This manifests a trend of expansion from 2005 to 2015, i.e., as the long axis of the ellipse becomes larger, the short axis becomes shorter and the oblateness gradually increases. A slight contraction was observed in 2018, but the azimuth continues to get smaller and the overall trend toward the southeast is maintained. From 2005 to 2015, the spatial distribution of import and export GDP also expands to the southeast, as shown in Fig 8g.
It remains in a state of contraction from 2015 to 2018 as the azimuth gets smaller and the oblateness gradually increases. As shown in Fig 8h, the export GDP is slightly smaller than the import GDP, but it tends to approach China, Japan, and South Korea. The fact that the contribution of foreign trade to the national economy of the Northeast Asian countries tends to expand toward the southeast indicates that in recent years, with the opening of the Arctic Passage, the export volume of the countries in Northeast Asia has played a significant role in their economic growth, especially in China and Japan.

Therefore, after the analysis and discussion, the spatial match can be conducted on the basis of the relationship between feature extraction and geographic information to establish a spatial association, which is represented by spatial data. The innovation of establishing a connection between geopolitical analysis and spatial mapping is to utilize a geopolitical environmental methodology that integrates qualitative analysis, quantitative characterization, and spatial match. The practicability of this approach is that it can not only render the geopolitical environment more intuitive and visual but also promote in-depth analysis of the spatial-temporal evolution. Moreover, it provides effective research ideas and tools for researchers, stakeholders, and decision makers of the Arctic Passage in analyzing problems from the perspective of space, time change, and volume difference.

Conclusion

The geopolitical environment of the Arctic Passage not only refers to geography and politics but also consists of many humanistic societies and their influencing factors related to geography. Based on the high demand for geospatial information about the Arctic Passage in navigation and scientific research activities, this paper investigates geoinformation match methods. The results provide a geopolitical environmental methodology that integrates qualitative analysis, quantitative characterization, and spatial match of the Arctic Passage, which not only renders the geopolitical environment more intuitive and visual but also promotes in-depth analysis of spatial-temporal evolution. In practical application, first, it provides integrated Arctic geoinformation query and retrieval services for Arctic expeditions and commercial navigation to ensure access to important information that must be known and observed in the process of course changes, such as geopolitics, geoeconomics, and laws and regulations, which play a significant role in the diplomatic, administrative, and legal aspects of Arctic navigation. Second, it provides visual, demand-oriented, and spatial-temporal geodata support and services required to ensure efficiency in the international affairs of vessels in the Arctic Ocean. Third, an analysis of spatial statistics was conducted on the basis of the indicators of geoinformation, geographical spatial distribution, and the spatial-temporal pattern of the Arctic Passage using the SDE. Therefore, this study aimed at achieving the following objectives: (1) Analyzing the geoinformation of the Arctic Passage and classifying it into different perspectives, including geopolitics, geoeconomics, geo-military, and geoculture. The characteristics are extracted to form relevant attribute tables with the GIS tools. The information related to the geographical location is also extracted to connect with the attributes.
(2) Determining the spatial match method based on the information category and attribute characteristics, according to the application of geoinformation in the form of point, line, and surface. The attribute information concerning the Arctic navigation channel, such as economics, politics, culture, military, and laws and regulations, is linked and matched with the spatial information to provide a basis for conducting the spatial analysis of the geopolitical environmental information of the Arctic Passage. (3) According to the qualitative description of the geopolitical environmental factors of the Arctic Passage, the study extracted the quantifiable indicators for quantitative analysis to establish the relationship with geospatial mapping. (4) Realizing spatial statistical analysis in terms of the specific research project and its requirements. The extraction of indicators is based on the related elements in the geoinformation of the Arctic Passage, such as geopolitics, geoeconomics, geo-military, geoculture, and laws and regulations, and the application demand is to perform an SDE spatial statistical analysis toward the Arctic and Northeast Asia. According to the geoinformation of the Arctic, military expenditure and Strength of Legal Rights Index stretch toward Russia and Canada, while government effectiveness, GDP growth, and HDI contract toward Nordic countries, the United States, and Canada. Based on the analysis of the foreign trade index of the Arctic and Northeast Asia, the opening of the Arctic Passage plays a significant role in motivating the economy of countries in Northern Europe and Northeast Asia, especially in China and Japan. This research is helpful for those who care about the Arctic Passage and for decision-makers in making a comprehensive judgment on governance related to the sustainable development of the navigation channels in the Arctic. Supporting information S1 Table. Dataset records of representative indicators in geopolitical environment of the Arctic Passage used for spatial pattern analysis. (XLSX) S2 Table. Dataset records of the import and export indicators were used to assess the spatial-temporal evolution. (XLSX) Resources of the People's Republic of China) for providing information on the geoculture of the Arctic Passage. The authors also would like to express their gratitude to the editors and the anonymous reviewers for their comments and suggestions.
Quantum geometry of the universal hypermultiplet The universal hypermultiplet moduli space metric in the type-IIA superstring theory compactified on a Calabi-Yau threefold is related to integrable systems. The instanton corrections in four dimensions arise due to multiple wrapping of BPS membranes and fivebranes around certain (supersymmetric) cycles of Calabi-Yau. The exact (non-perturbative) metrics can be calculated in the special cases of (i) the D-instantons (or the wrapped D2-branes) in the absence of fivebranes, and (ii) the fivebrane instantons with vanishing charges, in the absence of D-instantons. The solutions of the first type are governed by the three-dimensional Toda equation, whereas the solutions of the second type are governed by the particular Painleve VI equation. Introduction Non-perturbative contributions to the effective supergravity theory, originating from the type IIA string compactification on a Calabi-Yau (CY) threefold Y, are known to be due to the solitonic five-branes wrapped about the entire CY space and the supermembranes (D2branes) wrapped about special Lagrangian (supersymmetric) three-cycles C 3 of Y [1]. The supersymmetric cycles minimize volume in their homology class, while the corresponding wrapped brane configurations lead to the BPS states. Being solitonic (BPS) classical solutions to the higher dimensional (Euclidean) equations of motion, these wrapped branes are localized in the uncompactified (four) dimensions and thus can be identified with 4d instantons. The instanton actions are essentially given by the volumes of the cycles on which the branes are wrapped. The compactification of the type-IIA superstring theory on Y gives rise to the fourdimensional (4d) N=2 superstrings whose Low-Energy Effective Action (LEEA) is given by the 4d, N=2 supergravity coupled to N=2 vector supermultiplets and hypermultiplets. The hypermultiplet LEEA is most naturally described by the Non-Linear Sigma-Model (NLSM), whose scalar fields parametrize the quaternionic target space M H [2]. The instanton corrections to the LEEA due to the wrapped fivebranes and membranes can be easily identified and distinguished from each other in the semi-classical limit, since the fivebrane instanton corrections are organized by powers of e −1/g 2 string , whereas the membrane instanton corrections are given by powers of e −1/g string , where g string is the type-IIA superstring coupling constant [3]. The vacuum expectation value of the four-dimensional dilaton field φ in the compactified type-IIA superstring is simply related to the CY volume V CY in M-theory, V CY = e −2 φ , so that the type-IIA superstring loop expansion amounts to the derivative expansion of the M-theory action [4]. Any CY compactification has the co-called Universal Hypermultiplet (UH) containing a dilaton, an axion, a complex RR-type pseudo-scalar and a Dirac dilatino. The target space of the universal hypermultiplet NLSM has to be an Einstein space with the (Anti)Self-Dual (ASD) Weyl tensor [2]. We restrict ourselves to a calculation of the instanton corrections to the universal hypermultiplet NLSM metric by analyzing generic quaternionic deformations of the classical UH metric. We use the simple fact that the (anti)self-dual Weyl tensor already implies the integrable system of partial differential equations on the components of the UH moduli space metric. Additional simplifications arise due to the Einstein condition and the physically motivated isometries. 
The exact UH metric is supposed to be regular and complete (cf. Seiberg-Witten theory -see, e.g., ref. [5] for a review). UH metric in string perturbation theory The LEEA of (tree) type-IIA superstrings in ten dimensions is given by the IIA supergravity. The universal (UH) sector of the 10d type-IIA supergravity compactified down to four dimensions is obtained by using the following Ansatz for the 10d metric: while keeping only SU(3) singlets in the internal CY indices and ignoring all CY complex moduli. In eq. (1) φ(x) stands for the 4d dilaton, g µν (x) is the spacetime metric in four uncompactified dimensions, µ, ν = 0, 1, 2, 3, and ds 2 CY is the (Kähler and Ricci-flat) metric of the internal CY threefold Y in complex coordinates, where i, j = 1, 2, 3. By definition, the CY threefold Y possesses the (1, 1) Kähler form J and the holomorphic (3, 0) form Ω. The universal hypermultipet (UH) unites the dilaton φ, the axion D coming from dualizing the three-form field strength H 3 = dB 2 of the NS-NS two-form B 2 in 4d, and the complex scalar C representing the RR three-form When using a flat (or rigid) CY with this yields the (Ferrara-Sabharwal) NLSM action in 4d [6], where H 3 has been traded for the pseudoscalar D via the Legendre transform. The perturbative (one-loop) string corrections to the UH metric originate from the (Riemann) 4 terms in M-theory compactified on a CY three-fold Y [4]. These quantum corrections are known to be proportional to the CY Euler number χ = 2 (h 1,1 − h 1,2 ). In fact, the corrected metric is related to the classical UH metric by a local field redefinition [4], so that the local UH geometry is unchanged in superstring perturbation theory. D-instantons and UH metric The classical (Ferrara-Sabharwal) NLSM metric describes the symmetric quaternionic space SU(2, 1)/SU(2) × U(1). In particular, the U(1) subgroup of the SU(2) symmetry is given by the duality rotations U C (1) of the complex R-R pseudo-scalar C, These duality rotations are believed to be exact in quantum theory [1], as we assume too. As regards generic four-dimensional quaternionic manifolds (relevant for UH), they all have Einstein-Weyl geometry of negative scalar curvature [2], where W abcd is the Weyl tensor and R ab is the Ricci tensor for the metric g ab . When using the Ansatz [7] for a generic quaternionic metric with an abelian isometry, it is straightforward to prove that the restrictions (6) on the metric (7) precisely amount to the 3d Toda equation The second potential P of eq. (7) is then given by [7] whereas the remaining one-form Θ 1 obeys the linear equation [7] In terms of the complex coordinate ζ = x + iy, the 3d Toda equation (8) takes the form 4u ζζ + (e u ) ωω = 0 . Separable solutions to the 3d Toda equation, having the form are easily found to be [8] where (α, b, c) are all constants. Eq. (13) automatically possesses the rigid U C (1) symmetry with respect to the duality rotations ζ → e iα ζ of the complex RR-field ζ. As was demonstrated in ref. [9], the BPS condition on the fivebrane instanton solution with the vanishing charges defines a gradient flow in the hypermultiplet moduli space. The flow implies the SU(2) isometry of the UH metric since the non-degenerate action of this isometry in the four-dimensional UH moduli space gives rise to the well defined threedimensional orbits that can be parametrized by the 'radial' coordinate to be identified with the flow parameter. Let's consider a generic SU(2)-invariant metric in four Euclidean dimensions. 
In the Bianchi IX formalism, where the SU(2) symmetry is manifest, the general Ansatz for such metrics reads [10] in terms of the su(2) (left)-invariant (Cartan) one-forms σ i and the radial coordinate t. Being applied to the metric (15), the ASD Weyl condition gives rise to a (Halphen) system of Ordinary Differential Equations (ODE) [10,11], where the dots denote differentiation with respect to t, and the functions A i (t) are defined by the auxiliary ODE system, (17) The Halphen system (16) has a long history. Perhaps, its most natural (manifestly integrable) derivation is provided via a reduction of the SL(2, C) anti-self-dual Yang-Mills equations from four Euclidean dimensions to one. The Painlevé VI equation is known to be behind the ASD-Weyl geometries having the SU(2) symmetry [10,11]. In fact, all quaternionic metrics with the SU(2) symmetry are governed by the particular Painlevé VI equation: where y = y(x), and the primes denote differentiation with respect to x [11]. The equivalence between eqs. (16) and (18) is well known to mathematicians [10,11]. An exact solution to the Painlevé VI equation (18), which leads to a regular (and complete) quaternionic metric (15), is unique [11]. The regular solution can be written down in terms of the standard theta-functions ϑ α (z|τ ), where α = 1, 2, 3, 4, and the arguments are related as z = 1 2 (τ − k), where k is an arbitrary (real and positive) parameter. The variable τ is related to the variable x of eq. (18) via the relation where the value of the theta-function variable z is explicitly indicated, as usual. The explicit solution to eq. (18) reads [12] y(x) = ϑ ′′′ 1 (0) 3π 2 ϑ 4 4 (0)ϑ ′ 1 (0) . (20) The parameter k > 0 describes the monodromy of this solution around its essential singularities (branch points) x = 0, 1, ∞. This (non-abelian) monodromy is generated by the matrices (with the purely imaginary eigenvalues ±i) The function (20) is meromorphic outside its essential singularities at x = 0, 1, ∞, while is also has simple poles atx 1 ,x 2 , . . ., wherex n ∈ (x n , x n+1 ) and x n = x(ik/(2n − 1)) for each positive integer n. Accordingly, the metric is well-defined (complete) for x ∈ (x n , x n+1 ], i.e. inside the unit ball in C 2 with the origin at x = x n+1 and the boundary at x =x n [11]. Near the boundary the metric has the following asymptotical behaviour [13]: As is clear from eq. (22), the real parameter k can be identified with the five-brane instanton action that is proportional to the CY volume and 1/g 2 string as well. The semiclassical regime thus arises near the boundary x → 1 − at k → +∞. In this limit, one gets back the Ferrara-Sabharwal metric out of that in eq. (22) after rescaling (1 − x) → 2 6 e πk (1 − x) and redefining x = r 2 . A few comments are in order. The very notions of a 'wrapped brane', an 'instanton' and a 'dilaton' are essentially semiclassical, and they do not exist non-perturbatively. We consider the full UH theory as the NLSM, i.e. modulo field reparametrizations (or diffeomorphisms in the NLSM target space). The physical interpretation of the exact quaternionic solutions to the UH metric is, however, possible in the semiclassical regime. Hence, first, we identify the semilasssical region and, second, we rewrite a given exact solution as a sum of the known classical solution and the exponentially small corrections with respect to the well defined real parameter (or modulus). 
Those corrections are finally identified with the instanton contributions, whose origin (due to the wrapped BPS branes) we already know in the context of the CY compactified type-IIA superstrings. The supersymmetric 3-cycles C 3 are defined by two conditions: (i) the pullback of the CY Kähler form J on C 3 should vanish, J| C 3 = 0, and (ii) the pullback of the imaginary part of the holomorphic CY 3-form Ω should vanish too, Im Ω| C 3 = 0 [1]. In the terminology of ref. [14], the supersymmetric 3-cycles C 3 are of the A-type, whereas the Calabi-Yau threefold itself is of the B-type. In our case, the only relevant modulus of a wrapped 5-brane is its volume, i.e. the CY 'size' parameter (a Kähler moduli). The semiclassical description can be valid only for large CY volumes. In the opposite limit of small CY volumes, an exact N=2 superconformal field theory (Landau-Ginzburg) description can apply [14]. Mirror symmetry may allow us to relate these two different descriptions. The only relevant parameter tanh 2 (πk/2) in eq. (23) represents the central charge (or the conformal anomaly) of the 2d conformal field theory on the boundary. Our results are, therefore, consistent with the holographic principle [15]. I would like to thank Klaus Behrndt, Wolfgang Lerche, Nick Warner and Bernard de Wit for discussions. This work is supported in part by the 'Deutsche Forschungsgemeinschaft'.
The Noncommutative Chern-Connes Character of the Locally Compact Quantum Normalizer of SU(1,1) in SL(2,C) We observe that the von Neumann envelope of the quantum algebra of functions on the normalizer of thegroup $\SU(1,1)\cong \SL(2,\mathbb R)$ in $\SL(2,\mathbb C)$ via deformation quantization contains the von Neumann algebraic quantum normalizer of $\SU(1,1)$ in the frame work of Waronowicz-Korogodsky. We then use the technique of reduction to the maximal subgroup to compute the K-theory, the periodic cyclic homology and the corresponding Chern-Connes character. Introduction It was remarked [KK] that among the short list of very few well-studied locally compact quantum groups: quantum E(2), quantum "ax+b", quantum "az+b", quantum q SU(1, 1), the quantum group q SU(1, 1) plays an important role. The representation theory of q SU(1, 1) was well-treated and fully described by I. M. Burban and A. U. Klimyk [BK], see also [KS]. In our previous works [DKT1], [DKT2] and [DK] we developed a method of computation of the K-groups, periodic cyclic homology groups and the corresponding noncommutative Chern-Connes characters as homomorphisms between the two theories. Our method is based on the following ingredients: 2. Reduce the computation to the case of maximal compact subgroups. This method was applied in [DK] for the quantized algebras of functions on coadjoint orbits of the Lie groups like "ax+b", "az+b" and SL(2, R) which are obtained from the deformation quantization of the algebras of functions on coadjoint orbits. In this paper we compute the K-theory groups, the periodic cyclic homology groups and the Chern-Connes character between them for the quantum groups q SU(1, 1). We show that the K-theory and cyclic theory in this case are isomorphic to the corresponding ones for the torus T = S 1 and that the noncommutative Chern-Connes character is equivalent to the ordinary (cohomological) Chern character in the ordinary case of torus T = S 1 . In order to make clear the ideas and situation, we draw in the section 1 a corollary of the method of computation of K-theory, cyclic theory and noncommutative Connes-Chern character for the group C*-algebra C * (SU(1, 1)). In Section 2 we prove that the deformation-quantized algebra SU(1, 1) of smooth function with compact support can be included in the von Neumann algebraic quantum group W * q ( SU(1, 1)) as some dense subalgebra. Section 3 and Section 4 are devoted to reduction of the computation to the compact Lie group case and then reduction to the maximal compact subgroups. 1 Noncommutative Chern-Connes character of the group C*-algebra C * (SU(1, 1)) Let us first recall that the normalizer of SU(1, 1) in SL(2, C) is the subgroup consisting of 2 × 2 complex matrices X ∈ Mat 2 (C) such that X * UX = ±U, The following result is an easy consequence of the main theorem of [DK] and [DKT1]. Theorem 1.1 The K-theory an the cyclic theory for C * (SU(1, 1)) and the corresponding W-equivariant theories for C(T) are isomorphic, i.e. Proof. Following the main theorem of V. Nistor [N] and our main theorem of [DK], the K-theory and the cyclic theory of C * (SU(1, 1)) are isomorphic with the same theories for the C*-algebra C * (SO(2)) of the maximal compact subgroup SO(2), which is also the maximal compact torus inside this maximal compact subgroup itself. The K-theory and the cyclic theory for SO(2) ≈ S 1 are isomorphic with the corresponding W -equivariant cohomological Ktheory and Z 2 -graded de Rham theory. 
The noncommutative Chern-Connes character is equivalent to the classical Chern character of torus S 1 , i.e. we have a commutative diagram with vertical isomorphisms and the bottom isomorphism The Chern-Connes character is therefore an isomorphism. Proof. Let us denote x ij the matrix coefficients of the standard representation of SU(1, 1) in C 2 : Let X + , X − , K ± be the natural basis of g = su(1, 1) and ρ the standard representation of U h (su(1, 1)) given by Let ∆ be the product of U h (su (1, 1)), e. i. Then the quantized universal enveloping algebra U h (su(1, 1)) is isomorphic as C[[h]]-modules to U(su(1, 1)). The convolution product can be defined by f ⋆ g(x) := f ⊗ g(∆(x)), and therefore where by definition and [n] q := q n −q −n q−q −1 . It is well-known, see e.g. [SS] that This means that the matrix coefficients x ij of the standard representation ρ generated the space of all polynomial functions. Elements of the universal enveloping algebra U(su(1, 1)) can be considered also as polynomial functions over the dual space to the Lie algebra su(1, 1). Let us recall the Possoin structure on the dual space of the Lie algebra su(1, 1) = Lie SU(1, 1) : for all f, g ∈ C ∞ (su(1, 1)) their Possoin bracket is for all F ∈ g * = su(1, 1) * , where df, dg ∈ Hom(g * , R) ∼ = g. To this structure associates a star product * . Apply this construction for f = x ij , g = x kl We have Let us consider now the ordinary star product of functions on the Lie algebra su(1, 1). Denote again the standard representation by ρ : SU(1, 1) → Mat 2 (C) and the matrix coefficients satisfy the orthogonal rations, we have therefore the relation ρ(f * g) = ρ(f )ρ(g). Because the standard representation is faithful, we can deduce that Thus, the generators of von Neumann algebraic W * q (SU(1, 1)) are in a bijection with the functions x ij and the product structure are agreed. So the von Neumann envelop of C ∞ (SU(1, 1)) contains and therefore isomorphic to the W * q (SU(1, 1)). The theorem is therefore proven. Restriction to a maximal compact subgroup Now we use the technique of reduction to maximal compact subgroups developed in [N] and [DKT1]. Theorem 3.1 The K-theory and the cyclic theory for W * q ( SU(1, 1)) are isomorphic to the corresponding theories for C ∞ c (SO(2)), i.e. Noncommutative Chern-Connes character Let us finally draw back the corresponding computation results and in particular the noncommutative Chern-Connes character. Proof. The proof is a combination of Theorems 2.1 and 3.1 and the main result of [DKT2] The K-theory and the cyclic theory for SO(2) ≈ S 1 are isomorphic with the corresponding cohomological K-theory and Z 2 -graded de Rham theory. The Chern-Connes character is equivalent to the classical Chern character of torus S 1 , i.e. we have a commutative diagram with vertical isomorphisms and the bottom isomorphism K * (W * q ( SU(1, 1))) ch − −− → HP * (W * q ( SU(1, 1))) The Chern-Connes character is therefore an isomorphism. Acknowledgments The main part of the paper was realized during a stay of the author in Abdus Salam ICTP. The author would like to express sincere thanks to ICTP, and in particular Professor Le Dung Trang and Professor Aderemi O. Kuku, for invitation and for the provided excellent conditions of work. The deep thanks are addressed to professor E. Koelink for the useful remarks, and especially for the reference [KS].
Cortical oxygen consumption in mental arithmetic as a function of task difficulty: a near-infrared spectroscopy approach The present study investigated changes in cortical oxygenation during mental arithmetic using near-infrared spectroscopy (NIRS). Twenty-nine male volunteers were examined using a 52-channel continuous wave system for analyzing activity in prefrontal areas. With the help of a probabilistic mapping method, three regions of interest (ROIs) on each hemisphere were defined: The inferior frontal gyri (IFG), the middle frontal gyri (MFG), and the superior frontal gyri (SFG). Oxygenation as an indicator of functional brain activation was compared over the three ROI and two levels of arithmetic task difficulty (simple and complex additions). In contrast to most previous studies using fMRI or NIRS, in the present study arithmetic tasks were presented verbally in analogue to many daily life situations. With respect to task difficulty, more complex addition tasks led to higher oxygenation in all defined ROI except in the left IFG compared to simple addition tasks. When compared to the channel positions covering different gyri of the temporal lobe, the observed sensitivity to task complexity was found to be restricted to the specified ROIs. As to the comparison of ROIs, the highest oxygenation was found in the IFG, while MFG and SFG showed significantly less activation compared to IFG. The present cognitive-neuroscience approach demonstrated that NIRS is a suitable and highly feasible research tool for investigating and quantifying neural effects of increasing arithmetic task difficulty. INTRODUCTION Basic mental arithmetic bears a helping hand in many occasions of daily life. It is essential for time management, is a central aspect of mathematical achievement in school, but also supports us in many everyday life decisions as for instance in grocery stores. Many everyday situations involve encoding and manipulating of numerical information. Even in young children, mental arithmetic can be studied, for example, in the form of precursors of mathematical skills (e.g., counting, number sense; Gilmore et al., 2010), and the mastery of more and more complex mental arithmetic tasks in ontogeny provides a window for studying the constituents of these mental operations (Wynn, 1992;Geary, 1995;Lee et al., 2012;Van Der Ven et al., 2012). Thus, from a scientific point of view, mental arithmetic provides an ideal function for investigating fundamental cognitive processes such as retrieving information, execution of control processes, updating of information, and the like. In the present study, a cognitiveneuroscience approach is presented aiming at identifying and quantifying (in terms of blood oxygen consumption) neural correlates involved in the mastery of simple and more complex mental addition tasks. Addition is one of the core operations in mental arithmetic. According to Van Harskamp and Cipolotti (2001) the simple addition of two numbers entails three specific processes: (1) retrieval of mathematical facts from long-term memory (e.g., 7 + 8 = 15), (2) the execution of the arithmetic operation as represented by an arithmetic symbol (e.g., "+"), and (3) the implementation of supporting arithmetic procedures like carrying. Carrying refers to decomposing the 8 into 3 and 5, bridging to ten, maintaining the first digit in working memory (i.e., updating), and then adding the remaining addend for completing the calculation. 
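A worked example of the carrying procedure just described, written out step by step, may help; the decomposition path shown is only one of several strategies a solver might use, and the two-digit case is added here for illustration:

```latex
\begin{aligned}
7 + 8 &= 7 + (3 + 5) = (7 + 3) + 5 = 10 + 5 = 15 \\
27 + 38 &= 27 + (3 + 35) = (27 + 3) + 35 = 30 + 35 = 65
\end{aligned}
```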
Studies on the development of arithmetic skills documented that there is a shift from premature arithmetic strategies observed in young children (e.g., "counting-on" or "using-the-fingers" strategies) to more sophisticated strategies as retrieval-based techniques or the decomposition strategy (Flynn and Siegler, 2007;. The use of the latter mirrors age-related improvements in the efficiency of mathematical problem solving (Siegler, 2006;Lemaire and Callies, 2009). When studying adults, therefore, the predominant strategy for simple arithmetic (SA) tasks (e.g., 12 + 6) will consist of fast retrieval-based techniques, while more complex tasks with large addends (e.g., 27 + 38) will be solved through a coordinated mix of retrieval-based techniques, decomposition, and updating processes (Seitz and Schumann-Hengsteler, 2002). The complexity of arithmetic tasks is mainly determined by the number of the necessary additional cognitive processes. For example, several studies indicated the number of carry operations in an addition task to be highly correlated with the time needed to solve this task (Ashcraft and Faust, 1994;Ashcraft and Kirk, 2001;. Other studies showed inefficient carrying to be a central cause of errors in mental arithmetic (Fürst and Hitch, 2000). The main requirements for efficient carry operations are the ability to store interim results temporarily, the use of problemsolving skills, and the use of rule-based procedures. The resources necessary for these operations are absorbed primarily by working memory (Geary and Widaman, 1987). Complex arithmetic tasks require more working memory resources than SA tasks (Kong et al., 2005;Fehr et al., 2007) and are more strongly negatively affected by a secondary task, especially when the secondary tasks absorbs additional executive resources (Seitz and Schumann-Hengsteler, 2002). Traditional behavioral approaches to calculation skills, however, leave open the question concerning the relative importance of the different cognitive mechanisms involved in mental arithmetic. Neuroimaging studies may contribute to a better understanding of cognitive processes and brain mechanisms underlying mental arithmetic as will be outlined in the following paragraphs. In an often-cited study on mental arithmetic (Ischebeck et al., 2006), reduced working memory load was accompanied by decreased brain activation in inferior frontal areas as measured by functional magnetic resonance imaging (fMRI). Other neuroimaging studies indicated highly complex cerebral networks to be involved in arithmetic task performance including the prefrontal cortex (PFC), cingulate cortex, fusiform gyrus, insula, cerebellum and the parietal cortex (for a review see Arsalidou and Taylor, 2011). More specifically, there is some evidence for the notion that the cingulate gyrus may coordinate and integrate activity of multiple attentional systems (Peterson et al., 1999). The fusiform gyrus seems to be crucial for encoding object properties (Allison et al., 1994) and visual number form (Dehaene and Cohen, 1997). The insula is associated with the execution of responses (Huettel et al., 2001), error processing (Hester et al., 2004), and, thus, might act as a network hub during information processing (Sridharan et al., 2008). The parietal cortex plays a crucial role in verbal number processing, quantity representation, and attentional processes (Dehaene et al., 2003). 
The cerebellum (Stoodley and Schmahmann, 2009) and PFC (Owen et al., 2005), are involved in working memory and executive functioning. In a recent meta-analysis including 53 fMRI data sets on brain activation during mental arithmetic, Arsalidou and Taylor (2011) identified three distinct regions in the prefrontal cortex that contribute to performance on mental arithmetic. The inferior frontal gyri (IFG) were reported to be active during basic numerical tasks with only little storage requirements, while the middle frontal gyri (MFG) seemed to be involved in calculations entailing procedural steps like carrying. When the tasks contained multi-step problems or when computational strategies were required, activity in the superior frontal gyri (SFG) was observed. The primary goal of the present study was to examine whether differences in arithmetic task complexity are associated with different levels of oxygen consumption in IFG, MFG, and SFG. While most previous studies used fMRI, our study contributes to the growing number of studies using near-infrared spectroscopy (NIRS). NIRS allows to measure changes in oxygenated (O 2 Hb) and deoxygenated hemoglobin (HHb) in the cortical surface of the human brain while subjects perform cognitive tasks. It is widely accepted that increases in O 2 Hb and decreases in HHb indicate cortical activation (Strangman et al., 2002a). Numerous studies documented a high correspondence between NIRS and other functional brain imaging techniques such as fMRI (e.g., Strangman et al., 2002b;Huppert et al., 2006;Eggebrecht et al., 2012) and provided empirical evidence for the reliability and validity of NIRS data (e.g., Plichta et al., 2006a,b;Sato et al., 2006;Schecklmann et al., 2008). We performed a comprehensive literature search that resulted in four NIRS studies accentuating the role of the PFC during mental calculations (Tanida et al., 2004;Yang et al., 2009;Pfurtscheller et al., 2010;Power et al., 2010). While three of these studies (Tanida et al., 2004;Pfurtscheller et al., 2010;Power et al., 2010) did not systematically vary arithmetic task complexity, Yang et al. (2009) used only a small number of sampling. Most commonly, NIRS studies on mental arithmetic use visually presented tasks and, thus, do not necessarily generalize to many everyday-life situations where solving of verbally presented mental arithmetic tasks is required. Therefore, in analogue to many daily life situations, our participants listened to the arithmetic task, kept the information in mind, and gave the solution verbally. PARTICIPANTS Twenty-nine German speaking men ranging in age from 20 to 28 years (mean age ± standard deviation: 23.2 ± 2.5 years) participated in the present study. In order to avoid unwanted variance due to potential gender differences in the functional and structural neuroanatomy of mathematical cognition (Keller and Menon, 2009), we decided to only include men in the present study. According to self-reports, all participants were right-handed and had no actual or past neurological or psychiatric disorder. Written informed consent was obtained from each participant after detailed explanation about the study protocol and the NIRS recording was given. Participants were recruited by online or placard announcements and were rewarded with C20.00 after completion of the experiment. The study was approved by the Ethics Committee of the Faculty of Human Science of the University of Bern. Mental arithmetic task Apparatus and stimuli. 
Mental arithmetic tasks were arithmetic addition tasks in the number domain from 1 to 99 presented acoustically via loudspeakers at an intensity of 60 dB. Presentation of the tasks was controlled by E-Prime Version 2.0 experimental software (Psychology Software Tools Inc., Pittsburgh, PA, USA). Presentation of a single trial lasted about 1.6-2.6 s. Task procedure. The tasks were presented by a pre-recorded, standardized female voice through loudspeakers. The main reason for using a female voice to present the tasks was that in numerous studies women talkers were generally found to be more intelligible than men talkers (e.g., Bradlow et al., 1996;Markham and Hazan, 2004). Testing took place in a soundproof chamber. All participants were tested individually. During the testing session, the participant sat alone in the soundproof chamber, the experimenter was outside the chamber. Participant's responses were registered by the experimenter by means of an intercom system. The experimental task consisted of three conditions. In the SA condition each trial contained one two-digit and one single-digit addend (e.g., 34 + 8), while in the complex arithmetic (CA) condition two 2-digit addends were presented (e.g., 34 + 57) with the first addend larger than the second in half the trials and the second larger than the first in the other half. To ensure that the tasks involve computational effort, in both conditions the sum of the last digits of both addends exceeded 10 in all tasks. However, complex carrying-in terms of decomposing and reassembling single addends-was required in the CA condition only. In an active control condition, similar stimuli were presented as in the SA and CA conditions. The first addend, however, was exchanged for the letter "Y" (e.g., Y + 68) and the participant's task consisted of simply repeating the second addend to prevent any mental calculation while other processes, such as perceptual processing of the stimuli and verbal responses, were similar to the experimental conditions. The task was divided into 12 blocks with four blocks for each condition. Each block lasted 40 s followed by a 40-s rest period to allow for the hemodynamic response function to return to baseline. The order of blocks was counterbalanced across participants. Within each block, trials were presented in a fixed order. Participants' responses were given verbally. Immediately after the response, the experimenter pressed one of two designated keys referring to a correct or incorrect response logging accuracy and latency of responses. The next trial started immediately after response logging. Participants were not given the option to correct an answer and they did not receive feedback. Two sample tasks for each condition were given prior to the experiment proper to familiarize participants with the tasks. For each of the two experimental conditions, mean response time across all four blocks was computed for correct responses. In addition, the average number of correctly solved tasks per block and the mean hit rate across all blocks were calculated. Thus, hit rate was operationalized as the ratio of the number of correctly solved tasks to the total number of tasks presented within the SA and CA condition, respectively. Near-infrared spectroscopy NIRS measurements were conducted with an ETG-4000 Optical Topography System (Hitachi Medical Co., Japan) using a probe set consisting of 52 channels. A channel is defined as the region between one light emitter and one neighboring photo-detector. 
The 52 channels were divided into 17 emitters and 16 detectors building three rows with 11 optodes each (see Figure 1). Inter-optode distance was fixed at 30 mm resulting in measuring approximately 15-25 mm beneath the scalp. Two different wavelengths (695 ± 20 nm and 830 ± 20 nm) at a temporal resolution of 10 Hz were used. Changes of absorbed near-infrared FIGURE 1 | NIRS probe set with 52 channels. Emitters and detectors are represented by red and blue circles, respectively. The detector between channel positions 47 and 48 was placed over Fpz according to the 10-20 system. light were transformed into relative concentration changes of O 2 Hb and HHb by means of a modified Beer-Lambert law. The unit of hemoglobin concentration was mmol × mm. Brain activity was indicated by an increase of O 2 Hb as well as a decrease of HHb (Strangman et al., 2002a;Obrig and Villringer, 2003). The probe set was placed over frontal regions with its central optode in the lowest row fixed over Fpz, while both ends of the probe set were placed symmetrically toward T3 and T4 according to the international 10-20 system for electroencephalography (Jasper, 1958). To remove slow baseline drifts and high-frequency instrument noise in O 2 Hb and HHb, raw NIRS data were pre-processed. Using the software package ETG4000 V1.84eK (Hitachi Medical Co., Japan) a moving average filter with a time window of 5 s and a band pass filter with cut-off frequencies of 0.01 and 0.5 Hz were applied. To reduce additional spike-like noise in the continuous data (e.g., head motion artifacts) the signal was improved by applying a method based on the assumption that O 2 Hb and HHb are negatively correlated (Cui et al., 2010). This way, the data were also cleaned from potential artifacts due to the verbal responses given by the participants. Resulting data consisted of solely one parameter, i.e., a linear combination of O 2 Hb and HHb representing hemodynamic response. Larger values thereby indicate higher cortical activation. In a first step, the mentioned linear combination of O 2 Hb and HHb was averaged for each of the 40-s blocks. This step included the subtraction of baseline-activation from task-activation values in each block. In a second step, the mean hemodynamic response for each condition was calculated by aggregating the single values from the first step across the four blocks. Finally, the resulting means in the simple and complex condition were corrected for the activation in the passive control condition by the subtraction method. The resulting values were considered indices of pure calculation-related cortical activation and were tested against 0 (t-tests as proof of activation). To avoid the risk of Type I error due to simultaneous testing, a false discovery rate approach (FDR; Singh and Dan, 2006) was applied. The FDR resulted in p and t values for SA, CA, and the comparison between SA and CA, respectively. In a second step, regions of interest (ROIs) were defined and corresponding channel positions were aggregated (see Results for more details). To estimate accordance between channel positions and cortical topography and to make the data comparable with results provided by fMRI studies, a virtual registration procedure was used (Tsuzuki et al., 2007). This method utilizes structural information from an anatomical database (Okamoto et al., 2004;Jurcak et al., 2005) to provide estimates of the channel positions in a 3D reference frame (Montreal Neurological Institute coordinate system, MNI;Collins et al., 1994). 
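For readers who want to prototype a comparable analysis, the following is a minimal sketch (not the pipeline of the ETG-4000 software used here) of the pre-processing and activation-testing steps described above: band-pass filtering, the correlation-based combination of O2Hb and HHb, block averaging with baseline subtraction, correction by the control condition, and channel-wise testing against zero with a Benjamini-Hochberg procedure as one common FDR variant. The 5-s baseline window, the block-onset bookkeeping, and all variable names are illustrative assumptions rather than parameters reported in the paper.

```python
import numpy as np
from scipy.signal import butter, filtfilt
from scipy.stats import ttest_1samp

FS = 10.0          # ETG-4000 sampling rate (Hz)
BLOCK_S = 40       # task block duration (s)
BASELINE_S = 5     # assumed pre-block baseline window (s); not specified in the paper

def band_pass(x, low=0.01, high=0.5, fs=FS, order=3):
    """Zero-phase band-pass filter removing slow drifts and high-frequency noise."""
    b, a = butter(order, [low / (fs / 2), high / (fs / 2)], btype="band")
    return filtfilt(b, a, x, axis=0)

def cbsi(o2hb, hhb):
    """Correlation-based signal improvement (in the spirit of Cui et al., 2010):
    combine O2Hb and HHb, assumed to be anti-correlated, into one activation signal."""
    alpha = np.std(o2hb, axis=0) / np.std(hhb, axis=0)
    return 0.5 * (o2hb - alpha * hhb)

def block_activation(signal, onsets_s, fs=FS):
    """Mean task-block amplitude minus the mean of the preceding baseline window,
    averaged across blocks; returns one value per channel."""
    vals = []
    for onset in onsets_s:
        i0 = int(onset * fs)
        base = signal[i0 - int(BASELINE_S * fs):i0].mean(axis=0)
        task = signal[i0:i0 + int(BLOCK_S * fs)].mean(axis=0)
        vals.append(task - base)
    return np.mean(vals, axis=0)

def fdr_bh(pvals, q=0.05):
    """Benjamini-Hochberg procedure over the 52 channel-wise p values."""
    p = np.asarray(pvals)
    order = np.argsort(p)
    thresh = q * np.arange(1, p.size + 1) / p.size
    below = p[order] <= thresh
    k = below.nonzero()[0].max() + 1 if below.any() else 0
    significant = np.zeros(p.size, dtype=bool)
    significant[order[:k]] = True
    return significant

# o2hb, hhb: (samples x 52 channels) arrays for one participant;
# onsets_task / onsets_ctrl: hypothetical block onsets in seconds.
def condition_activation(o2hb, hhb, onsets_task, onsets_ctrl):
    sig = cbsi(band_pass(o2hb), band_pass(hhb))
    return block_activation(sig, onsets_task) - block_activation(sig, onsets_ctrl)

# Stacking condition_activation over participants gives a (subjects x channels)
# matrix that can be tested against zero, channel by channel:
#   t, p = ttest_1samp(activation_matrix, popmean=0.0, axis=0)
#   active_channels = fdr_bh(p)
```

Estimated channel coordinates, in contrast, still come from the virtual registration procedure described above, which maps the optode arrangement onto the MNI reference frame.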
This procedure also allows the estimation of spatial uncertainty due to intersubject variability of the channel positions. Thus, for each channel position the corresponding MNI-space coordinates with an estimated error was calculated (see Table 1). BEHAVIORAL DATA Due to higher task difficulty, participants needed significantly more time to correctly solve a single arithmetic task in the CA than in the SA condition [t (28) = −12.1; p < 0.001; d = −3.18]. Mean response times (± SEM) were 4270 ± 156 ms for the SA and 7244 ± 353 ms for the CA condition. Faster response times in the SA compared to the CA condition implied that participants were presented with a larger number of trials per block in the SA (9.91 ± 0.31) than in the CA (6.23 ± 0.31) condition [t (28) = 20.5; p < 0.001; d = 5.38]. Average number of correctly solved trials per block was 9.28 ± 0.35 and 4.89 ± 0.32 for the SA and CA condition, respectively. As to accuracy of performance, the hit rate of 0.93 ± 0.01 observed under the SA condition was reliably higher than the hit rate of 0.77 ± 0.03 observed under the CA condition [t (28) = 6.8; p < 0.001; d = 1.79]. Altogether, these results clearly indicate that our experimental manipulation of task complexity was successful. NIRS DATA Both conditions induced a significant task-related hemodynamic response. After correcting for simultaneous testing with the means of the FDR method (Singh and Dan, 2006), 43 channel positions yielded statistical significance in the SA condition (see Figure 2A) indicating cortical activation. Channel Positions 1, 2, 3, 4, 5, 6, 10, 16, and 17 failed to reach the 5% level of statistical significance. In the CA condition, the number of significantly active channel positions increased to 49 with only Channel Positions 5, 6, and 16 showing no significant activation (see Figure 2B). A direct comparison of the SA and the CA conditions indicated major differences in cortical activation to be located in the anterior prefrontal area. Reliably higher cortical activation was found in the CA compared to the SA condition (Channel Positions 12,13,14,22,24,25,35,36,46,and 47, in the left hemisphere, and Channel Positions 18, 28, 38, 39, 48, and 49, in the right hemisphere, respectively; see Figure 2C). For further statistical analyses, specific ROIs were defined on the basis of previous findings. In a recent meta-analysis (Arsalidou and Taylor, 2011), the contribution of the prefrontal cortex during number and calculation tasks could be assigned to three distinct cortical regions. While the IFG were involved in the processing of simple numerical tasks, the MFG seemed to be active during cognitive procedural steps like carrying, and the SFG played a significant role in generating strategies during multi-step problems. Proceeding from these findings, all channel positions, likely to cover these prefrontal regions, were considered potential ROIs (see Figure 3). Activity in the IFG was measured by Channel Positions 19, 29, 40, and 50 (left hemisphere) as well as Channel Positions 13, 24, 34, and 45 (right hemisphere). The Data was analyzed by means of a repeated measures analysis of variance (ANOVA) including three within-subject factors. These were (1) Hemisphere (two levels: right and left), (2) Task Complexity (SA and CA) and (3) ROI (IFG, MFG, and SFG). Greenhouse-Geisser corrected p-values are reported where appropriate (for all main effects of ROI and the interactions with ROI) to protect against violations of sphericity (Geisser and Greenhouse, 1958). 
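As a companion sketch, the three-factor repeated-measures design just described (Hemisphere × Task Complexity × ROI) could be run on aggregated ROI means with statsmodels; note that this routine reports sphericity-uncorrected p values, so the Greenhouse-Geisser correction used in the paper would have to be applied separately. Column names and the file name are placeholders, not artifacts of the original analysis.

```python
import pandas as pd
from statsmodels.stats.anova import AnovaRM

# Long-format table with one row per participant x hemisphere x complexity x ROI.
# 'hb' holds the control-corrected hemodynamic response (O2Hb/HHb combination)
# averaged over the channels of that ROI.  All names below are illustrative.
df = pd.read_csv("roi_means_long.csv")
# expected columns: subject, hemisphere (left/right), complexity (SA/CA),
#                   roi (IFG/MFG/SFG), hb

model = AnovaRM(
    data=df,
    depvar="hb",
    subject="subject",
    within=["hemisphere", "complexity", "roi"],
)
result = model.fit()
print(result)   # F values and uncorrected p values for main effects and interactions
```

Post-hoc contrasts such as the Scheffé tests reported below, and the Greenhouse-Geisser epsilon itself, would require additional code or a dedicated package.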
Analysis of variance revealed a significant main effect of Task Complexity [F (1, 28) = 9.22; p < 0.01; η 2 p = 0.25] with the CA condition evoking a higher hemodynamic response than the SA condition (see Figure 4). As can also be seen from Figure 4, cortical activation decreased from inferior to middle and superior gyri [F (1.15, 32.09 ROI and Task Complexity [F (1.05,29.43) = 0.20; p = 0.67; η 2 p = 0.01] did not reach statistical significance. The interaction between Task Complexity and Hemisphere yielded statistical significance [F (1, 28) = 4.53; p < 0.05; η 2 p = 0.14]. The effects mentioned so far, however, were modified by a significant three-way interaction [F (1.15, 32.13) = 5.22; p < 0.05; η 2 p = 0.16]. To further analyze this interaction a, post-hoc Scheffé test was applied. As to task complexity, the hemodynamic response increased significantly from the SA to the CA condition in all ROIs [right IFG (p < 0.001), left and right MFG (p < 0.001), left SFG (p < 0.05), right SFG (p < 0.01)] except for the left IFG (p = 0.47). With regard to the comparison of the three ROIs, IFG showed a more pronounced hemodynamic response in both conditions as well as both hemispheres [p < 0.001; for the SA condition, left hemisphere (p < 0.01)] compared to MFG. In the SA condition, the left MFG was more active than the left SFG (p < 0.001), all other contrasts between MFG and SFG did not yield statistical significance. For none of the three ROIs cortical activation differed between right and left hemisphere-neither in the SA condition [IFG (p = 0.84), MFG (p = 0.97), SFG (p = 1.0)] nor in the CA condition [IFG (p = 0.47), MFG (p = 0.97), SFG (p = 1.0)]. FIGURE 3 | Defined regions of interest (ROIs As noted above, most of the channel positions outside our ROI also yielded significant changes in hemodynamic response during task presentation. Most importantly, however, the majority of these channel positions did not differ in cortical activation between the SA and the CA condition. The only channel positions being sensitive for task complexity outside the defined ROI represented Channel Positions 12 and 22. In order to provide a control region, the channel positions covering different gyri of the temporal lobe were pooled (right hemisphere: 32, 33, 43, and 44; left hemisphere: 41, 42, 51, and 52). After aggregation, these channel positions did not show any significant differences in hemodynamic response as a function of task complexity [right hemisphere: t (28) = 1.65; p = 0.11; left hemisphere: t (28) = 0.99; p = 0.33]. In other words, the sensitivity to arithmetic task complexity appeared to be restricted to the specified ROIs. DISCUSSION In the present study, changes in prefrontal cortical blood oxygenation during mental arithmetic were quantified by means of NIRS. To our knowledge, this was the first NIRS study on mathematical processing using verbally presented arithmetic tasks that additionally varied in task complexity. The implemented addition tasks required mental effort and the coordination of different mental operations (in a certain sequence), including different arithmetic strategies (carrying, retrieval) as solutions could not be directly retrieved from memory. While the SA tasks required only a simple carry operation with one single digit addend and could be solved without temporarily storing the intermediate total in working memory, CA tasks demanded more computational effort since addends consisted of two digit numbers. 
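To make the contrast between the two task types concrete, the short sketch below generates addition items satisfying the constraint used here (the unit digits of the two addends sum to more than 10, so a carry is always required) and counts the carry operations in the column-wise solution. It is an illustration of the task logic only, not the E-Prime stimulus script used in the experiment, and the digit ranges are assumptions.

```python
import random

def needs_carry(a, b):
    """True if the unit digits of a and b sum to more than 10 (the item constraint)."""
    return (a % 10) + (b % 10) > 10

def count_carries(a, b):
    """Number of carry operations in the column-wise addition of a and b."""
    carries, carry = 0, 0
    while a or b or carry:
        s = a % 10 + b % 10 + carry
        carry = 1 if s >= 10 else 0
        carries += carry
        a //= 10
        b //= 10
    return carries

def make_item(complex_item):
    """Draw one SA item (two-digit + single-digit) or CA item (two two-digit addends)
    that satisfies the carry constraint; digit ranges are illustrative."""
    while True:
        a = random.randint(10, 99)
        b = random.randint(10, 99) if complex_item else random.randint(2, 9)
        if needs_carry(a, b):
            return a, b

sa_item = make_item(False)   # e.g., (34, 8): one carry, interim result fits in one digit
ca_item = make_item(True)    # e.g., (34, 57): carrying plus storage of a two-digit interim sum
print(sa_item, count_carries(*sa_item))
print(ca_item, count_carries(*ca_item))
```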
Consistent with numerous studies investigating mental arithmetic (Ashcraft and Faust, 1994;Fürst and Hitch, 2000;Ashcraft and Kirk, 2001;, our tasks requiring the more complex carry operation led to more calculation errors compared to the SA tasks, and participants needed more time to solve the CA compared to the SA tasks. The higher number of calculation errors and the longer response times in the CA than in the SA condition indicated that our manipulation of task complexity had been successful. As to overall blood oxygen consumption, both mastery of the simple and the complex arithmetic problems was associated with reliable increases of brain activation as reflected by an increase of oxygen consumption from the control to the two experimental conditions. Furthermore, activation of mainly anterior prefrontal areas was higher in the CA compared to the SA condition indicating that NIRS sensitively displays changes in invested mental effort due to increased arithmetical task difficulty. Although in the SA condition more trials were presented than in the CA condition, oxygen consumption was higher in the CA compared to the SA condition. Against the background of Arsalidou and Taylor's (2011) finding that IFG, MFG, and SFG serve different arithmetical functions, we proceeded from their assumptions and defined corresponding ROIs of the prefrontal cortex. In both task conditions and both hemispheres, the highest activation of all three ROIs was observed in the IFG emphasizing their important role in mental arithmetic. In their meta-analytic fMRI study, Arsalidou and Taylor (2011) proposed inferior prefrontal regions to play a crucial role in the processing of information requiring (rulelike) simple cognitive operations. Also in previous studies, IFG has been linked to task difficulty (Zhou et al., 2007), working memory, and attention (Ischebeck et al., 2009). Our data are partially consistent with Zhou et al.'s (2007) interpretation of IFG activation reflecting task difficulty as it increases from the SA to the CA condition (but see Kong et al., 2005) in the right IFG. However, post-hoc analyses indicated that task complexity did not influence activation of the left IFG suggesting task difficulty being somewhat lateralized to the right IFG. As to the MFG, our finding of increasing bilateral activation in MFG with increasing complexity of the arithmetic tasks is in line with previous findings (cf., Arsalidou and Taylor, 2011). Activity in MFG was lower than in IFG and virtually equal to activity in SFG. Most interestingly, only in the CA condition, the left MFG was significantly more active than the left SFG indicating a large increase in cortical activation from the SA to the CA condition. MFG corresponds roughly to the dorsolateral prefrontal cortex and is typically associated with working memory functions (Owen et al., 2005). These dorsolateral areas are involved when coordination of subprocesses and cognitive control becomes more and more important (Rypma et al., 1999). As outlined above, our CA tasks required more coordination and cognitive control than the SA tasks. This may account for the bilaterally increasing MFG activity from the SA to the CA condition. Compared to IFG and, at least in part, also to MFG, lower activation was measured in SFG. This lower activation might be attributed to the fact that channel positions covering more posterior parts of the SFG did not show any cortical activity at all (Channel Positions 5, 6, and 16). 
The activation of the SFG can largely be ascribed to further anterior parts of the SFG, namely to channel positions covering the anterior prefrontal cortex. The anterior prefrontal cortex, has been described as being active during goal-oriented coordination of different cognitive sub-operations (Ramnani and Owen, 2004). Because solving of arithmetical problems commonly requires more than one operation (Fürst and Hitch, 2000) and because more complex tasks yield more operations to be coordinated, the increase of activation in SFG from the SA to the CA condition is likely to reflect the higher task demands induced by the CA compared to the SA tasks. It should be noted that the present activation patterns are not identical across all channel positions of the probe set. For the channels outside the defined ROIs, complexity-related increase of brain activation could not be observed except for Channels 12 and 22 (see also Figure 2C). Consequently, when channel positions covering different structures of the temporal lobe were combined to a control region of sorts, these temporal areas of the probe set were not sensitive to task complexity. The sensitivity to task complexity, therefore, seems to be restricted to the specific ROIs defined (i.e., IFG, MFG, SFG). While Arsalidou and Taylor (2011) refer to a hemispheric asymmetry in parietal and prefrontal regions with addition being left lateralized, in the present study no indication of a lateralization effect could be observed, except for a higher sensitivity to task complexity for the right compared to the left IFG. When we compared single mean values of the hemodynamic response across hemispheres by means of indices of laterality as suggested by Binder et al. (1996), lateralization effects were found neither with SA nor with CA tasks. It is important to note, however, that our experimental design differed from the ones employed in previous studies in some important points. The verbal presentation of the arithmetic tasks in the present study could have led to increased cognitive load and, thus, hamper the comparison with previous studies which mainly used visually presented tasks. Furthermore, our participants had to additionally transform auditory information into a symbolic representation and to remember the initial addends during the entire calculation process. Because our design did not control for these processes, it might be possible that cortical activation due to "pure" calculation had been covered by such additional transformation and memory processes. Casasanto (2003) stated that left lateral prefrontal activation is linked to the verbalizability of non-verbal stimuli, whereas activation in right lateral prefrontal areas is related to the imageability of verbal stimuli. This imageability component might represent the very difference between the present study and previous work. At the same time, however, verbally presented mental arithmetic tasks bear the advantage of higher ecological validity. Compared to other brain imaging techniques, NIRS has a few limitations such as a relatively low spatial resolution as compared to fMRI or positron emission tomography. NIRS measurements are also restricted to surface cortical areas only. Nevertheless, it depends on the research question to what extent the latter constitutes, in fact, a disadvantage. 
Potential disadvantages of NIRS, however, may be outweighed by practical features such as the provision of many more research possibilities (with higher ecological validity), the low sensitivity to movement artifacts, and low procedural and maintenance costs. In sum, with NIRS as a research tool, the present study was able to corroborate the general significance of the prefrontal cortex for mastery of mental addition tasks. With the help of virtual registration methods, an approximate accordance between channel positions and cortical topography was achieved and activation patterns documented in the present approach were linked to findings of previous fMRI studies. More specifically, the increased necessity of cognitive control and the coordination of different sub-operations when addition tasks became more complex could be confirmed in terms of increased blood oxygen consumption in MFG and SFG that were found to be particularly sensitive to enhanced task difficulty. Thus, NIRS technology turned out to represent a highly feasible and non-invasive research method for investigating neural correlates of mental arithmetic. Based on the present findings, in future studies employing NIRS technology, preferential consideration should be given to the identification of different cognitive processes involved in the processing of SA and CA tasks. Furthermore, particular attention should be paid to the functional distinction between the frontal brain regions (IFG, MFG, and SFG) that were shown in Arsalidou and Taylor's (2011) meta-analyses of fMRI studies to be associated with different cognitive functions in mental arithmetic.
OsCRP1, a Ribonucleoprotein Gene, Regulates Chloroplast mRNA Stability That Confers Drought and Cold Tolerance Chloroplast ribonucleoproteins (cpRNPs) are nuclear-encoded and highly abundant proteins that are proposed to function in chloroplast RNA metabolism. However, the molecular mechanisms underlying the regulation of chloroplast RNAs involved in stress tolerance are poorly understood. Here, we demonstrate that CHLOROPLAST RNA-BINDING PROTEIN 1 (OsCRP1), a rice (Oryza sativa) cpRNP gene, is essential for stabilization of RNAs from the NAD(P)H dehydrogenase (NDH) complex, which in turn enhances drought and cold stress tolerance. An RNA-immunoprecipitation assay revealed that OsCRP1 is associated with a set of chloroplast RNAs. Transcript profiling indicated that the mRNA levels of genes from the NDH complex significantly increased in the OsCRP1 overexpressing compared to non-transgenic plants, whereas the pattern in OsCRP1 RNAi plants were opposite. Importantly, the OsCRP1 overexpressing plants showed a higher cyclic electron transport (CET) activity, which is essential for elevated levels of ATP for photosynthesis. Additionally, overexpression of OsCRP1 resulted in significantly enhanced drought and cold stress tolerance with higher ATP levels compared to wild type. Thus, our findings suggest that overexpression of OsCRP1 stabilizes a set of mRNAs from genes of the NDH complex involved in increasing CET activity and production of ATP, which consequently confers enhanced drought and cold tolerance. Introduction Members of the green plant lineage have chloroplasts with their own organellar genomes that have their evolutionary origins in endosymbiotic cyanobacteria. Proteins encoded by chloroplast genes play crucial roles in photosynthesis and in the expression of photosynthesis-related nuclear genes. Expression of chloroplast mRNAs is regulated at both the transcriptional and posttranscriptional levels [1][2][3], and during post-transcriptional regulation, numerous nucleus-encoded RNA-binding proteins (RBPs) act as a regulator of cleavage, splicing, editing, or stabilization of chloroplast RNAs [3,4]. For example, pentatricopeptide repeat (PPR) proteins, the most abundant protein family in plants, are well-characterized RBPs that mediate RNA editing through interaction with specific chloroplast RNA sequences [5][6][7][8]. The consensus RNP structure of five tobacco (Nicotiana sylvestris) cpRNPs (cp28, cp29A, cp29B, cp31, and cp33) has been solved and their binding affinities to RNA homopolymers, such as poly (G) and poly (U), have been determined. These studies suggest that cpRNPs have a key function in chloroplast RNA metabolism [10,11]. As previously reported for spinach (Spinacia oleracea) 28RNP, a tobacco cp28 and cp31 ortholog, cpRNPs confer correct 3 -end processing of chloroplast mRNAs such as psbA, rbcL, petD, and rps14 [12]. In Arabidopsis thaliana, in silico analysis indicated that the cpRNP protein family is composed of 10 members [13,14]. An A. thaliana null mutant, CP31A, was shown to have defects in RNA editing and to have a number of destabilized transcripts under normal growth conditions [15]. It has also been demonstrated that cpRNPs are required for activity of the NADH dehydrogenase-like (NDH) complex through stabilization of ndhF mRNA and editing of ndhF, ndhB, and ndhD mRNAs [15]. Interestingly, Kupsch et al. [16] found that CP31A and CP29A in A. thaliana are essential for cold stress tolerance through their stabilization of numerous chloroplast mRNAs. 
Moreover, it has been demonstrated that cpRNPs are highly regulated proteins that respond to various external and internal signals, including light and temperature, which affect both their expression levels and post-translational modification [16][17][18]. While a number experimental systems have been used to elucidate the molecular mechanisms and functions of cpRNPs, there are still several important crop species for which such information is absent, notably rice (Oryza sativa). In later diverging land plants, the light reactions of photosynthesis involve at least two routes through which light energy is converted into NADPH and ATP. Through the first route, ATP and NADPH are generated by electrons released from water to photosystem II (PSII) and photosystem I (PSI) via linear electron transport (LET) [19]. However, while LET generally produces sufficient amounts of NADPH, this is not the case for ATP [20][21][22]. In the second route, an electron can be recycled from either reduced ferredoxin or NADPH to plastoquinone and subsequently to the Cyt b 6 f complex. This cyclic electron transport (CET) requires only PSI photochemical reactions to produce ATP and does not involve the production of chloroplastic NADPH [19,23]. CET consists of two pathways: the PROTON GRADIENT REGULATION 5 (PGR5)/PGR-LIKE PHOTOSYNTHETIC PHENOTYPE 1 (PGRL1)-dependent pathway and the NDH complex-dependent pathway [24][25][26][27]. The former represents the major pathway under normal growth conditions, whereas many studies have shown that NDH-dependent CET is involved in protective or adaptive mechanisms in response to abiotic stresses, such as heat, high light, or drought. In rice, a crr6 mutant, which has a defect in the ndhK gene, shows growth defects under low temperature, low light and fluctuating-light stress conditions [22,28,29], and a tobacco ndhB mutant that is deficient in NDH-dependent CET has decreased relative leaf water content and net CO 2 assimilation under water stress conditions [30]. The salt-tolerant soybean (Glycine max) variety S111-9, which under normal conditions has high expression levels of ndhB and ndhH, shows higher CET activity and ATP accumulation than the salt-sensitive variety Melrose, suggesting a correlation between salt tolerance and NDH-dependent CET [31]. These studies are congruent with the idea that NDH-dependent CET is important for the adaptation of plants to abiotic stress conditions. In this current study, we investigated the significance of OsCRP1, a rice chloroplast ribonucleoprotein, in drought and cold stress tolerance. The OsCRP1 protein was found to have a broad range of binding affinities to chloroplast RNAs, and specifically to regulate NDH complex gene expression. Overexpression of OsCRP1 in rice resulted in increased CET activity and accumulation of ATP, whereas knock-down lines had lower activity under stress conditions. We also found that OsCRP1 overexpressing plants had enhanced drought and cold stress tolerance compared to non-transgenic (NT) plants, whereas the knock-down lines remained susceptible. Overall, these results suggest that overexpression of OsCRP1 confers improved drought and cold tolerance through modulation of NDH-dependent CET. OsCRP1 Is a Rice Nuclear-Encoded and Chloroplast Targeting Ribonucleoprotein Nuclear-encoded chloroplast ribonucleoproteins (cpRNPs) consist of a transit peptide (TP) and two RNA recognition motifs (RRM) that are involved in the interaction with RNA molecules ( Figure S1). The A. 
thaliana genome is predicted to encode 10 cpRNPs (Figure 1a), and we identified 8 cpRNP proteins encoded by the rice genome based on the conserved RRM protein sequence of AtCP31A, using SmartBLAST (http://blast.ncbi.nlm.nih.gov). In order to name the rice cpRNPs in accordance with published classification, a phylogenetic tree was generated using full length protein sequences from the 10 A. thaliana and 8 rice cpRNPs (Figure 1a). cpRNP protein sequences show a high degree of sequence conservation between dicots and monocots (Figure S1), and we named the rice cpRNPs OsCRP1 (Os09g0565200), OsCRP2 (Os08g557100), OsCRP3 (Os07g0158300), OsCRP4 (Os03g0376600), OsCRP5 (Os07g0631900), OsCRP6 (Os02g0815200), OsCRP7 (Os09g279500) and OsCRP8 (Os08g0117100) (Figure 1a). (a) Phylogenetic tree created using the neighbor-joining method in CLC sequence viewer using full-length amino acid sequences of the rice and Arabidopsis thaliana chloroplast ribonucleoproteins (cpRNPs). Bootstrap support (100 repetitions) is shown for each node. (b) Quantitative RT-PCR of OsCRP1 in various tissues and at different growth stages. (D, day after germination; L, leaf; R, root; S1, <1 cm in panicle length; BH, before heading; AH, after heading). OsUbi1 (AK121590) expression was used as an internal control, and values were plotted relative to the level of mRNA in the lowest-expressing stages (indicated by the asterisk). Data bars represent the mean ± SD of two biological replicates, each of which had three technical replicates (* p < 0.05). (c) Subcellular localization of OsCRP1 in rice protoplasts. Rice leaf protoplasts were transformed with two different constructs and observed using a confocal microscope. (d) Localization of OsCRP1 was confirmed by the observation of GFP fluorescence in leaves of one-week old OsCc1::OsCRP1-GFP transgenic rice plants. OsRbcS::TP1-GFP and OsCc1::GFP were controls for localization in chloroplasts and the cytoplasm, respectively. Scale bar, 10 µm. OsCRP1 was chosen for functional characterization since its transcripts were detected in all tissues from the various developmental stages. The OsCRP1 expression was particularly abundant in green tissues, including leaves and green flowers, while it remained low in roots at all developmental stages (Figure 1b). To determine the subcellular localization of OsCRP1, we expressed the whole protein or the transit peptide in rice protoplasts as a fusion with green fluorescent protein (OsCRP1-GFP or TP1-GFP) under the control of the 35S promoter (Figure S2). The GFP fluorescence of OsCRP1-GFP and TP1-GFP, resulting from transformation of the protoplasts with vectors pro35S::OsCRP1-GFP or pro35S::TP1-GFP, respectively, overlapped with the red chloroplast autofluorescence (Figure 1c).
To confirm OsCRP1 localization, we also generated transgenic rice plants expressing OsCRP1-GFP, GFP and TP1-GFP under the control of the OsCc1 (rice CYTOCHROME C1) and RbcS (rice small subunit of ribulose bisphosphate carboxylase/oxygenase) promoter, respectively (Figure S2). The OsCRP1-GFP under the control of the OsCc1 promoter (OsCc1::OsCRP1-GFP) showed uniform yet aggregated patterns of GFP fluorescence in chloroplasts (Figure 1d). This unique pattern of GFP fluorescence was different from those of the two control constructs, RbcS::TP1-GFP and OsCc1::GFP, which showed either GFP fluorescence evenly distributed in all chloroplasts or no GFP fluorescence within chloroplasts, respectively (Figure 1d). These data suggest that the OsCRP1-GFP is targeted to a sub-structure of chloroplasts, presumably the stroma. OsCRP1 Is Required for the Accumulation of Chloroplast mRNAs To determine whether the OsCRP1 protein was associated with chloroplast mRNA accumulation, we performed an RNA immunoprecipitation (RIP) assay. OsCRP1 levels in the leaves of transgenic rice lines transformed with OsCc1::OsCRP1-GFP were verified by Western blot analysis using an α-GFP antibody (Figure S3a). RNA-protein complexes in OsCc1::OsCRP1-GFP leaf extracts were precipitated using α-GFP antibodies, and OsCc1::GFP leaf extracts were used as a negative control. We then analyzed the RNA quantity of 23 plastid-encoded genes corresponding to four major chloroplast protein classes: ATP synthase (atp), photosystem I (psa), photosystem II (psb), and the NADH dehydrogenase complex (ndh), by quantitative real time (qRT)-PCR analysis. Most of the chloroplast mRNAs were enriched > 5-fold in extracts from the OsCc1::OsCRP1-GFP lines compared to the control (Figure 2a), indicating that OsCRP1 can bind to a broad range of chloroplast RNAs. To further elucidate the functions of OsCRP1, we generated overexpression (OsCc1::OsCRP1 OX ) and RNAi (OsCc1::OsCRP1 RNAi ) transgenic rice plants using the OsCc1 promoter, which is constitutively active throughout the plant [32] (Figure S2). Thirty independent transgenic lines were generated, and those that grew normally were selected for further analysis to eliminate the effects of somaclonal variation. Based on the expression levels of OsCRP1 in the transgenic plants, we chose three independent single-copy homozygous lines from each transgene type (OsCc1::OsCRP1 OX ; #7, 9, 12 and OsCc1::OsCRP1 RNAi ; #4, 6, 8) for further study (Figure S3b). The GFP-tagged transgenic rice plants were generated using the OsCc1::OsCRP1-GFP vector. Soluble protein and total RNA extractions and RNA-immunoprecipitation (RIP) assays were conducted with 2-week-old OsCc1::OsCRP1-GFP transgenic leaves. (a) Identification of OsCRP1 target chloroplast RNAs (cpRNAs) by RIP. cDNAs were synthesized using the immunoprecipitated RNAs and α-GFP antibodies, prior to quantitative RT-PCR.
All the values were normalized based on total input RNA per sample, and bars represent the mean ± SD of four repeats. (b) Relative expression levels of cpRNAs in total RNA samples from OsCc1::OsCRP1 OX , non-transgenic (NT) and OsCc1::OsCRP1 RNAi plants. qRT-PCR with cDNA from NT and transgenic leaves was performed using 23 chloroplast gene-specific primer sets. All the values were normalized to the internal OsUbi1 control gene, and data bars represent the mean ± SD of two biological replicates, each of which had three technical replicates. Significant differences from the control are indicated by asterisks (Student's t-test, * p < 0.05). To verify the OsCRP1 target genes, we performed an RNA-seq analysis with leaves from OsCc1::OsCRP1 OX (#7, 9, 12), non-transgenic control (NT), and OsCc1::OsCRP1 RNAi (#4, 6, 8) plants grown under normal conditions (Table S2). When we analyzed the mRNA levels of 23 chloroplast genes in these plants, we observed differences for almost all NDH complex genes in the transgenic plants compared to the control, with higher expression in OsCc1::OsCRP1 OX plants and lower expression in OsCc1::OsCRP1 RNAi plants. Their increased and decreased levels of expression in the OsCRP1 OX and the OsCRP1 RNAi leaves, respectively, were validated by qRT-PCR (Figure 2b). In summary, our results suggested that OsCRP1 directly binds to a set of cpRNAs, causing an increase in the mRNA stability of NDH complex genes. Down-Regulation of OsCRP1 Results in Chlorosis under Light Stress Conditions The NDH complex is known to catalyze electron transfer from the stromal pool of reductants to plastoquinone (PQ), which activates the cyclic electron transport (CET) under abiotic stress [30,[33][34][35][36]. We observed an increase in chlorophyll fluorescence after the offset of actinic light, which is caused by the NDH complex catalyzing a reduction of the PQ pool [25]. Moderate heat stress (e.g., 35-42 °C) can affect photosynthesis and cause a significant increase in CET [37], and so we exposed plants in a dark chamber to different temperatures (22 °C, 28 °C and 35 °C) before taking measurements (Figure 3a). Under normal conditions (22 °C and 28 °C), a similar increase in chlorophyll fluorescence was observed in all plants following illumination. After heat stress, the responsiveness of NDH-dependent CET under dark conditions was enhanced in OsCc1::OsCRP1 OX plants compared to NT plants. In contrast, OsCc1::OsCRP1 RNAi plants did not exhibit this characteristic rise in post-illumination fluorescence.
Plants were then transferred to light intensities of 240~250 µmol m −2 s −1 and phenotyped. All light measurements were made with a LI-250A Light Meter (LI-COR, Lincoln, NE, USA), and photos were obtained 2 weeks after treatment. The analysis was carried out with three biological replicates, each with three technical replicates. (c) SPAD values for OsCc1::OsCRP1 OX , NT and OsCc1::OsCRP1 RNAi leaves, representing the amount of chlorophyll per leaf. The values were measured for 10 leaves of three representative transgenic lines and NT plants using a Chlorophyll Meter SPAD-502Plus. Data bars represent the mean ± SD of two biological replicates, each of which had three technical replicates. Asterisks indicate significant differences compared with NT (* p < 0.05, One-way ANOVA). (d) ATP contents in leaves of the OsCc1::OsCRP1 OX , NT and OsCc1::OsCRP1 RNAi plants before and after light stress conditions. Ten plants were used for each line, and the middle portion of the second leaf from the top was taken for analysis. Data bars represent the mean ± SD of three biological replicates, each of which had two technical replicates. Asterisks indicate significant differences compared with NT (* p < 0.05, One-way ANOVA). It has been reported that strong light can cause severe irreversible photodamage, as evidenced by chlorosis in NDH-defective plants [33]. To confirm this phenomenon, OsCc1::OsCRP1 OX , NT, and OsCc1::OsCRP1 RNAi plants were grown for 2 weeks under chamber conditions of moderate light (170~180 µmol m −2 s −1 ) and then exposed to light stress conditions (240~250 µmol m −2 s −1 ). OsCc1::OsCRP1 RNAi plants showed chlorosis after 2 weeks of light stress treatments, while no visual symptoms were observed for OsCc1::OsCRP1 OX and NT plants (Figure 3b). This phenotype was confirmed by measuring the leaf chlorophyll using a Soil Plant Analysis Development (SPAD) chlorophyll meter. As shown in Figure 3c, chlorophyll content was similar between OsCc1::OsCRP1 OX and NT plants, while significantly lower in OsCc1::OsCRP1 RNAi plants. These results suggest a correlation between OsCRP1 expression and NDH-dependent CET activity under stress conditions. It has also been shown that the NDH-dependent CET activity is involved in a mechanism by which plants protect against drought, light, and high temperature stresses through increased production of ATP [33,38]. We set out to analyze the ATP contents of the OsCc1::OsCRP1 OX , NT and OsCc1::OsCRP1 RNAi plants before and after exposure to high light stress. Before light stress treatments, the ATP contents of OsCc1::OsCRP1 OX leaves were higher than NT leaves by 3.4%, whereas those of the OsCc1::OsCRP1 RNAi leaves were lower than NT leaves by 4.2% without difference in chlorosis.
However, after light stress treatments, the ATP contents of the OsCc1::OsCRP1 OX leaves were higher than NT leaves by 10.8%, whereas those of the OsCc1::OsCRP1 RNAi leaves were lower than NT leaves by 10.2% (Figure 3d). The ATP levels were higher in OsCc1::OsCRP1 OX plants and lower in OsCc1::OsCRP1 RNAi plants compared to NT plants, indicating that OsCRP1 is involved in the increased production of ATP through elevated NDH-dependent CET activity under high light stress conditions. Overexpression of OsCRP1 Confers Cold Stress Tolerance Chloroplast RNPs have been shown to confer cold stress tolerance to A. thaliana by influencing multiple chloroplast RNA processing steps [16]. We found that the expression level of OsCRP1 also increased under cold stress conditions (Figure 4a). These observations led us to examine the cold stress tolerance of 2-week-old OsCc1::OsCRP1 plants that had been treated with 4 °C for three days and then allowed to recover for seven days (Figure 4b). Most of the OsCc1::OsCRP1 plants survived (~85% survival rate), whereas only ~50% of the NT and ~30% of the OsCc1::OsCRP1 RNAi plants survived (Figure 4c), suggesting that overexpression of OsCRP1 significantly enhanced cold tolerance. Since cold stress has been reported to reduce the efficiency of photosystem II [39], we measured Fv/Fm values, an indicator of the photochemical efficiency of photosystem II, in plants after exposure to cold stress (Figure 4d). The Fv/Fm values of the OsCc1::OsCRP1 OX plants were higher than those of the NT and OsCc1::OsCRP1 RNAi plants during cold stress, indicating that the photochemical efficiency of photosystem II in the OsCc1::OsCRP1 OX plants was less damaged by the cold stress treatments than in NT and OsCc1::OsCRP1 RNAi plants. The NDH complex drives CET around photosystem I and enhances the production of ATP for photosynthesis and increases abiotic stress tolerance [31]. Thus, we set out to analyze the ATP contents of the OsCc1::OsCRP1 OX , NT and OsCc1::OsCRP1 RNAi plants before and after exposure to cold stress. After cold stress treatments, the ATP contents of the OsCc1::OsCRP1 OX leaves were higher than NT leaves by 5.7%, whereas those of the OsCc1::OsCRP1 RNAi leaves were lower than NT leaves by 18.4%. ATP levels in OsCc1::OsCRP1 OX and OsCc1::OsCRP1 RNAi plants were higher and lower, respectively, than those in NT plants (Figure 4e), indicating that overexpression of OsCRP1 confers cold tolerance via enhancement of NDH-dependent CET under cold stress conditions.
Two-week-old seedlings were exposed to 4 °C (low temperature) for the indicated times. OsUbi1 expression was used as an internal control. Values are the means ± SD of three independent experiments. (b) Phenotypes of OsCc1::OsCRP1 OX and OsCc1::OsCRP1 RNAi transgenic rice plants under cold stress at the vegetative stage. Three independent homozygous OsCc1::OsCRP1 OX and OsCc1::OsCRP1 RNAi lines and NT control plants were grown in soil for 2 weeks and exposed to cold stress for 3 days, followed by recovery. (c) Survival rate scored 7 days after recovery. Values represent means ± SD of three repeated tests. Asterisks indicate significant differences compared with NT (* p < 0.05, One-way ANOVA). (d) Chlorophyll fluorescence (Fv/Fm) of OsCc1::OsCRP1 OX , NT and OsCc1::OsCRP1 RNAi plants during a 3-day cold treatment. Fv/Fm values were measured in the dark to ensure sufficient dark adaptation. Data are shown as the mean ± SD (n = 30). (e) ATP contents in leaves of the OsCc1::OsCRP1 OX , NT and OsCc1::OsCRP1 RNAi plants before and after cold stress conditions. Ten two-week-old plants were used for each line, and the middle portion of the second leaf from the top was taken for analysis. Data bars represent the mean ± SD of three biological replicates, each of which had two technical replicates. Asterisks indicate significant differences compared with NT (* p < 0.05, One-way ANOVA). Overexpression of OsCRP1 Confers Drought Stress Tolerance We exposed the OsCc1::OsCRP1 OX and OsCc1::OsCRP1 RNAi plants to drought stress by withholding water for 3 consecutive days, during which drought-induced visual symptoms were observed (Figure 5a). Soil moisture content decreased similarly in all the pots, indicating that the drought stress was uniformly applied (Figure 5b). The OsCc1::OsCRP1 OX plants showed delayed visual symptoms of drought-induced damage, such as leaf rolling and wilting, compared to NT and OsCc1::OsCRP1 RNAi plants. After rehydration, the OsCc1::OsCRP1 OX plants rapidly recovered, whereas the NT and OsCc1::OsCRP1 RNAi plants did not recover well (Figure 5a). The OsCc1::OsCRP1 RNAi plants showed similar sensitivity to NT plants in their response to the drought stress. Collectively these results suggest that OsCRP1 overexpression enhanced drought stress tolerance. To verify the performance of the plants under drought stress conditions, Fv/Fm values were measured. In OsCc1::OsCRP1 RNAi and NT plants the values decreased one day after exposure to drought stress, while only a slight decrease was observed in OsCc1::OsCRP1 OX plants on day 2 (Figure 5c). Before drought stress treatments, ATP levels were similarly high in OsCc1::OsCRP1 OX and NT plants, but significantly lower in OsCc1::OsCRP1 RNAi plants. ATP levels in NT and OsCc1::OsCRP1 RNAi plants rapidly declined after exposure to drought stress conditions, whereas OsCc1::OsCRP1 OX plants showed a slow decrease (Figure 5d).
Taken together, our results indicate that in rice plants, OsCRP1 modulates CET activity via changing mRNA stability of NDH complex genes, which consequently confers drought stress tolerance. Discussion Chloroplast RNA metabolism is affected by various environmental changes, including light and temperature, and chloroplast RNA-binding proteins (cpRNPs) are known to play a central role in their post-transcriptional processing, such as splicing, editing, and stabilization [7]. It has been reported that in the model dicotyledon, A. thaliana, several cpRNPs enhance abiotic stress tolerance through their function as RNA chaperones [16,40]. However, the underlying molecular mechanisms of their abiotic stress effect have not been well studied in the monocotyledon, rice. Several reports have shown that cpRNPs are also involved in editing and 3′-end processing of chloroplast mRNAs [12,16,41], and that regulation of chloroplast mRNA stability by cpRNPs is important for development and abiotic stress responses [16,42]. For example, RIP analyses have demonstrated that A. thaliana CP33A is associated with the stability of multiple chloroplast mRNAs. Moreover, loss of CP33A results in albino plants that also show aberrant leaf development [42]. A. thaliana CP31A and CP29A are known to interact with and stabilize multiple chloroplast mRNAs that are associated with limiting the effects of cold stress on chloroplast development [16]. Our analysis of the chloroplast-localized cpRNP, OsCRP1, revealed that it has a broad range of target chloroplast RNAs (Figure 2a). Notably, the mRNA levels of most ndh genes decreased in NT plants after drought treatment, suggesting that drought treatment could reduce the transcription or stability of those mRNAs (Figure S4a). However, transcript levels of most ndh genes were significantly higher in OsCRP1 overexpressing plants compared to NT plants under both normal and drought conditions (Figure 2b and Figure S4b). These results suggest that OsCRP1 directly interacts with a set of cpRNAs, improving drought tolerance by enhancing the mRNA stability of NDH complex genes. We propose that the RNA stabilizing mechanism of OsCRP1 involves protecting the target RNA against 3′-exonucleolytic activity, analogous to the mechanism exhibited by A. thaliana CP31A [16].
The chloroplast NDH complex is a ferredoxin (Fd)-dependent PQ reductase that associates with the CET around PSI to catalyze electron transfer [43,44], which in turn leads to a transient increase in chlorophyll a fluorescence after the offset of actinic light [23]. NDH activity was not detectable in A. thaliana CP31 deficient mutants where fluorescence phenotypes were identical with ndhB, ndhD or ndhF mutant lines, suggesting that cpRNPs are critical for chloroplastic NDH enzyme activity [15,24,25,45,46]. We also observed increases in fluorescence in OsCc1::OsCRP1 OX but not in OsCc1::OsCRP1 RNAi plants, under heat stress conditions (Figure 3a). These independent lines of evidence support the idea that OsCRP1 modulates NDH complex activity. It was previously reported that NDH-defective mutants exhibited leaf chlorosis under high light stress, especially in ∆ndhB [33]. Moreover, constitutively high CET elevation in the hcef1 mutant does not occur in the hcef1 crr2-2 (NDH-defective) double mutant, suggesting that NDH modulates CET activity [47]. Here, we also found that leaves of OsCc1::OsCRP1 RNAi plants exhibited chlorosis under light stress conditions (Figure 3b). Furthermore, a decreased SPAD value in the knock-down plants indicated a reduction in CET activity, which correlated with low ATP accumulation in OsCc1::OsCRP1 RNAi plants (Figure 3c,d). When plants are exposed to abiotic stress conditions, such as high light, drought, high salt and cold, large amounts of cellular ATP are needed to support adaptive responses [24,31,48]. We observed improved tolerance of OsCc1::OsCRP1 OX plants to both drought and cold stress, whereas OsCc1::OsCRP1 RNAi and NT plants remained sensitive to drought and cold stress (Figures 4a and 5a). The stress-tolerant phenotype of the overexpressing plants can be explained by enhanced accumulation of ATP (Figures 4e and 5d), and a previous study proposed that increased ATP production by NDH-dependent CET involves vacuolar proton ATPases driving proton import. A study with soybean showed that an outward proton gradient across the tonoplast generated by a proton ATPase enhanced the vacuolar sequestration of Na + , resulting in enhanced salt tolerance [31]. Similarly, the OsCc1::OsCRP1 OX plants generated in this current study accumulated higher levels of ATP than NT plants under cold ( Figure 4e) and drought (Figure 5d) stress conditions, whereas the OsCc1::OsCRP1 RNAi plants had lower levels of ATP than NT plants under the same stress conditions. These observations suggest that OsCRP1 increases ATP generation by enhancing NDH-dependent CET under cold and drought stress conditions, leading to increased abiotic stress tolerance. In summary, we hypothesize that OsCRP1-mediated mRNA stabilization of NDH complex genes results in increased ATP production under stress conditions via enhancement of NDH-dependent CET. During activation of NDH-dependent CET, protons from the stroma are transferred into the thylakoid lumen, causing acidification. Increased proton levels inside the thylakoid lumen drive ATP synthesis and help maintain an ideal NADPH/ATP ratio, enhancing higher stress tolerance in OsCc1::OsCRP1 OX plants ( Figure 6). Decline of photosynthesis activity is one of the key features of plant abiotic-stress response and directly related to crop productivity. Our study provided an additional evidence that cpRNPs could be a promising target locus to develop the abiotic-stress tolerant crops preparing for climate change and sustainable agriculture. 
Figure 6. Schematic representation of NDH-dependent CET in OsCRP1 overexpressing plants under abiotic stress conditions. In OsCRP1 overexpressing plants, the binding affinity of OsCRP1 to NDH complex RNAs was increased under stress conditions, leading to stabilization of transcripts from NDH complex genes. Hence, the activity of NDH-dependent CET was increased, and protons from the stroma were transferred into the thylakoid lumen, resulting in acidification. The protons drive ATP synthesis, maintaining an optimal NADPH/ATP ratio. Drought and cold stress both induce an increase in ATP demand that may be fulfilled by NDH-dependent CET around photosystem I (PSI).

Plasmid Construction and Agrobacterium-Mediated Rice Transformation

To generate OsCRP1 (Os09g0565200) overexpression lines, the 969 base pair coding sequence (CDS) was isolated from rice (Oryza sativa cv. Dongjin) cDNA and cloned into the pSB11 vector using the Gateway™ cloning system (Invitrogen, USA). The rice OsCc1 promoter was used as a constitutive promoter [32], and the potato-derived (Solanum tuberosum) 3′pinII as a terminator (OsCc1::OsCRP1 OX). The OsCRP1 CDS without the stop codon was isolated from rice (O. sativa cv. Nakdong) cDNA and fused to GFP (OsCc1::OsCRP1-GFP) under control of the OsCc1 promoter with the 3′pinII as a terminator, as before.
The bar gene controlled by the CaMV 35S promoter and the 3′nos terminator were used for herbicide resistance selection. For the knockdown construct (OsCc1::OsCRP1 RNAi), the CDS was isolated from rice (O. sativa cv. Dongjin) cDNA and cloned into the pGOS2-RNAi vector [49] containing the bar selection marker using the Gateway™ cloning system. Primers used for vector construction are listed in Table S1. All transgenic plants were produced by Agrobacterium tumefaciens (LBA4404)-mediated transformation and tissue culture as previously described [50]. Three representative T5 homozygous transgenic lines were selected for further studies based on gene expression levels.

Subcellular Localization of OsCRP1

The detailed method for rice protoplast preparation and transient protoplast transformation has been described previously [51]. The OsCc1::OsCRP1:GFP plasmid DNA was transformed into the protoplasts using the polyethylene glycol-mediated method, with approximately 10⁶ cells per reaction. The transformed protoplasts were incubated for 16 h at 28 °C in the dark, and the GFP fluorescence of the transfected protoplasts was observed using a confocal laser scanning microscope (Leica TCS SP8 STED, Wetzlar, Germany) as in Park et al. [51].

qRT-PCR Analysis

The cDNAs of total and/or immunoprecipitated RNAs were synthesized using the RevertAid™ First Strand cDNA Synthesis kit with an oligo(dT) primer (Thermo Scientific, Waltham, MA, USA). Based on RNA amount, 20 ng of cDNA was used as a template for qRT-PCR analysis. The PCR enzyme and fluorescent dye were supplied in the 2× Real-time PCR Pre-mix with EvaGreen (SolGent, Seoul, Korea), and the qRT-PCR experiments were performed with an MX3005p qPCR system (Agilent Technologies, CA, USA). The thermocycling conditions were 95 °C for 10 min followed by 40 cycles of 95 °C for 30 s and 60 °C for 1 min. The gene-specific primer pairs are listed in Supplemental Table S1 and were checked by melting curve analysis (55-95 °C at a heating rate of 0.1 °C s⁻¹). The qRT-PCR values of cDNAs synthesized from total RNAs were normalized to the OsUbi1 (Os06g0681400) gene, whereas the total input per experiment was used for the cDNAs from immunoprecipitated RNAs. Total RNA samples were extracted from the leaves of OsCc1::OsCRP1 OX, NT and OsCc1::OsCRP1 RNAi plants using the Hybrid-R kit (GeneAll, Lisbon, Portugal). Each sample was treated with 70 µL of DNase reaction buffer (DRB) containing 2 µL of DNase I (GeneAll, Lisbon, Portugal) for 10 min to avoid DNA contamination. To synthesize cDNA, 1 µL of RNA was used with oligo(dT) primers and 1 µL of RevertAid™ reverse transcriptase (Thermo Fisher Scientific, Waltham, MA, USA). Reverse transcription was performed at 42 °C for 90 min and terminated by incubating the reaction mixture for 5 min at 70 °C. qRT-PCR was carried out on an Mx3000p real-time PCR machine with the Mx3000p software in a 20 µL reaction mixture containing 1 µL of cDNA template, 2 µL of primer, 0.04 µL of ROX reference dye (Invitrogen, Carlsbad, CA, USA), 1 µL of 20× EvaGreen (SolGent, Daejeon, Korea), 10 µL of 2× premix, and dH₂O. Cycling conditions were 1 cycle at 95 °C for 10 min and 55 cycles of 95 °C for 30 s, 58 °C for 30 s and 72 °C for 30 s. The analysis was carried out with three biological and three technical replicates. OsUbi1 (Os06g0681400) was used as an internal control in all experiments. Primers used for qRT-PCR are listed in Table S1.
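The relative quantification step described above (normalization of total-RNA cDNAs to OsUbi1) is not spelled out in detail in the text; a minimal sketch of one common way to perform it, the 2^-ΔΔCt method, is given below. The sample names and Ct values are hypothetical placeholders, and the choice of the ΔΔCt method is an assumption for illustration, not a statement of how the authors computed their values.

```python
# Minimal sketch: relative expression from qRT-PCR Ct values by the 2^-ddCt method.
# OsUbi1 as the internal control is taken from the text; the sample names and Ct values
# below are hypothetical placeholders, not data from the study.

def delta_ct(target_ct: float, reference_ct: float) -> float:
    """Ct of the gene of interest minus Ct of the internal control (OsUbi1)."""
    return target_ct - reference_ct

def relative_expression(sample_target_ct: float, sample_ref_ct: float,
                        calib_target_ct: float, calib_ref_ct: float) -> float:
    """2^-ddCt fold change relative to a calibrator sample (e.g., non-transgenic plants)."""
    d_sample = delta_ct(sample_target_ct, sample_ref_ct)
    d_calib = delta_ct(calib_target_ct, calib_ref_ct)
    return 2.0 ** -(d_sample - d_calib)

if __name__ == "__main__":
    # Hypothetical Ct values: an ndh transcript in OsCRP1-OX versus NT leaves, normalized to OsUbi1.
    fold_change = relative_expression(sample_target_ct=24.1, sample_ref_ct=19.8,
                                      calib_target_ct=26.0, calib_ref_ct=19.9)
    print(f"fold change (OX vs NT): {fold_change:.2f}")
```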
Stress Treatments and Tolerance Evaluation

OsCRP1 transgenic and non-transgenic plants (O. sativa cv. Dongjin) were sown on MS (Murashige and Skoog) medium and incubated in a dark growth chamber for 4 days at 28 °C. Seedlings were then transferred to a growth chamber with a light/dark cycle of 16 h light/8 h dark and grown for 1 additional day before transplanting to soil. For cold stress treatments, fifteen plants from each line were transplanted into five soil pots (4 cm × 4 cm × 6 cm; three plants per pot) within a container (59 cm × 38.5 cm × 15 cm) and grown for 2 additional weeks in a growth chamber (16 h light/8 h dark cycle) at 30 °C. Cold stress was imposed by exposing the plants to 4 °C for 3 days, and the plants were then left to recover for 7 days at 30 °C. For drought stress treatments, thirty plants from each line were transplanted into ten soil pots (4 cm × 4 cm × 6 cm; three plants per pot) within a container (59 cm × 38.5 cm × 15 cm) and grown for an additional 5 weeks in a greenhouse (16 h light/8 h dark cycle) at 30 °C. Drought stress was imposed by withholding water for 3 days and re-watering for 5 days. Stress-induced symptoms were monitored by imaging transgenic and NT plants at the indicated time points using a NEX-5N camera (Sony, Tokyo, Japan). The soil moisture contents were measured at the indicated time points using an SM150 Soil Moisture Sensor (Delta-T Devices, Cambridge, UK). Transient chlorophyll a fluorescence was measured using the Handy-PEA fluorimeter (Hansatech Instruments, Norfolk, UK) as previously described [52]. Chlorophyll a fluorescence was measured on the longest leaves of each plant after 1 h of dark adaptation to ensure sufficient opening of the reaction centers.

RNA-Immunoprecipitation (RIP) Analysis

RIP experiments were performed as previously described [53,54] at 4 °C, with minor modifications. Leaf tissue from 14-day-old rice seedlings was powdered in liquid nitrogen, and the powder was incubated for 20 min with shaking in polysome lysis buffer consisting of 100 mM KCl, 5 mM MgCl₂, 10 mM HEPES (pH 7.0), 0.5% Nonidet P-40, 1 mM DTT, 100 units mL⁻¹ RNase Out RNase inhibitor (Invitrogen, Carlsbad, CA, USA), 2 mM vanadyl ribonucleoside complexes solution (Sigma-Aldrich, St. Louis, MO, USA), and protease inhibitor cocktail tablets (Roche, Mannheim, Germany). The supernatant was separated from the crude extract by centrifuging at 16,000× g for 20 min, and after quantification of the soluble proteins using the Bradford method, lysate containing 1 mg of protein was used for the next step. To confirm the quality of the OsCRP1-GFP protein, a preliminary immunoblotting experiment was carried out. Before the immunoprecipitation step, the lysate was precleared by rotating at 4 °C for 2 h with a 50% slurry of protein A-agarose beads equilibrated in lysis buffer containing 1 mg mL⁻¹ bovine serum albumin (BSA). After incubation of the lysate with specific antibodies, the protein-RNA complexes were pulled down using protein A-agarose beads. The beads were washed four times with polysome lysis buffer without RNase and proteinase inhibitors and an additional four times with the same buffer containing 1 M urea. Finally, the RNA was eluted from the beads with polysome lysis buffer containing 0.1% SDS and 30 µg proteinase K. The RNA was purified and enriched using TRIzol reagent (Invitrogen Life Technologies), and 20 µg of glycogen was added during the ethanol precipitation step.
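For the immunoprecipitated RNAs, the qRT-PCR section above states that values were expressed relative to the total input rather than to OsUbi1. One common way to report such RIP-qPCR enrichment is the percent-of-input calculation sketched below; the input fraction and Ct values are illustrative assumptions rather than numbers from the study.

```python
import math

def percent_of_input(ip_ct: float, input_ct: float, input_fraction: float = 0.10) -> float:
    """
    Percent-of-input for RIP qRT-PCR.
    input_fraction: fraction of the lysate saved as the 'input' sample (assumed 10% here).
    The input Ct is first adjusted to represent 100% of the starting material.
    """
    adjusted_input_ct = input_ct - math.log2(1.0 / input_fraction)
    return 100.0 * 2.0 ** (adjusted_input_ct - ip_ct)

if __name__ == "__main__":
    # Hypothetical Ct values for one NDH complex transcript in an OsCRP1-GFP pulldown.
    print(f"{percent_of_input(ip_ct=28.5, input_ct=24.0):.2f}% of input recovered")
```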
RNA-Seq

Total RNA was prepared from leaf tissue of two-week-old transgenic and NT plants using the RNeasy plant mini kit (Qiagen, Valencia, Spain), according to the manufacturer's instructions. RNA quality and purity were assessed with a Thermo Scientific NanoDrop 2000 and an Agilent Bioanalyzer 2100. RNA-seq libraries were prepared using the TruSeq RNA Library Prep Kit (Illumina, San Diego, CA, USA) according to the manufacturer's instructions and sequenced (Macrogen Inc., Seoul, Korea) on the Illumina HiSeq2000 (Illumina, San Diego, USA). Single-end sequences were generated; raw sequence reads were trimmed to remove adaptor sequences, and reads with a quality lower than Q30 were removed using the clc quality trim software (CLCbio). All reads were assembled with the clc_ref_assemble 6 program (version 4.06; Aarhus, Denmark), using annotated genes and sequences from the RAP-DB (http://rapdb.dna.affrc.go.jp; accessed 2 February 2019; Chloroplast_GCF_001433935.1_IRGSP-1.0).

Analysis of NDH-Dependent CET

NDH-dependent CET was determined by monitoring chlorophyll a fluorescence with a mini-PAM (Walz, Germany) as previously described [25]. Plants were adapted in growth chambers (22 °C, 28 °C and 35 °C, dark) for at least 30 min prior to measurements. Leaves were exposed to actinic light (AL: 200 µmol photons m⁻² s⁻¹) for 5 min after the light was turned on (Fo level: minimum yield of chlorophyll a fluorescence) to drive electron transport between photosystem II and photosystem I. Maximum fluorescence (Fm) and steady-state fluorescence (Fs) were determined under these conditions. The transient increase in chlorophyll a fluorescence was monitored after the actinic light was turned off.

Determination of ATP Content

ATP measurements were performed as described in the ENLITEN® ATP Assay Kit (Promega, USA) protocol. Leaf samples (0.05 g) were transferred to 2 mL tubes containing 1 mL of Tris-HCl (pH 7.8), and the tubes were then heated in a water bath at 100 °C for 10 min and cooled to room temperature. To determine the ATP content, 10 µL of each cooled sample was added to wells containing 100 µL of rL/L reagent. The ATP standard curve was obtained using the ATP standard samples provided with the kit. Luminescence was measured with an Infinite M200 system (Tecan, Männedorf, Switzerland), and ATP contents were calculated from the ATP standard curve.
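The ATP determination above converts luminescence readings into ATP content through the kit's standard curve. The sketch below shows one way such a conversion can be coded, assuming a simple linear least-squares fit to the standards; the standard concentrations and readings are made-up placeholders, not values from the ENLITEN kit or from this study.

```python
import numpy as np

# Hypothetical ATP standards (nM) and their luminescence readings (RLU);
# real values come from the kit's standards and the plate reader.
std_conc = np.array([0.1, 1.0, 10.0, 100.0, 1000.0])      # nM ATP
std_rlu = np.array([55.0, 540.0, 5300.0, 52000.0, 515000.0])

# Fit a straight line through the standards (luminescence assumed ~linear in [ATP]).
slope, intercept = np.polyfit(std_conc, std_rlu, deg=1)

def rlu_to_atp_nM(rlu: float) -> float:
    """Interpolate a sample's luminescence back to ATP concentration (nM)."""
    return (rlu - intercept) / slope

if __name__ == "__main__":
    sample_rlu = 23000.0   # hypothetical leaf extract reading
    print(f"ATP in extract: {rlu_to_atp_nM(sample_rlu):.1f} nM")
```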
2021-02-11T06:19:42.146Z
2021-02-01T00:00:00.000
{ "year": 2021, "sha1": "71774a6e69dfbdbbe31fcc5eeed41c71731cfe6d", "oa_license": "CCBY", "oa_url": "https://doi.org/10.3390/ijms22041673", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "81e59264eda9527d15c67abd38adf261a125d9a1", "s2fieldsofstudy": [ "Environmental Science", "Biology" ], "extfieldsofstudy": [ "Medicine" ] }
73511339
pes2o/s2orc
v3-fos-license
Insulin promotes invasion and migration of KRASG12D mutant HPNE cells by upregulating MMP‐2 gelatinolytic activity via ERK‐ and PI3K‐dependent signalling Abstract Objectives Hyperinsulinemia is a risk factor for pancreatic cancer, but the function of insulin in carcinogenesis is unclear, so this study aimed to elucidate the carcinogenic effects of insulin and the synergistic effect with the KRAS mutation in the early stage of pancreatic cancer. Materials and methods A pair of immortalized human pancreatic duct‐derived cells, hTERT‐HPNE E6/E7/st (HPNE) and its oncogenic KRASG12D variant, hTERT‐HPNE E6/E7/KRASG12D/st (HPNE‐mut‐KRAS), were used to investigate the effect of insulin. Cell proliferation, migration and invasion were assessed using Cell Counting Kit‐8 and transwell assays, respectively. The expression of E‐cadherin, N‐cadherin, vimentin and matrix metalloproteinases (MMP‐2, MMP‐7 and MMP‐9) was evaluated by Western blotting and/or qRT‐PCR. The gelatinase activity of MMP‐2 and MMP‐9 in conditioned media was detected using gelatin zymography. The phosphorylation status of AKT, GSK3β, p38, JNK and ERK1/2 MAPK was determined by Western blotting. Results The migration and invasion ability of HPNE cells was increased after the introduction of the mutated KRAS gene, together with an increased expression of MMP‐2. These effects were further enhanced by the simultaneous administration of insulin. The use of MMP‐2 siRNA confirmed that MMP‐2 was involved in the regulation of cell invasion. Furthermore, there was a concentration‐ and time‐dependent increase in gelatinase activity after insulin treatment, which could be reversed by an insulin receptor tyrosine kinase inhibitor (HNMPA‐(AM)3). In addition, insulin markedly enhanced the phosphorylation of PI3K/AKT, p38, JNK and ERK1/2 MAPK pathways, with wortmannin or LY294002 (a PI3K‐specific inhibitor) and PD98059 (a MEK1‐specific inhibitor) significantly inhibiting the insulin‐induced increase in MMP‐2 gelatinolytic activity. Conclusions Taken together, these results suggest that insulin induced migration and invasion in HPNE and HPNE‐mut‐KRAS through PI3K/AKT and ERK1/2 activation, with MMP‐2 gelatinolytic activity playing a vital role in this process. These findings may provide a new therapeutic target for preventing carcinogenesis and the evolution of pancreatic cancer with a background of hyperinsulinemia. | INTRODUC TI ON Pancreatic ductal adenocarcinoma (PDAC) is a lethal digestive malignancy, and its overall 5-year survival is less than 8%. It is the fourth most common cause of cancer-related death in the United States. 1 Although the incidence of pancreatic cancer has increased recently, the survival rate has not improved significantly. 2 Surgical resection is the only curative treatment for pancreatic cancer, but the surgical excision rate is less than 20% due to poor early diagnosis. 3 Therefore, a better understanding of the molecular mechanisms governing pancreatic cancer carcinogenesis is required for the prevention, early diagnosis and treatment of pancreatic cancer. The mutation of the KRAS proto-oncogene is thought to be an initiating genetic lesion in the stepwise progression of pancreatic cancer. 4 Previous studies revealed that the increasing KRAS mutation frequency correlated with the PanIN stage and it is nearly universal (>95%) in human PDAC. 
5,6 Moreover, transgenic mouse models confirmed that the KRAS G12D mutation can reprogramme cells into a duct-like fate, which, in turn, induces acinar-to-ductal metaplasia, pancreatic intraepithelial neoplasia (PanINs) and, ultimately, PDAC. 6,7 Interestingly, in another mouse model with a KRAS G12V mutation, PanINs could only be induced if chronic inflammation and the mutation existed at the same time. 8 These studies suggested that the occurrence of pancreatic cancer is more likely to be a combination of genetic and non-genetic events. Growing evidence indicates that there is a close connection between type 2 diabetes and the increased incidence of pancreatic cancer. 9,10 It has been reported that half of the patients with pancreatic cancer have diabetes, and a large cohort study suggested a 2.17-fold risk of pancreatic malignancy in type 2 diabetic patients. 12,13 In addition, studies in genetically engineered mouse models have also shown that oncogenic KRAS can induce mPanIN spontaneously 14 and that type 2 diabetes caused by a high-fat, high-calorie diet can accelerate the development of precancerous lesions. 15 Numerous studies have investigated how insulin, rather than blood glucose, is an independent risk factor for pancreatic cancer. 16,17 However, the direct contribution of hyperinsulinemia to the increased incidence of pancreatic cancer in type 2 diabetes remains unclear. In this study, we explored the role of insulin in the malignant progression of human pancreatic duct-derived cells and the underlying mechanism.

| RNA isolation and quantitative real-time PCR

Total RNA was isolated from cells using TRIzol reagent (Life Technologies, Carlsbad, CA) according to the manufacturer's protocol. Then, the RNA was reverse-transcribed using PrimeScript RT Master Mix (Takara, Tokyo, Japan). RT-qPCR was performed to detect mRNA expression with FastStart Universal SYBR Green Master (Roche, IN), using β-actin as the loading control. The MMP-2 (matrix metalloproteinase 2) and β-actin primers were as follows: for MMP-2, 5′-TAC AGG ATC ATT GGC TAC ACA CC-3′ (sense) and 5′-GGT CAC ATC GCT CCA GAC T-3′ (antisense); and for β-actin,

| Transfection of small interfering RNA

The siRNAs used in the study were synthesized by GenePharma

| Cell proliferation assay

The HPNE and HPNE-mut-KRAS cells were seeded into 96-well plates at a density of 1.5 × 10³ cells per well. The premixed medium (10 μL of Cell Counting Kit-8 reagent (Dojindo, Tokyo, Japan) in 100 μL of medium) was added to each well. After incubation at 37°C for 3 hours in the dark, the absorbance of each well was measured at 450 nm to assess cell viability using a microplate reader.

| Migration and invasion assays

The impact of insulin on cell migration and invasion was assessed

| Western blot analysis

Briefly, protein was extracted using a total protein extraction kit (Keygen BioTECH, Nanjing, China). The ice-cold lysis buffer mixture contained the following reagents: 1 mL lysis buffer, 10 μL 100 mmol/L PMSF, 1 μL protease inhibitors and 10 μL phosphatase inhibitors. The extracted protein was mixed with 5× SDS and boiled.
Standard methods were utilized to analyse protein expression, 18 and β-actin was used as a loading control. | Gelatin zymography Both cell lines were grown to 80% confluence and then incubated in serum-free medium. All inhibitors as indicated in the figure legends were added 2 hours prior to insulin, and the cells were allowed to grow for 24 hours. The conditioned medium was concentrated using the Centricon-10 system. Quantified amounts | Statistical analysis Statistical analysis was conducted using SPSS 24.0 statistical software (IBM Corp., Armonk, NY, USA). Differences in the mean of samples were analysed using one-way ANOVA or Student's t test. Statistical data are presented as the mean ± SD (n = 3), and P < 0.05 was considered significant. | Effects of insulin on proliferation, migration and invasion in vitro A previous study demonstrated that insulin could promote proliferation in immortalized pancreatic ductal cell lines, 19 | Involvement of MMP-2 in insulin-induced migration and invasion Both cell lines were incubated with insulin (20 nmol/L) for 24 hours and Western blotting and RT-qPCR were conducted to determine whether insulin modulates the expression of MMPs and critical molecules in the epithelial-mesenchymal transition (EMT) process. As shown in Figure 2A Figure 2E). In addition, the expression of the insulin receptor-beta (IR-β) was detected in both cell lines. As shown in Figure 2E, KRAS mutation induced the increased expression, but insulin stimulation had no effect. It has been suggested that MMP-2 is involved in the regulation of cell migration and invasion. 20,21 To further investigate the role of MMP-2 in the stimulatory effects of insulin on cellular migration and invasion, a blockade study using MMP-2 siRNA was carried out with insulin treatment. The interference efficiency of three siRNAs was evaluated via qRT-PCR and gelatin zymography, and siRNA#2 was selected for the following studies ( Figure 4A,B). | Involvement of insulin receptor in insulininduced migration and invasion To explore the potential mechanism of insulin in promoting MMP-2 expression, we evaluated whether the insulin receptor or insulin-like growth factor 1 receptor (IGF1R) was involved in this process. As shown in Figure 5A | Involvement of PI3K/AKT and ERK1/2 MAPK pathways in insulin-induced migration and invasion It has been suggested that insulin can activate classical PI3K/AKT and MAPK pathways. 22,23 In this study, we found that the phosphorylation levels of GSK3β and MAPK pathways were upregulated in the KRAS mutant cells and the activating effect of insulin on PI3K/ AKT/GSK3β, MEK/ERK signalling pathway had been significantly augmented by the introduction of mutant KRAS gene ( Figure 6A). In addition, there was also upregulated phosphorylation of JNK, ERK1/2 and p38 in both cells with insulin stimulation (Figure 6B-D). Taken together, these results demonstrated that insulin can activate the PI3K/AKT and MAPK pathways in both cell lines. To examine whether PI3K/AKT and MAPK pathways were associated with the insulin-induced increase in MMP-2 gelatinolytic activity, HPNE cells were pre-treated with PI3K/AKT and MAPK (ERK, JNK and p38)-specific inhibitors before exposure to insulin. 
Compared with insulin treatment alone, wortmannin or LY294002 (a PI3K-specific inhibitor), rapamycin (an mTOR-specific inhibitor) and PD98059 (a MEK1-specific inhibitor) significantly inhibited the insulin-dependent increase in MMP-2 gelatinolytic activity, while LY303511 (a negative control for LY294002), SB203580 (a p38-specific inhibitor) and SP600125 (a JNK-specific inhibitor) had no effect (Figure 7A,B). Similar results were observed in the HPNE-mut-KRAS cells (Figure 8A,B). To further investigate whether the PI3K/AKT pathway has crosstalk with the MEK/ERK pathway, given the markedly upregulated phosphorylation of the PI3K/AKT pathway in the KRAS mutant cells, we treated both cell lines with a MEK1 inhibitor (PD98059) and a PI3K inhibitor (LY294002), respectively. We observed a slight induction of the AKT pathway in response to MEK1 inhibition in both cell lines, while inhibition of PI3K had no effect on the MEK/ERK pathway (Figure S1).

| DISCUSSION

Type 2 diabetes is a systemic disease characterized by hyperinsulinemia and hyperglycaemia. Epidemiological evidence suggests that type 2 diabetes can increase the risk of multiple cancers and that patients who have a history of diabetes for more than 5 years have a significantly increased risk of pancreatic cancer. Insulin is secreted by the pancreatic β cells and transported through the portal vein. Moreover, the pancreas is exposed to higher concentrations of endogenous insulin than peripheral blood, and a previous study has demonstrated that physiological concentrations of insulin between 0.2 and 20 nmol/L can protect pancreatic cells from apoptosis via the insulin receptor. 24,25 Therefore, the insulin concentration used in our study is physiologically attainable in the pancreas.

FIGURE 4 Insulin promoted the migration and invasion activity through the upregulation of MMP-2 gelatinolytic activity. (A and B) The different interference efficiencies of three siRNAs for MMP-2 were evaluated by qRT-PCR and gelatin zymography, with both migration (C and D; 36 h) and invasion (E and F; 48 h) capability of the two cell lines suppressed by siRNA#2. Cells that migrated to the lower compartment and adhered to the bottom surface of the membrane were stained and quantified. The number of migrated HPNE cells without insulin treatment on the bottom surface of the membrane was used as a basal control. The data are expressed as the mean ± SD (n = 3). *P < 0.05, **P < 0.01, ***P < 0.001 vs untreated control.

FIGURE 5 Insulin promoted the migration and invasion of the cells via the IR. Representative results of MMP-2 activity after insulin stimulation with different receptor inhibitors (A and B). Both cell lines were pre-treated with either HNMPA-(AM)3 (50 μmol/L) or PPP (20 μmol/L), which are specific insulin receptor tyrosine kinase and IGF1R inhibitors, respectively, for 4 h (A and B). In transwell experiments, cells were treated with insulin (20 nmol/L) combined with the inhibition of IR or IGF1R, and then the migration assay (C and D; 36 h) and invasion assay (E and F; 48 h) were conducted. Cells that migrated to the lower compartment and adhered to the bottom surface of the membrane were stained and quantified. The number of migrated HPNE cells without insulin treatment on the bottom surface of the membrane was used as a basal control. The data are expressed as the mean ± SD (n = 3). *P < 0.05, **P < 0.01, ***P < 0.001 vs untreated control.
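The group comparisons reported in these figure legends (mean ± SD, n = 3, one-way ANOVA or Student's t test, significance at P < 0.05) were performed in SPSS according to the statistical analysis section. Purely to illustrate the same tests, the sketch below runs them in SciPy on hypothetical replicate values; it does not reproduce any result from the study, and the group names and numbers are invented placeholders.

```python
import numpy as np
from scipy import stats

# Hypothetical densitometry values (n = 3 replicates per group); not study data.
control = np.array([1.00, 0.95, 1.05])          # untreated cells
insulin = np.array([1.80, 1.95, 1.70])          # insulin 20 nmol/L
insulin_pd98059 = np.array([1.10, 1.05, 1.20])  # insulin + MEK1 inhibitor

# Two-group comparison (Student's t test), as used for pairwise contrasts.
t_stat, p_two_group = stats.ttest_ind(control, insulin)

# Multi-group comparison (one-way ANOVA), as used when several treatments are compared.
f_stat, p_anova = stats.f_oneway(control, insulin, insulin_pd98059)

print(f"t test, control vs insulin: p = {p_two_group:.4f}")
print(f"one-way ANOVA across groups: p = {p_anova:.4f}")
```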
Numerous studies have suggested that insulin, rather than blood glucose, is an independent risk factor for pancreatic cancer. 16,17 Indeed, insulin can promote pancreatic cancer cell viability and cancer progression. 19,26 Therefore, this study explored the role of physiological concentrations of insulin in the malignant progression of human pancreatic duct-derived cells, as well as clarifying the underlying mechanism in vitro. The extracellular matrix (ECM) plays an important role in maintaining the integrity of tissue structure, and its degradation and basement membrane breakdown are essential for the early stage of local invasive events. There is considerable evidence that MMPs, particularly MMP-2, play a vital role in promoting tumour invasion, enabling the disintegration of epithelial tissue and cell migration or invasion. 21,27,28 Moreover, the decomposition of ECM leads to the release of ECM-bound factors, which, in turn, are involved in the regulation of pathological parameters, 29 angiogenesis or lymphangiogenesis, 30,31 chronic inflammation, 32 metastasis and tumour growth. 33,34 Importantly, the active MMP isozyme is highly expressed in PDAC cells 35,36 and serum levels of MMP-2 have prognostic significance in pancreatic cancer patients. 37 MMP-2 expression was associated with microvessel density in pancreatic cancer, along with higher lymph node metastasis. 38 Taken together, these findings suggest that MMP-2 may act as a key regulator in the progression of pancreatic tumorigenesis. Furthermore, recent research has demonstrated that circulating MMP-2 levels are significantly increased in diabetic patients. 39 The KRAS mutation is a critical determinant in the early stage of pancreatic ductal adenocarcinoma and is able to drive mature pancreatic cells to de-differentiate into duct-like cells and, ultimately, PDAC. 4 Numerous studies have focused on the role of KRAS mutation in promoting tumorigenesis. In an in vitro study, microinjection of mutant K-Ras G12V into primary pancreatic ductal cells induced a phenotypic conversion and an increase in proliferation. 40 Oncogenic KRAS G12D can continuously activate its downstream pathways, which leads to a series of neoplasia-related events, including promotion of proliferation, suppression of apoptosis, altered metabolic pathways, remodelling of the microenvironment, evasion of the immune response, and cell migration and metastasis. 41

FIGURE 6 Effect of insulin on the activity of PI3K/AKT and MAPK signalling in HPNE and HPNE-mut-KRAS cells. Cells were exposed to 20 nmol/L of insulin for various amounts of time. Whole-cell lysates were extracted for the detection of the protein levels of (A) p-AKT (Ser473) and AKT, p-GSK3β (Ser9) and GSK3β, (B) p-JNK (Thr183/Tyr185) and JNK, (C) p-ERK1/2 (Thr202/Tyr204) and ERK1/2, and (D) p-p38 (Thr180/Tyr182) and p38 by Western blotting. β-Actin served as the loading control. The data are expressed as the mean ± SD (n = 3). *P < 0.05, **P < 0.01, ***P < 0.001 vs untreated control.

Importantly, a previous study in a conditional KRAS G12D mouse model fed a high-fat, high-calorie diet demonstrated that metabolic syndrome, with hyperinsulinemia as one of its characteristics, can accelerate the development of mPanINs in KRAS LSL-G12D;Pdx1-Cre mice. 15 In addition, the relationship between insulin and KRAS mutation has also been studied in lung cancer, where it has been reported that insulin/IGF1 signalling is important for lung cancer initiation after KRAS mutation.
42 Therefore, KRAS mutation is essential for the occurrence of PDAC via increasing architectural and cytological atypia. In this study, we attempted to model the stages of PDAC in vitro using HPNE to represent the "normal" KRAS wild-type baseline stage and HPNE-mut-KRAS to represent a KRAS mutant stage. 43,44

FIGURE 7 A, Inhibition of the PI3K/AKT pathway with wortmannin (50 nmol/L) or LY294002 (20 μmol/L) inhibited insulin-mediated (20 nmol/L; 24 h) MMP-2 activation in HPNE cells. LY303511 (20 μmol/L) was used as a negative control. Rapamycin (25 ng/mL), an inhibitor of mTOR/p70 S6 kinase signalling, also affected insulin-induced MMP-2 activation. B, The MEK1 inhibitor PD98059 (50 μmol/L) significantly inhibited insulin-induced (20 nmol/L; 24 h) MMP-2 gelatinolytic activity, whereas the JNK inhibitor SP600125 (15 μmol/L) and the p38 inhibitor SB203580 (25 μmol/L) had no effect. Cells were pre-incubated with inhibitors for 4 h before insulin treatment. The data are expressed as the mean ± SD (n = 3). *P < 0.05, **P < 0.01, ***P < 0.001 vs untreated control.

An in vivo study showed that mice with the KRAS mutation could develop mPanIN lesions. 15 We hypothesized that MMP-2 is more likely to participate in the dynamic regulation of ECM remodelling and chronic inflammation in diabetic patients without KRAS mutation. Conversely, elevated MMP-2 probably leads to a higher level of PanIN lesions in patients carrying the mutation. Interestingly, we found that siRNA#1 effectively reduced MMP-2 mRNA but only partly reduced insulin-induced MMP-2 gelatinolytic activity. We reviewed the literature and found that different transcripts have different efficiencies when translated into proteins. The efficiency of this process is influenced by many factors, including the efficiency of translation, protein modification and degradation, as well as environmental factors. 46,47 In our experiments, we detected MMP-2 gelatinolytic activity in the conditioned medium, which was also affected by the exocrine function of the HPNE cell lines. Therefore, MMP-2 gelatinolytic activity in conditioned medium can be affected by multiple factors in our study. The mechanism for the regulation of MMP-2 gelatinolytic activity in this study remains largely unknown. Downregulation of the insulin receptor can inhibit cancer cell proliferation and metastasis, altering downstream signalling in vivo. 48 It has been shown that insulin receptors have a high affinity for insulin (~10⁻¹⁰ mol/L), while IGF1R has a higher affinity for IGF1 and IGF2 (~10⁻¹⁰ mol/L), which is 100-fold higher than that for insulin. 49 Our data suggest that the KRAS mutation, rather than insulin, can induce an increased expression of insulin receptors, the mechanism of which warrants further study. The use of an insulin receptor tyrosine kinase inhibitor significantly inhibited insulin-induced migration, invasion and MMP-2 gelatinolytic activity. However, inhibition of IGF1R did not have a significant effect. These results suggest that insulin can upregulate MMP-2 via its classical receptor, independent of IGF1R, in human pancreatic cells. As mentioned above, MMP-2 plays an important role in the development of PDAC. Moreover, several studies have also shown that overexpression of MMP-2 is associated with the progression of multiple cancers, as well as metastases. 50,51 Inhibiting the expression of MMP-2 can significantly suppress tumour progression. 52

FIGURE 9 The PI3K/AKT and ERK MAPK pathways are involved in the regulation of MMP-2.
Schematic representation of the proposed mechanism of MMP-2 expression in both HPNE cell lines. Phosphorylation of the insulin receptors by insulin drives the downstream activation of PI3K/AKT and ERK. Increased ERK or PI3K/AKT activity results in enhanced MMP-2 gelatinolytic activity involved in migration, invasion and tumour progression.

Therefore, the IR tyrosine kinase may serve as a promising therapeutic target for preventing pancreatic carcinogenesis. Linsitinib (OSI-906), a dual inhibitor of the insulin receptor and IGF1R, has been examined in clinical trials for solid tumours. 53 It may be possible in the future to prevent pancreatic cancer by targeting high-risk groups (e.g., those with a family history) among patients with long-term type 2 diabetes. However, it is of note that a high concentration of insulin can lead to changes in the EMT phenotype of breast cancer cells via IGF1R. 54 In addition, we found that these cell models derived from exocrine tissue required doses of insulin higher than the physiological dose to elicit a response via IGF1R. 19 This finding suggests that pancreatic precursor cancer cells may react to physiological concentrations of insulin via the insulin receptor, whereas high levels of insulin would be expected to activate IGF1R. Nonetheless, at the physiological concentration of 20 nmol/L used in this study, there was no significant change in the expression of EMT-related molecules, but this requires further investigation. Increasing evidence has suggested that insulin can activate classical PI3K/AKT and MAPK signalling via binding to insulin receptors. 19,24,55 Additionally, it has been shown that MMP-2 expression is critically mediated by the MAPK or PI3K/AKT pathways in various cell types. 56,57 Our experimental results showed that the phosphorylation of PI3K/AKT and ERK1/2 MAPK signalling molecules in normal HPNE cells is time-dependent and that phosphorylation levels are higher in the KRAS mutant cells. In addition, insulin also increased the phosphorylation of JNK and p38 MAPKs. Further study revealed that the PI3K/AKT pathway has crosstalk with the MEK/ERK pathway (Figure S1). These results suggested a feedback mechanism also observed in other pancreatic cell lines. 63 Our results suggested that the in multiple tumours, including pancreatic cancer. 70,71 The present study provides evidence that the insulin-promoted MMP-2 gelatinolytic activity was upregulated partly through PI3K/AKT/mTOR signalling. Taken together, our findings suggest that insulin-induced activation of ERK1/2 MAPK and PI3K/AKT/mTOR signalling may be involved in invasion and migration through the upregulation of MMP-2 gelatinolytic activity. However, the mechanism by which insulin interacts with these two signalling pathways to cause MMP-2-regulated cell invasion and migration is unclear and requires further in vivo investigation. In conclusion, this study demonstrated that insulin regulates MMP-2 gelatinolytic activity via its "metabolic" PI3K/AKT and "mitogenic" ERK1/2 signalling pathways in immortalized human pancreatic ductal cell lines, as well as the synergistic effect of hyperinsulinemia and KRAS mutation in the early stage of pancreatic cancer.
Consequently, the induction of MMP-2 by insulin may contribute to the degradation of ECM, the breakdown of the basement membrane, increased local infiltration and distant metastasis, which may explain the increased incidence of pancreatic cancer in patients with hyperinsulinemia and type 2 diabetes.
2019-03-11T17:24:37.799Z
2019-03-05T00:00:00.000
{ "year": 2019, "sha1": "4a2d8690429dea644acc8e91582deb4cac19958a", "oa_license": "CCBY", "oa_url": "https://onlinelibrary.wiley.com/doi/pdfdirect/10.1111/cpr.12575", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "578fb8f64609ff2c1b91adc426b1a782279fb464", "s2fieldsofstudy": [ "Biology", "Medicine" ], "extfieldsofstudy": [ "Medicine", "Chemistry" ] }
18861953
pes2o/s2orc
v3-fos-license
A graph theoretic encoding of Lucas sequences

Some well-known results of Prodinger and Tichy are that the number of independent sets in the $n$-vertex path graph is $F_{n+2}$, and that the number of independent sets in the $n$-vertex cycle graph is $L_n$. We generalize these results by introducing new classes of graphs whose independent set structures encode the Lucas sequences of both the first and second kind. We then use this class of graphs to provide new combinatorial interpretations of the terms of Dickson polynomials of the first and second kind.

Introduction and main results

For any graph $G$, we call a set $S$ of vertices of $G$ an independent set if no two vertices of $S$ are adjacent. We let $i(G)$ denote the total number of independent sets of $G$ and, for each $t \in \mathbb{N}$, we let $i_t(G)$ denote the number of independent sets of $G$ of size $t$; thus, $i(G) = \sum_{t \ge 0} i_t(G)$. The quantity $i(G)$ was first explicitly considered by Prodinger and Tichy in [5], who referred to it as the Fibonacci number of a graph. We present two of their main results as the following theorem. Here, $P_n$ denotes the $n$-vertex path graph, $C_n$ denotes the $n$-vertex cycle graph (where the 1-vertex cycle is taken to be a vertex with a loop, and the 2-vertex cycle is taken to be a single edge), and we adopt the common conventions $F_0 = 0$, $F_1 = 1$, $L_0 = 2$, and $L_1 = 1$.

Theorem 1.1 (Prodinger and Tichy [5]). For any $n \in \mathbb{N}$,
$$i(P_n) = F_{n+2}, \quad (1.1)$$
and
$$i(C_n) = L_n. \quad (1.2)$$

Our main result will be a generalization of Theorem 1.1. For this, we will define two new classes of graphs. Fix any $n, a, b \in \mathbb{N}$ with $a \ge b$. Create an $n$-vertex cycle with vertex set $\mathbb{Z}_n$; for each vertex of the cycle, create an $a$-vertex complete graph sharing with the cycle only this vertex. Then, for each $v \in \mathbb{Z}_n$, make vertex $v$ adjacent to $a - b$ additional vertices of the complete graph containing vertex $v + 1 \pmod{n}$, and denote this graph $C(n, a, b)$. For example, one such graph is $C(6, 5, 3)$. We refer to this class of graphs, over all valid $n, a, b \in \mathbb{N}$, as chainsaw graphs. When referring to a particular chainsaw graph $C(n, a, b)$, we call the $n$ vertices lying on the inner cycle its chain vertices, and we call the set of remaining vertices its blade vertices. This will serve as our generalization of $C_n$, as we will soon see. We generalize the path graph to a graph which we denote $P(n, a, b)$ by considering $C(n + 1, a, b)$ and removing one of the chain vertices (e.g., vertex 0) and all edges adjacent to it. We call these graphs broken chainsaws, and refer to the vertices similarly as chain and blade vertices. With these definitions in place, we now state our generalization of Theorem 1.1. As is common, we let $U_n(a, b)$ and $V_n(a, b)$ denote the Lucas sequences of the first and second kind, respectively. That is, we let $U_0(a, b) = 0$, $U_1(a, b) = 1$, and $U_n(a, b) = aU_{n-1}(a, b) - bU_{n-2}(a, b)$ for $n > 1$ (so that $U_n(1, -1)$ are the Fibonacci numbers); we let $V_0(a, b) = 2$, $V_1(a, b) = a$, and $V_n(a, b) = aV_{n-1}(a, b) - bV_{n-2}(a, b)$ for $n > 1$ (so that $V_n(1, -1)$ are the Lucas numbers).

Theorem 1.2. For any $n, a, b \in \mathbb{N}$ with $a \ge b$,
$$i(P(n, a, b)) = U_{n+2}(a, -b), \quad (1.3)$$
and
$$i(C(n, a, b)) = V_n(a, -b). \quad (1.4)$$

We prove this Theorem in Section 2 while discussing some relationships between Dickson polynomials and Lucas sequences and providing some graph-theoretic interpretations of these well-studied objects. We note that Theorem 1.1 is the special case of Theorem 1.2 when $a = b = 1$.

Relationships to Dickson polynomials and a proof of Theorem 1.2

In this section we examine the relationship between Dickson polynomials and Lucas sequences and discuss some results which will be crucial to proving Theorem 1.2.
In the process, we provide new graph-theoretic interpretations of Lucas sequences and Dickson polynomials. As is common, we use $D_n(X, Y)$ and $E_n(X, Y)$ to denote Dickson polynomials of the first and second kind, respectively. That is, we let
$$D_n(X, Y) = \sum_{t=0}^{\lfloor n/2 \rfloor} \frac{n}{n-t} \binom{n-t}{t} (-Y)^t X^{n-2t}, \quad (2.1)$$
and
$$E_n(X, Y) = \sum_{t=0}^{\lfloor n/2 \rfloor} \binom{n-t}{t} (-Y)^t X^{n-2t}. \quad (2.2)$$
We start with the following result, which is known in finite field theory. See, for example, [2], [3], or [4] for more on this result. For more information on Dickson polynomials in general, see [4].

Theorem 2.1. For any $n \in \mathbb{N}$ and $a, b \in \mathbb{Z}$,
$$D_n(a, b) = V_n(a, b), \quad (2.3)$$
and
$$E_n(a, b) = U_{n+1}(a, b). \quad (2.4)$$

We will prove Theorem 1.2 by showing that the $t$th term of (2.1) and the $t$th term of (2.2) can be graph-theoretically interpreted as the number of independent sets in the chainsaw graph $C(n, a, b)$ and the broken chainsaw graph $P(n, a, b)$, respectively, which contain exactly $t$ chain vertices. For this, we will need the following result, which is well known in graph theory, and is not difficult to prove. See, for example, [1].

Lemma 2.2. For any $n \in \mathbb{N}$ and $t \in \mathbb{N}_0$, we have
$$i_t(P_n) = \binom{n-t+1}{t}, \quad (2.5)$$
and we also have
$$i_t(C_n) = \frac{n}{n-t} \binom{n-t}{t}. \quad (2.6)$$

With this in place, we are now ready to proceed to the proof of Theorem 1.2.

Proof of Theorem 1.2. Fix $n, a, b \in \mathbb{N}$ so that $a \ge b$. As previously discussed, it follows from (2.2) and (2.5) that (1.3) holds if the number of independent sets in $P(n, a, b)$ which contain exactly $t$ chain vertices for $t \in \mathbb{N}_0$ is given by
$$i_t(P_n)\, b^t a^{n-2t+1}. \quad (2.7)$$
First, note that the number of ways to choose $t$ independent chain vertices in $P(n, a, b)$, by definition, is $i_t(P_n)$. Then, once $t$ independent chain vertices are chosen, there are $t$ sets of $b-1$ blade vertices and $n - 2t + 1$ sets of $a - 1$ blade vertices with which they share no adjacencies, so (2.7) holds. A similar argument shows that the number of independent sets in $C(n, a, b)$ which contain exactly $t$ chain vertices for $t \in \mathbb{N}_0$ is given by
$$i_t(C_n)\, b^t a^{n-2t}, \quad (2.8)$$
and thus, by (2.1) and (2.6), we have (1.4). A graph-theoretic interpretation of the Lucas sequences is now established by Theorem 1.2, and from its proof emerges a graph-theoretic interpretation of the terms of the Dickson polynomials.
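Because the chainsaw constructions and the identities above are purely finite and combinatorial, they are easy to spot-check by machine. The following Python sketch (written for illustration here, not taken from the paper) builds C(n, a, b) and P(n, a, b) exactly as defined in the introduction, counts independent sets by brute force, and compares the totals against V_n(a, -b) and U_{n+2}(a, -b); the vertex labelling and helper names are invented for the sketch, and the brute-force count is only feasible for small n, a and b.

```python
from itertools import combinations

def lucas_U(n, a, b):
    """Lucas sequence of the first kind: U_0 = 0, U_1 = 1, U_k = a*U_{k-1} - b*U_{k-2}."""
    u, v = 0, 1
    for _ in range(n):
        u, v = v, a * v - b * u
    return u

def lucas_V(n, a, b):
    """Lucas sequence of the second kind: V_0 = 2, V_1 = a, V_k = a*V_{k-1} - b*V_{k-2}."""
    u, v = 2, a
    for _ in range(n):
        u, v = v, a * v - b * u
    return u

def chainsaw(n, a, b):
    """
    Adjacency dict of C(n, a, b): n chain vertices on a cycle, each inside an a-vertex
    clique with a-1 blade vertices, and chain vertex v additionally adjacent to a-b
    blades of the clique containing chain vertex (v+1) mod n.
    """
    adj = {('chain', v): set() for v in range(n)}
    adj.update({('blade', v, j): set() for v in range(n) for j in range(a - 1)})

    def connect(x, y):
        adj[x].add(y)
        adj[y].add(x)

    for v in range(n):
        connect(('chain', v), ('chain', (v + 1) % n))            # cycle edge
        clique = [('chain', v)] + [('blade', v, j) for j in range(a - 1)]
        for x, y in combinations(clique, 2):                      # clique at vertex v
            connect(x, y)
        for j in range(a - b):                                    # extra adjacencies
            connect(('chain', v), ('blade', (v + 1) % n, j))
    return adj

def broken_chainsaw(n, a, b):
    """P(n, a, b): C(n+1, a, b) with chain vertex 0 and all its edges removed."""
    adj = chainsaw(n + 1, a, b)
    adj.pop(('chain', 0))
    for nbrs in adj.values():
        nbrs.discard(('chain', 0))
    return adj

def count_independent_sets(adj):
    """Brute-force count of independent sets, including the empty set."""
    verts = list(adj)
    return sum(
        all(y not in adj[x] for x, y in combinations(subset, 2))
        for r in range(len(verts) + 1)
        for subset in combinations(verts, r)
    )

if __name__ == "__main__":
    for n, a, b in [(3, 2, 1), (4, 3, 2), (5, 2, 2)]:
        assert count_independent_sets(chainsaw(n, a, b)) == lucas_V(n, a, -b)
        assert count_independent_sets(broken_chainsaw(n, a, b)) == lucas_U(n + 2, a, -b)
    print("Brute-force counts agree with Theorem 1.2 for the sampled (n, a, b).")
```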
2014-12-31T01:28:12.000Z
2014-12-31T00:00:00.000
{ "year": 2014, "sha1": "333d1b74da76d7a1f918c56c86bce4872abf6324", "oa_license": null, "oa_url": null, "oa_status": null, "pdf_src": "Arxiv", "pdf_hash": "333d1b74da76d7a1f918c56c86bce4872abf6324", "s2fieldsofstudy": [ "Mathematics" ], "extfieldsofstudy": [ "Mathematics" ] }
6452673
pes2o/s2orc
v3-fos-license
Uterine fibroids – what's new?

Uterine fibroids are the commonest benign tumours of women and affect all races with a cumulative lifetime risk of around 70%. Despite their high prevalence and the heavy economic burden of treatment, fibroids have received remarkably little attention compared to common female malignant tumours. This article reviews recent progress in understanding the biological nature of fibroids, their life cycle and their molecular genetic origins. Recent progress in surgical and interventional management is briefly reviewed, and medical management options, including treatment with selective progesterone receptor modulators, are also discussed.

Introduction

Uterine fibroids (leiomyomas) are benign monoclonal tumours of smooth muscle, taking origin in the myometrium. They are the commonest benign tumours of the uterus, and are typically round, well-circumscribed masses. They are usually multiple, and can range in size from a few millimetres to massive growths of 20 cm diameter and more. The aetiology is largely unknown, but they are oestrogen- and progesterone-dependent tumours, very rare before menarche, common in reproductive life, and frequently regress in size after menopause 1 . By age 50, it is estimated that 70% of women will have one or more uterine fibroids, with around 30% of patients symptomatic and requesting treatment. Women of all races are affected, but fibroids are commoner, and develop at an earlier age, in women of African origin 2 . By age 35 years, 60% of African-American women will have fibroids, compared to 40% in Caucasian women of the same age. Other risk factors include age (increasing incidence with age up to the menopause, then usually decreasing in size), nulliparity, genetic factors, early menarche, caffeine, alcohol, obesity and hypertension 3 . Symptoms of fibroids are abnormal uterine bleeding, pelvic pain, dyspareunia, obstructive effects on bladder or rectum, and infertility. Fibroid size does not necessarily determine the severity of clinical symptoms. In a large online survey conducted in eight countries with at least 2,500 participants in each country (4,000 in the USA), 59.8% of women with a diagnosis of uterine fibroids self-reported heavy and prolonged vaginal bleeding compared to 37.4% of those without fibroids 4 . Pelvic pain at various times in the menstrual cycle and during sexual intercourse was also significantly increased in fibroid patients. Excessive vaginal blood loss can lead to severe anaemia which can even be life-threatening, yet some patients do not recognise the severity of the problem, may consider their blood loss to be normal, and do not seek help 5 . Uterine fibroids place a large economic burden both on the women who suffer from them, and on the health systems and societies in which they live. Symptoms may lead to significant loss of working time, and in a large survey 24% of women perceived symptoms as a contributing factor in failure to achieve career aspirations 6 . Direct surgical costs alone are high: in the USA, 200,000 hysterectomies are performed annually for fibroids 7 , and when medications, inpatient and outpatient hospital attendances are added, the annual costs are estimated at between 4 and 9 billion US dollars 8 . These costs do not include lost work time, and other consequences such as spontaneous abortions, pre-term delivery and Caesarean sections.
Imaging techniques are the mainstay of diagnosis, with transabdominal or transvaginal ultrasound the most commonly used modality, as it is widely available, inexpensive and usually definitive in diagnosis. MRI may be used to delineate the number, size and location of fibroids in certain cases, and hysteroscopy may be useful to distinguish between subendometrial fibroids and large endometrial polyps. Neither imaging nor hysteroscopic methods are currently reliable in distinguishing benign fibroids from sarcomatous uterine tumours. Surgical treatment takes the form of hysterectomy or myomectomy, the choice depending on the size, number and extent of fibroids, and on the patient's wishes with regard to fertility. Hysteroscopic or laparoscopic myomectomy are considered safe and effective options, but laparoscopic hysterectomy is usually still the standard surgical option in women who do not wish to retain fertility 3 . It should be noted, however, that hysterectomy is not free from short term and long term sequelae -1 in 30 women suffers a major adverse event, and mortality may be between 0.4-1.1 per 1000 operations 9 . Non-surgical interventional treatments also include uterine artery embolization (UAE), and highfrequency MR-guided focussed ultrasound surgery. Until recently, medical management of fibroids was largely confined to symptomatic treatment of pain and bleeding, and the use of gonadotropin-releasing hormone (GnRH) analogues. The latter lead to a hypo-oestrogenic state, fibroids undergo shrinkage, and blood loss and anaemia can be corrected, but duration of treatment is limited by side effects of menopausal symptoms and loss of bone mineral density. More recently, a newer group of agents, the selective progesterone receptor modulators (SPRMs), have shown considerable effectiveness in the medical management of fibroid patients 10 . As well as their effects on fibroid shrinkage, in most patients SPRM treatment leads to rapid control of heavy menstrual bleeding, and correction of anaemia. Oestrogen levels remain at around mid-follicular levels, and, as a consequence, menopausal symptoms and bone loss are not encountered regularly. Histopathology Fibroids are correctly known as leiomyomas, being benign tumours of smooth muscle, taking origin in the myometrium. As the fibroid grows, the cells differentiate into four different cell types that can be reliably characterised: smooth muscle cells, vascular smooth muscle cells and two different subpopulations of fibroblasts. It has been shown that all four cell types derive from a single clonal origin 11 . Macroscopically, the lesions are usually multiple, pale, firm and rubbery, with a whorled cut surface, well demarcated from adjacent myometrium. There may be areas of mucoid change, haemorrhage, or necrosis and calcification visible on gross inspection. Microscopically, they are composed of spindle cells arranged in fascicles that interweave to form a circumscribed lesion. Mitotic activity may be observed, but there are usually less than 5 mitoses per 10 high power fields (HPF), and no atypical forms. Mitotic activity is significantly higher in the secretory phase of the cycle 12 , an observation that suggests importance of progesterone and its receptor PR in fibroid growth. There is a great degree of variability in the amount of extracellular matrix and collagen in fibroids, leading to considerable heterogeneity in histological patterns. 
Degenerative changes may be superimposed, including hyaline and myxoid change, hydropic degeneration, necrosis and calcification. Notwithstanding this variability in the usual type of leiomyoma, there are several distinct histological variants that may cause some diagnostic difficulty to the histopathologist. Cellular leiomyoma is significantly more cellular than the usual type, but shows no nuclear atypia, a low mitotic index (4 or less mitoses per 10 HPF), and no necrosis. Leiomyoma with bizarre nuclei (previously termed atypical or symplastic leiomyoma) characteristically shows highly pleomorphic extremely bizarre nuclei, often in a background of more typical leiomyoma cells. Mitotic activity is usually low, but karyorrhexis may mimic atypical mitoses, and the histopathologist must be cautious not to diagnose sarcoma, as these are benign lesions. Mitotically active leiomyoma shows a high mitotic index (>10 mitoses per 10 HPF), but no other concerning features, with an absence of nuclear atypia and necrosis. These are likely endocrine related, as they are seen in the reproductive age group, and have been reportedly associated with hormone therapy. Dissecting ('cotyledenoid') leiomyoma is a rare variant which shows locally invasive growth sometimes extending outwith the uterus, often with a prominent degree of hydropic change. Diffuse leiomyomatosis is a rare condition in which multitudes of benign-appearing leiomyomatous nodules blend with uterine smooth muscle, and may extend beyond the uterus into the peritoneal cavity forming tumour-like nodules, grossly resembling disseminated gynaecological cancers. The process is benign, and surgical removal is curative. An uncommon but troublesome group of tumours show histological appearances that may arouse concern about possible leiomyosarcoma, but which fall short of definitively malignant lesions. Described as atypical smooth muscle neoplasms, or smooth muscle tumours of uncertain malignant potential (STUMP), such lesions show an intermediate level of mitotic activity (5 -10 mitoses per 10 HPF), variable necrosis or myxoid change, and a degree of nuclear atypia, sometimes with epithelioid cell morphology. The prognosis of such tumours is unpredictable, but recurrence occurs in approximately 10 -15% of cases. A promising approach to prediction of outcome in such lesions has recently been described 13 in which comparative genomic hybridisation was used to clearly stratify a series of uterine STUMP into two separate prognostic groups: one with prognosis similar to leiomyoma, the other with outcome similar to low grade leiomyosarcoma. Leiomyosarcoma is a frankly malignant neoplasm of smooth muscle origin. Whilst most appear to occur de novo from myometrium, there is evidence that up to 20 -30% may arise from pre-existing benign smooth muscle tumours (see below). This must be a rare event, considering how common benign fibroids are and the rarity of leiomyosarcoma. Life Cycle of Fibroids A careful morphological review 14 led to the hypothesis that fibroid formation may represent an abnormal response to injury. This proposes that normal myometrium may be subject to repeated injury through vasoconstriction and hypoxia during menstruation, and that development of fibroids may represent a reaction to that injury. There are intriguing parallels with processes of wound healing, keloid formation and even the reaction to injury occurring in blood vessels in the formation of atherosclerosis. 
Additional evidence from Ciarmela's group 15 suggests the action of an inflammatory trigger to excessive production of extracellular matrix by activated myofibroblastic cells in fibroids. Uterine fibroids have a self-limited life cycle of proliferative growth, synthesis of collagen, increasing deposition of extracellular matrix, decreasing vascularity, and ultimately senescence and involution through ischaemic degeneration and inanition 14 . Four phases in the life cycle of fibroids have been described, defined somewhat arbitrarily and representing a continuous process, progressing through phenotypic transformation of the proliferating contractile myocyte and evolutionary selection of a single clone. There is increasing deposition of collagen, and as the process of fibroid growth and development evolves, the phenotype of the clonally proliferating myocytes changes from contractile to collagen synthesising, with significant elaboration of extracellular ground substance. Myocytes become separated from vessels by increased amounts of extracellular matrix, and angiogenesis does not keep up with the increasing size of the fibroid. Ischaemia eventually occurs, and there is cessation of myocyte proliferation and cellular atrophy. In the end stage, there is abundant hyaline matrix enclosing islands of atrophic myocytes, and there may be necrosis and calcification. Processes of cell death, resorption and reclamation now occur, termed 'inanosis' by the authors. These differ from necrosis and apoptosis in their morphology, in their long, protracted durations, and in the absence of any inflammatory or phagocytic response to cell death. Genetic and Molecular Aspects of Aetiology Several lines of evidence point to a significant genetic predisposition to development of uterine fibroids. Women with first degree relatives having fibroids have an increased incidence 16 , and monozygotic twins have higher concordance for fibroids than dizygotic 17 . Up to around 50% of uterine leiomyomas show cytogenetic alterations, including trisomy of chromosome 12, deletions in the long arm of chromosome 7, rearrangements of 12q15 and mutations in MED12 and HMGA2 genes. There is evidence for the existence of a population of cells with stem or progenitor cell characteristics, which can be isolated from normal myometrium and from leiomyoma tissues 18 . It is hypothesised that activating mutations occurring in this cell type give rise to the clonal population of myocytes making up the leiomyoma. Intriguingly, a recent study using whole genome sequencing of uterine leiomyomas showed that multiple fibroid nodules within the uterus can be clonally related, indicating a single cell origin of multiple leiomyomas 19 . This study also reported the occurrence in fibroids of complex chromosomal rearrangements resembling chromothripsis, apparently occurring as a single chromosomal shattering event with up to 20 or more double-stranded breaks, followed by random reassembly. The authors suggest that tumour formation occurs when reassembly leads to the juxtaposition and activation of tumour-promoting genes. Epigenetic mechanisms are also likely to have a key role in fibroid formation. Several tumour suppressor genes have been shown to be abnormally hypermethylated in fibroids compared to adjacent myometrium 20 , as are collagen-related genes and a subset of ER response genes 21 . 
MED12

Exome sequencing of a small series of leiomyoma tissues identified a high frequency of somatic mutations in the MED12 (also known as mediator complex subunit 12) gene 22, and a subsequent larger survey of 225 fibroids from 80 patients identified MED12 mutations in 70% 23, making MED12 the most frequently altered gene in leiomyomas. MED12 is an X-linked gene that encodes a subunit of the mediator complex that is central to regulation of transcription, and is a crucial element in canonical WNT signalling, known to interact with β-catenin. Most mutations occur in a highly conserved area of exon 2 of the gene, with around 50% occurring as mis-sense mutations of codon 44, and it has been suggested that these may represent 'gain of function' alleles 24. Less frequently, mutations occur at the intron 1/exon 2 boundary, and even more rarely in exon 1. There is an inverse correlation between presence of MED12 mutation and leiomyoma size, suggesting that lesions of differing sizes may have different aetiological pathways. MED12 mutations seem relatively specific for leiomyoma, and also occur in around 10-20% of leiomyosarcomas 25,26. However, the same type of mutation has also been found in chronic lymphocytic leukaemia 27, and malignant phyllodes tumour of the breast 28.

HMGA2

Cytogenetically visible alterations in 12q14-15 and 6p21 have been observed in leiomyomas, and rearrangements at these loci map to genes encoding high mobility group proteins HMGA2 and HMGA1, respectively, leading to their overexpression. Overexpression of HMGA2 was found to be the second most frequent genetic alteration in leiomyomas, being present in 7.5-10% 26. Overexpression was exclusively found in leiomyomas that did not have mutation of MED12, indicating the likelihood of two separate and mutually exclusive pathways of fibroid development 29. Each group has differing global gene expression profiles, and further evidence indicates that leiomyomas with alterations of MED12 and HMGA2 show different behaviours. In a study of 289 fibroids from 120 patients, it was found that over 85% of MED12-mutated lesions occurred as multiple uterine nodules, whereas 70% of HMGA2-mutated lesions were single nodules 30. Rarely, uterine leiomyomas may be part of the hereditary leiomyomatosis and renal cell cancer syndrome caused by heterozygous germline mutations in the fumarate hydratase (FH) gene. The disorder has an autosomal dominant pattern of inheritance, and is clinically characterised by the occurrence of multiple (10 to over 100) cutaneous leiomyomas, often painful, occurring in a segmental pattern on trunk and extremities. Leiomyomas in this syndrome present a unique global gene expression profile, without overlap with those associated with MED12 or HMGA2 mutations 23.

Advances in surgical and interventional management of fibroids

For many years, hysterectomy has been the treatment of choice for uterine fibroids, and is still the most commonly used treatment. Laparoscopic hysterectomy rates may exceed 90% in some departments, but other surgical and interventional treatments are increasingly available 3. The treatment selected often depends on the patient's age, her willingness to undergo what is perceived to be a major surgical procedure, and her desires for future fertility or complete amenorrhoea. Guidelines exist in the literature, but there are few clinical trials comparing different treatments.
Hysteroscopic myomectomy is suitable for fibroids of certain sizes and locations, being most suited to smaller pedunculated submucous lesions which can be removed by transection of the base with a resectoscopic loop. Some smaller intramural fibroids may be removed hysteroscopically in one-or two-step procedures that involve slicing the lesion into chips, or use of an intrauterine morcellator. Laparoscopic myomectomy is technically more challenging than open laparotomy, but reproductive outcomes are similar, post-operative morbidity is much less and recovery times much shorter. Fibroids are usually removed with the aid of a power morcellator. Morcellation has the drawback of potential peritoneal dissemination of unrecognised uterine sarcoma, and although the risk may have been overemphasised, it remains a theoretical concern. While recognising the difficulty of histopathological diagnosis in specimens obtained after power morcellation, a large series of 10,731 laparoscopic hysterectomies 31 found the incidence of malignancy in morcellated surgical specimens to be 0.06% (six cases). Non-surgical interventional treatments include UAE, an effective and safe alternative to hysterectomy in women in whom retention of fertility is not a priority, although there is little evidence of any poorer fertility outcome with UAE compared to myomectomy 32 . Although this treatment has similar outcomes to surgery in terms of patient satisfaction, risk of major complications and fertility outcome, there is a higher rate of minor complications and subsequent surgical intervention within two to five years, with between 15 and 32% of women requiring further surgery 33 . High frequency magnetic resonance guided focussed ultrasound is a technique whereby ultrasonic energy is directed with MR guidance to within the fibroid, where thermal ablation by coagulative tissue necrosis occurs. The method is not yet widely used, as it is currently expensive, is suitable for only a minority of fibroid patients, and has unknown implications for future fertility 34 . Medical management of fibroids First line management of uterine fibroids usually involves symptomatic treatment of heavy menstrual bleeding, with use of inexpensive non-steroidal anti-inflammatory drugs (NSAIDs), antifibrinolytic agents including tranexamic acid, or contraceptive steroids including the levonorgestrel intrauterine system (Mirena), the latter only suitable for patients in whom the uterine cavity is not distorted by fibroids 35 . Although bleeding symptoms may be alleviated, there is no evidence of fibroid shrinkage, indeed there is reason to believe that progestogen therapy may induce proliferation of leiomyoma cells. Shrinkage of fibroids can, however, be achieved by treatment with GnRH agonists, or with SPRMs. Continuous administration of a GnRH agonist leads to downregulation of pituitary GnRH receptors, with consequent decreased production of FSH and LH, and subsequently of ovarian steroids. Treatment for three to six months has been shown to result in decreased uterine and fibroid size, and duration of hospital stay after surgery 36 . However, the hypo-oestrogenic state induced by GnRH agonist treatment results in menopausal side effects, including loss of bone mineral density, that limit treatment duration usually to six months or less. While there is some evidence that add-back therapy can offer some advantages in these situations 37 , unwanted side effects of therapy with GnRH analogues remain a problem. 
It has also been observed that rapid regrowth of fibroids occurs after cessation of GnRH treatment. Selective progesterone receptor modulators in management The advent of SPRMs has opened a new and promising avenue of treatment for many patients. The effectiveness of these agents is based on the premise that fibroids show progesterone dependence, and blockade or modulation of progesterone activity at PR results in cessation of proliferation and induction of apoptosis in the fibroid, with consequent shrinkage. SPRMs also rapidly induce amenorrhoea in most patients, providing additional welcome symptomatic relief of bleeding. The mechanism whereby amenorrhoea is induced remains unknown, but it is believed to be a direct effect on the endometrium. Clinical trials of SPRMs in treatment of fibroids have been carried out using a variety of agents, including mifepristone 38 , telapristone acetate 39 , asoprisnil 40 , ulipristal acetate 41 , and vilaprisan 42 . SPRMs induce characteristic morphological changes in endometrium that have not been observed with other pharmaceutical agents, and have been designated PAEC (progesterone receptor modulator-associated endometrial changes 43 ). Uninterrupted treatment with SPRMs for six months or more induces endometrial thickening, and to avoid associated complications, successful clinical trials have utilised an interrupted regime of three months on treatment followed by one month off, with menstrual shedding of the endometrium. In 2012, following large Phase III clinical trials, ulipristal acetate was the first SPRM to be granted a licence from the European Medicines Agency for use in the pre-surgical treatment of fibroids, and it is now being used in many countries worldwide. Early Phase III trials showed that one course of 5 mg ulipristal acetate orally for 12 weeks led to a mean 20-35% reduction in fibroid volume, and that the reduction in volume was maintained for up to 6 months following end of treatment 10,41 . Treatment was also associated with rapid control of uterine bleeding in over 80% of patients. Subsequent trials showed that further fibroid shrinkage occurred with repeated courses, with median reduction in fibroid volume of 71.8% after 4 courses 44 . Histopathological assessment of fibroids resected after ulipristal acetate treatment has shown induction of apoptosis and remodelling of extracellular matrix in the lesions 45 . Subsequent studies established endometrial safety of up to eight courses, using a repeated interrupted regime of three months treatment followed by one month off treatment with menstrual shedding 46 . As oestrogen levels are not suppressed on treatment, menopausal symptoms and bone mineral loss are not significant clinical issues. Ulipristal acetate is now licensed for repeated 12 week courses, but must be prescribed with a one month break between courses, to avoid adverse endometrial effects. A retrospective analysis of 21 patients who enrolled in two of the clinical trials of ulipristal acetate, who had myomectomies and wished pregnancy after treatment, reported successful pregnancies in 15 patients (71%), with birth of 13 healthy babies and 6 early miscarriages 47 . What Does the Future Hold? After many years of relative neglect, the pathogenesis of uterine fibroids is now receiving more attention, and we are beginning to gain a foothold in understanding the molecular genesis of these very common and troublesome tumours. 
These are the first necessary steps in the journey towards effective non-surgical treatment and perhaps even prevention. Excellent progress has been made in the laparoscopic surgical treatment of fibroids and this will continue, perhaps with robotic and other developments. However, the long-term goal must be to develop effective medical treatments, and the advent of SPRMs opens up the prospect of safe therapy without the troublesome side effects of previous medical treatments, with the potential to greatly improve the quality of life of huge numbers of women around the world.

Competing interests

The author has current consultancies with Bayer, PregLem, Gedeon Richter and HRA Pharma.

Grant information

The author(s) declared that no grants were involved in supporting this work.
Net exchanges of methane and carbon dioxide on the Qinghai-Tibetan Plateau from 1979 to 2100

Methane (CH4) is a potent greenhouse gas (GHG) that affects the global climate system. Knowledge about land-atmospheric CH4 exchanges on the Qinghai-Tibetan Plateau (QTP) is insufficient. Using a coupled biogeochemistry model, this study analyzes the net exchanges of CH4 and CO2 over the QTP for the period of 1979-2100. Our simulations show that the region currently acts as a net CH4 source with 0.95 Tg CH4 y−1 emissions and 0.19 Tg CH4 y−1 soil uptake, and a photosynthesis C sink of 14.1 Tg C y−1. By accounting for the net CH4 emission and the net CO2 sequestration since 1979, the region was found to be initially a warming source until the 2010s, with a positive instantaneous radiative forcing peak in the 1990s. In response to future climate change projected by multiple global climate models (GCMs) under four representative concentration pathway (RCP) scenarios, the regional source of CH4 to the atmosphere will increase by 15-77% at the end of this century. Net ecosystem production (NEP) will continually increase from the near neutral state to around 40 Tg C y−1 under all RCPs except RCP8.5. Spatially, CH4 emission or uptake will be noticeably enhanced under all RCPs over most of the QTP, while statistically significant NEP changes over a large scale will only appear under the RCP4.5 and RCP6.0 scenarios. The cumulative GHG fluxes since 1979 will exert a slight warming effect on the climate system until the 2030s, and will switch to a cooling effect thereafter. Overall, the total radiative forcing at the end of the 21st century is −0.25 to −0.35 W m−2, depending on the RCP scenario. Our study highlights the importance of accounting for both CH4 and CO2 in quantifying the regional GHG budget.

Introduction

Methane (CH4), second only to carbon dioxide (CO2), is an important greenhouse gas (GHG) that is responsible for about 20% of the global warming induced by human activity since preindustrial times (IPCC, 2013). Because CH4 has a much higher global warming potential (GWP) than CO2 in a time horizon of 100 years, and actively interacts with aerosols and ozone (Shindell et al 2009), even small changes in atmospheric CH4 concentration will have profound impacts on the future climate (Bridgham et al 2013). Quantifying regional and global methane budgets has therefore become a research priority in recent years (Kirschke et al 2013). Among all natural sources of CH4, global wetlands are the single largest source, responsible for emissions of 142-284 Tg CH4 per year (Kirschke et al 2013). CH4 emissions from wetlands are the net results of CH4 production under anaerobic conditions, and oxidation by oxygen and transport through the soil and water profile (Olefeldt et al 2013). Over 90% of the CH4 emitted to the atmosphere is oxidized by chemical reactions in the troposphere (Kirschke et al 2013), while soil sinks through methanotrophy are indispensable as well (Spahni et al 2011). The relative strength of sources and sinks determines the global net CH4 emission on various spatial or temporal scales. CH4 emissions from natural wetlands on the Qinghai-Tibetan Plateau (QTP) have drawn increasing attention. This world's highest plateau has been a large reservoir of soil carbon for the past thousands of years because of the slow soil decomposition rate and relatively favorable photosynthetic conditions compared with high-latitude cold ecosystems (Kato et al 2013).
The high carbon storage is distributed over a sporadic landscape that accommodates more than half of China's natural wetlands, and hence the QTP is responsible for 63.5% of CH4 emissions from natural wetlands in China. In recent decades, natural wetlands on the QTP have been expanding (Niu et al 2012). With more available soil carbon substrate due to higher plant production and litter fall input (Zhuang et al 2010, Piao et al 2012), CH4 emissions from this region have been accelerating and are projected to increase under future climate conditions. Recently, several site-level studies have advanced our understanding of CH4 emissions from wetlands on the QTP. Major emissions occur during the growing season, and net effluxes can vary from −0.81 to 90 μg m−2 h−1 at different locations (Jin et al 1999, Hirota et al 2004, Kato et al 2011). Furthermore, flux tower observations indicate that non-growing season CH4 emissions in an alpine wetland can contribute to around 45% of the annual emissions (Song et al 2015). These high variations in space and time are the results of complex environmental control over CH4 emissions. Chen et al (2009) found that water table depth is a key factor that controls the spatial variations in CH4 emissions in an open fen on the eastern edge of the QTP. Soil temperature is also an important regulator of CH4 emissions, since the QTP is underlain by extensive permafrost (Chen et al 2010, Yu et al 2013). In addition, plant succession from cyperaceous to gramineous species during wetland degradation will result in a net reduction in plant-aided transport of CH4 to the atmosphere (Hirota et al 2004, Chen et al 2010). In contrast to the noticeable progress in field measurements and experiments, there is still a lack of systematic and quantitative understanding of the regional CH4 budget for the QTP. The estimated wetland CH4 emissions of 0.56-2.47 Tg per year (Jin et al 1999, Ding and Cai 2007, Zhang and Jiang 2014) should be viewed as preliminary results, because they are calculated using a book-keeping approach, i.e. the estimate is a product of the wetlands area, the flux measurements at a few sites, and the approximate number of frozen-free days. In addition, the role of the soil consumption of atmospheric CH4 in most of the alpine steppe and meadow zones is highly uncertain (Kato et al 2011, Wei et al 2012), but should not be neglected when quantifying the regional net methane exchange (NME). More importantly, it is necessary to consider both CH4 and CO2 dynamics in quantifying the regional GHG budget of the QTP. Here, we quantify the GHG budget of CH4 and CO2 for both historical data and for the 21st century on the QTP using a coupled biogeochemistry model, the terrestrial ecosystem model (TEM) (Zhuang et al 2004). The model is calibrated against field observations using a global optimization method and applied for the period 1979-2100. The carbon-based GHG effect of CH4 and CO2, represented by the NME and net ecosystem production (NEP, calculated as the difference between the gross primary production and ecosystem respiration), is examined using the radiative forcing impact (Frolking et al 2006).
Our research objectives are to: (i) quantify sources and sinks with respect to CH 4 and CO 2 in the historical period of 1979-2011; (ii) explore how the NME and NEP will respond to future climate changes; (iii) attribute the relative contribution of CH 4 and CO 2 to the regional carbon budget and radiative forcing, and (iv) identify research priorities to reduce quantification uncertainties of the GHG budget in the region. Model framework The TEM is a process-based ecosystem model that simulates the biogeochemical cycles of C and N between terrestrial ecosystems and the atmosphere (Zhuang et al 2004(Zhuang et al , 2010. Within the TEM, two submodels, namely the soil thermal model (STM) and the updated hydrological model (HM), are designed to simulate the soil thermal profile and hydrological processes with a daily time step, respectively. The STM is well documented for arctic regions and the Tibetan Plateau (e.g. Zhuang et al 2010, Jin et al 2013. The HM, inherited from the water balance model (WBM) (Vörösmarty et al 1989), has six layers for upland soils and a single box for wetland soils (Zhuang et al 2004). We assume the maximum wetland water table depth to be 30 cm following Zhuang et al (2004), so that soils are always saturated below 30 cm. These physical variables then drive the carbon/nitrogen dynamics module (CNDM), which uses spatially explicit information on climate, elevation, soil, vegetation and water availability, as well as soil-and vegetationspecific parameters, to estimate the carbon and nitrogen fluxes and pool sizes of terrestrial ecosystems (Zhuang et al 2004(Zhuang et al , 2010. The methane dynamics module (MDM) was first coupled with the TEM by Zhuang et al (2004), to explicitly simulate the processes of CH 4 production (methanogenesis), CH 4 oxidation (methanotrophy) and CH 4 transport between the soil and the atmosphere. During the simulation, the water table depth estimated by the HM, the soil temperature profile estimated by the STM and the labile soil organic carbon estimated by the CNDM, as well as other soil and meteorological information, were fed into the MDM so that the whole TEM and MDM are fully coupled (figure S1). CH 4 production, a strictly anaerobic process, is modeled as a function of the labile carbon substrate, temperature, pH, and the redox potential in saturated soils. CH 4 oxidation, occurring in the unsaturated zone, depends on the soil CH 4 and O 2 concentration, temperature, moisture and the redox potential. The net CH 4 fluxes at the soil/water-atmosphere boundary are summations of different transport pathways (i.e. diffusion, ebullition and plantaided emission), with a positive value indicating a CH 4 source to the atmosphere and a negative value for a CH 4 sink (Zhuang et al 2004). Field measurements and model calibration Data used to calibrate the TEM were measured at the Luanhaizi wetland on the northeastern Tibetan Plateau (37°35′ N, 101°20′ E) from July 2011 to December 2013. This wetland is classified as an alpine Marsh, and has accumulated rich soil carbon of 24.5% on average for the top 30 cm soil layer due to slow decomposition. An eddy covariance measurement system was installed at a height of 2 m above the wetland surface, which recorded CH 4 fluxes every half hour (Yu et al 2013). A micro-meteorology station was set up adjacent to the eddy covariance tower to measure major environmental variables at half hour frequency, including air and soil temperature, total precipitation, downward shortwave irradiance and relative humidity. 
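The production and oxidation formulations outlined in the model framework section are built from a base rate modulated by multiplicative response scalars for substrate availability, temperature, pH and redox potential. Before turning to how the field observations were used in calibration, the following Python sketch makes that multiplicative-scalar structure concrete. It is a deliberately simplified illustration, not the MDM equations of Zhuang et al (2004): the functional forms, parameter names and default values (for example the Q10 of 2.0 and the −200 mV redox threshold) are placeholders chosen for readability.

```python
import math

def q10_scalar(temp_c, q10=2.0, ref_temp_c=10.0):
    """Temperature response as a simple Q10 function (illustrative form only)."""
    return q10 ** ((temp_c - ref_temp_c) / 10.0)

def ch4_production(labile_c, temp_c, ph, redox_mv,
                   max_rate=5.0, half_sat_c=50.0, ph_opt=6.5, ph_width=2.0):
    """Potential CH4 production in the saturated zone (e.g. mg CH4 m-2 day-1),
    expressed as a maximum rate scaled by substrate, temperature, pH and redox factors.
    All parameter values are placeholders, not calibrated MDM values."""
    f_substrate = labile_c / (half_sat_c + labile_c)      # Michaelis-Menten-style limitation
    f_temp = q10_scalar(temp_c)
    f_ph = math.exp(-((ph - ph_opt) / ph_width) ** 2)     # bell-shaped pH response
    f_redox = 1.0 if redox_mv < -200.0 else 0.0           # production only under anoxic conditions
    return max_rate * f_substrate * f_temp * f_ph * f_redox

def ch4_oxidation(ch4_conc, o2_conc, temp_c,
                  max_rate=3.0, k_ch4=5.0, k_o2=2.0):
    """CH4 oxidation in the unsaturated zone, limited by CH4 and O2 availability."""
    f_ch4 = ch4_conc / (k_ch4 + ch4_conc)
    f_o2 = o2_conc / (k_o2 + o2_conc)
    return max_rate * f_ch4 * f_o2 * q10_scalar(temp_c)
```

A structure of this kind is also what makes the calibration behaviour discussed later relevant: different parameter pairs can compensate for one another across scalars, which is exactly the identifiability issue raised in the discussion of MDM convergence.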
We parameterized the STM using the measured soil temperature at a 5 cm depth. Major parameters for the MDM were calibrated so that the simulations match the observed CH4 fluxes. Due to a lack of high quality measurements, the water table depth was calibrated indirectly by fitting the final CH4 fluxes, and compared to the observed precipitation as a reference. Parameters for the CNDM were obtained from our previous modeling studies on the Tibetan Plateau (Zhuang et al 2010). The mathematical structure of the TEM is fixed and can be expressed as

$$\hat{Y} = f(\theta, X) + \varepsilon \qquad (1)$$

where f is a conceptual function that represents all processes within the TEM, Ŷ = (ŷ1, ŷ2, ..., ŷn) is a vector of the model outputs (e.g. time series of daily soil temperature or CH4 fluxes), θ is the parameter set, X is the model input data, and ε = (ε1, ε2, ..., εn) are independently and identically distributed (i.i.d.) errors with zero mean and constant variance. The goal of model calibration with classical methods is thus to find a parameter set θ such that the predefined statistics of ε can be minimized. In contrast, Bayesian theory treats θ as random variables having a joint posterior probability density function (pdf). The posterior pdf of θ can be evolved from the prior distribution with observations Y such that

$$p(\theta \mid Y) \propto L(\theta \mid Y)\, p(\theta) \qquad (2)$$

where L(θ | Y) is the likelihood function. Assuming a non-informative prior in the form of p(θ, σ) ∝ σ−1 and residuals which are i.i.d. normal (Vrugt et al 2003), the likelihood of a specific parameter set θ′ can be computed as

$$L(\theta' \mid Y) \propto \sigma^{-n} \exp\left[-\frac{1}{2\sigma^{2}} \sum_{t=1}^{n} \left(y_{t} - \hat{y}_{t}(\theta')\right)^{2}\right] \qquad (3)$$

The influence of σ can be integrated out when L(θ′ | Y) is plugged into equation (2), so that the posterior pdf of θ′ is

$$p(\theta' \mid Y) \propto \left[\sum_{t=1}^{n} \left(y_{t} - \hat{y}_{t}(\theta')\right)^{2}\right]^{-n/2} \qquad (4)$$

For complex nonlinear system models like the TEM, however, it is impossible to obtain an explicit expression for p(θ′ | Y), making analytical optimization out of the question (Vrugt et al 2003). Alternatively, Markov chain Monte Carlo (MCMC) methods are well suited to solving these problems. In this study, we implemented an adaptive MCMC sampler, the shuffled complex evolution Metropolis algorithm (SCEM-UA), for global optimization and parameter uncertainty assessment. While the theoretical bases and computational implementation of the SCEM-UA can be found in Vrugt et al (2003), we outline the key steps with an example of STM optimization below:

(i) Initialize parameter space. Select parameters to be calibrated, and assign a prior range to each parameter (table 1).
(ii) Generate sample. Randomly sample s sets of parameter combinations {θ1, θ2, ..., θs} from the prior distribution, where each θi is a vector of eight parameters for the STM in table 1.
(iii) Rank sample points. Compute the posterior density of each θi using equation (4), and sort the s points in the 8-dimensional parameter space in decreasing order of posterior density. Store the s points and their corresponding posterior densities in array D with dimensions of s × 9.
(iv) Initialize Markov chains. Assign the first k elements of D as the starting locations of k sequences.
(v) Partition D into complexes. Partition the s rows of D into k complexes, each containing s/k points.
(vi) Evolve each complex and sequence. Use the sequence evolution Metropolis algorithm to evolve each sequence and complex (Vrugt et al 2003).
(vii) Shuffle complexes. Combine the points in all complexes back into D in order of decreasing posterior density.
(viii) Apply stop rule. Stop when convergence criteria are satisfied; otherwise, go to step (v).

A maximum of 100 000 runs for the STM and 200 000 runs for the MDM was imposed to override the stopping rule for computational considerations. We selected s = 500 and k = 10 following Vrugt et al (2003) for complex model optimization.
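Since the whole calibration hinges on being able to evaluate the σ-integrated posterior of equation (4) for an arbitrary parameter vector, the following Python sketch shows what that evaluation, and the ranking of step (iii), could look like. This is not the parallel R/Rmpi implementation described below (which is not shown in the paper); `run_model` is a hypothetical stand-in for a full TEM simulation treated as a black box, and `observations` is assumed to be a NumPy array of the corresponding measurements.

```python
import numpy as np

def log_posterior(params, run_model, observations):
    """Log of the sigma-integrated posterior of equation (4):
    p(theta | Y) is proportional to (sum of squared residuals)^(-n/2)."""
    predictions = run_model(params)          # black-box model run, e.g. daily soil temperature
    residuals = observations - predictions
    n = len(observations)
    sse = float(np.sum(residuals ** 2))
    return -0.5 * n * np.log(sse)

def rank_initial_sample(sample, run_model, observations):
    """Step (iii): evaluate every sampled parameter set and sort by decreasing posterior density."""
    sample = np.asarray(sample)              # shape (s, 8) for the eight STM parameters
    densities = np.array([log_posterior(p, run_model, observations) for p in sample])
    order = np.argsort(densities)[::-1]
    return sample[order], densities[order]
```

Each call to `log_posterior` requires a complete model run, which is what makes the computational cost discussed next the limiting factor.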
Given the complexity of our coupled TEM, each run cost at least 1 minute, including reaching model equilibrium, model spin-up and transient simulations for the period 2011-2013. In this case, parallel implementation of the SCEM-UA was required to obtain computational efficiency. We developed a parallel R version of the SCEM-UA using the Open Message Passing Interface (MPI; Gabriel et al 2004) and the Rmpi package (Yu et al 2002) on Purdue University's Conte computer clusters. This parallel SCEM-UA program would invoke a master node that controlled the workflow and message communication among 10 slave nodes. Each slave node was responsible for the computation of posterior density (i.e. step (iii)) and for the evolution of a specific group of complexes and sequences (i.e. step (vi)). Compared to the parallel implementation of the SCEM-UA in Vrugt et al (2006), our program does not require the target model (e.g. the TEM in this study) to be written in the MPI structure, but only treats it as a black box to be executed using a system call in R. The source code for this parallel R version of the SCEM-UA is available upon request.

Regional extrapolation

To make spatiotemporal estimates of the CO2 and CH4 fluxes on the QTP using the TEM, the model was run at a spatial resolution of 8 × 8 km and with a daily time step from 1979 to 2100. Static data including vegetation types, elevation, and soil texture were the same as those in Jin et al (2013). Soil pH data were derived from the China Dataset of Soil Properties for Land Surface Modeling by Wei et al (2013). The wet soil extent from Papa et al (2010) was used to determine the distribution of wetlands (CH4 source) and uplands (CH4 sink) within each pixel (figure 1(b)). For this temporally dynamical, 25 × 25 km resolution data set, we interpolated the maximum fractional inundation to the 8 × 8 km grid using the nearest neighbor approach. It should be noted that the actual wetland distribution was less continuous than is shown in figure 1(b). Inland water bodies were excluded based on the IGBP DISCover Database (Loveland et al 2000). Seasonal flooding of the wetland was indirectly represented by the fluctuation of the water table depth below and above the soil surface in our model. Climate input data for the historical period of 1979-2011, including radiation, precipitation, air temperature and vapor pressure, were interpolated from the latest global meteorological reanalysis product ERA-Interim (0.75° grid) published by the European Centre for Medium-Range Weather Forecasts (ECMWF). Future climate forcing was taken from six CMIP5 GCMs under the four RCP scenarios (figures S3 and S4). A total of 24 simulations (4 scenarios × 6 GCM datasets) were processed to construct future projections. These selected GCMs, compared with many other candidates, in general had smaller biases in surface temperature and total precipitation across the Tibetan Plateau (Su et al 2013). A detailed description of these CMIP5 GCMs and the data processing method are provided in methods S1.

Analysis

To highlight the spatial pattern of the climate-induced changes in GHG fluxes, we calculated the spatial difference between baseline simulations using ECMWF data in the 2000s and future predictions with the four RCP scenarios in the 2090s. A statistical test of the difference in means was performed, and regions with statistical insignificance (α = 0.05) were masked out. To quantify the GHG effect of the continuous CH4 and CO2 fluxes from the QTP for the study period of 1979-2100, we calculated the net radiative forcing impact according to Frolking et al (2006).
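Computationally, the Frolking et al (2006) bookkeeping amounts to tracking, year by year, how much of every past annual flux is still airborne after first-order decay and converting that burden into a forcing. The Python sketch below anticipates the decay pools and radiative efficiencies quoted with equations (5)-(7) in the next section. It is not the code used in the study: the real calculation uses the simulated, time-varying NME and NEP series, whereas the constant fluxes used here (taken from the contemporary 2000s budget) are purely illustrative.

```python
import numpy as np

# Decay pools (fraction, adjustment time in years): CH4 as one pool, CO2 as five pools,
# using the values quoted with equations (5)-(7) (Frolking et al 2006).
CH4_POOLS = [(1.0, 12.0)]
CO2_POOLS = [(0.26, 3.4), (0.24, 21.0), (0.19, 71.0), (0.14, 421.0), (0.18, 1e8)]

# Radiative efficiencies (W m-2 kg-1) and indirect-effect multipliers, also from the text.
A_CH4, XI_CH4 = 1.3e-13, 1.3
A_CO2, XI_CO2 = 0.0198e-13, 1.0

def instantaneous_rf(annual_flux_kg, pools, A, xi):
    """Instantaneous radiative forcing (W m-2) for one gas at an annual time step.

    annual_flux_kg[s] is the net mass of gas added to the atmosphere in year s since the
    reference year (positive = net emission, negative = net uptake)."""
    n = len(annual_flux_kg)
    rf = np.zeros(n)
    for t in range(n):
        burden = 0.0
        for s in range(t + 1):
            age = t - s
            remaining = sum(f * np.exp(-age / tau) for f, tau in pools)  # first-order decay
            burden += annual_flux_kg[s] * remaining
        rf[t] = xi * A * burden
    return rf

# Purely illustrative forcing series: a constant 0.76 Tg CH4 y-1 net source and a
# constant 14.1 Tg C y-1 (about 51.7 Tg CO2 y-1) net sink, held fixed from 1979 to 2100.
years = 2100 - 1979 + 1
ch4_flux = np.full(years, 0.76e9)                   # Tg -> kg
co2_flux = np.full(years, -14.1e9 * 44.0 / 12.0)    # C sink expressed as a negative CO2 flux
total_rf = (instantaneous_rf(ch4_flux, CH4_POOLS, A_CH4, XI_CH4)
            + instantaneous_rf(co2_flux, CO2_POOLS, A_CO2, XI_CO2))
```

With inputs of this kind the CH4 term saturates within a few decades (reflecting its roughly 12 year adjustment time) while the CO2 term keeps growing in magnitude, which is the qualitative behavior behind the warming-then-cooling trajectory reported in the results.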
The lifetime of an individual net input of CH4 into the background atmosphere was represented by a first-order decay function such that

$$r(t) = r_{0}\, e^{-t/\tau_{\mathrm{CH_4}}} \qquad (5)$$

where r0 is the initial CH4 perturbation, and τCH4 is the adjustment time for CH4 (∼12 y). A linear superposition of 5 different first-order decay pools was used to describe the more complicated behavior of CO2 in the atmosphere:

$$r(t) = r_{0} \sum_{i=0}^{4} f_{i}\, e^{-t/\tau_{i}} \qquad (6)$$

where the fraction of each pool, fi, and the pool-specific adjustment time, τi, were set to be 26%, 24%, 19%, 14%, 18% and 3.4, 21, 71, 421, 10^8 y, respectively (Frolking et al 2006). The instantaneous total radiative forcing from individual gas contributions since the reference time (here the year 1979) is given by

$$\mathrm{RF}(t) = \sum_{i=0}^{5} \xi_{i} A_{i} \int_{0}^{t} f_{i}\, \Phi_{i}(s)\, e^{-(t-s)/\tau_{i}}\, \mathrm{d}s \qquad (7)$$

in which i = 0-4 is CO2 and i = 5 is CH4, ξi is a multiplier for indirect effects (1.3 for CH4 and 1.0 for CO2), Ai is the GHG radiative efficiency (1.3 × 10−13 W m−2 kg−1 for CH4 and 0.0198 × 10−13 W m−2 kg−1 for CO2), fi is the fractional multiplier (1 for CH4; see equation (6) for the CO2 values), and Φi(s) is the net flux of GHG i at time s relative to the reference year of 1979. The integral term is thus the cumulative flux of gas i at time t since the reference start point (i.e. the year 1979) after partial to complete decay in the atmosphere. The numerical integration was applied with an annual time step. It should be noted, however, that our calculation here was only for the goal of comparing the relative contributions of the CO2 and CH4 fluxes since 1979 to the radiative forcing, rather than to give an accurate estimation of the absolute values of the GHG effect.

Model optimization and validation

By applying the SCEM-UA method, initial parameter ranges evolved into narrower posterior intervals (table 1). Both the STM and MDM outputs were able to reproduce the seasonal dynamics of the observed daily soil temperature and CH4 fluxes (figure 2). The adjusted R2 and RMSE for the soil temperature simulated with the optimal STM are 0.95 and 1.88 °C, respectively. The model performance is comparable with other modeling results (Wania et al 2009, Jin et al 2013, Zhu et al 2014), with only one underestimation during the summer of 2013. The MDM also performed well (R2 = 0.82, RMSE = 18.41 mg CH4 m−2 day−1) in capturing the annual magnitude, cycling and a small peak of CH4 burst in spring. Considering the uncertainties in the CH4 flux measurements (Yu et al 2013) and the CH4 model structures, our model performance was comparable with similar studies (Wania et al 2009, Lu and Zhuang 2012, Zhu et al 2014). Parameter uncertainty was well constrained (figures 2(a), (c)), indicating that a global search near the optimal space would produce many parameter sets allowing model simulations to match the observations. The water table depth followed the observed daily precipitation pattern (figure 2(b)). A severe drought was detected for the 2012 summer, which was also reflected by the distinctively low CH4 emission during that year. Due to the limited number of available field studies of CH4 fluxes on the QTP, our model validation was done by comparing values reported in the literature to our simulations for the nearest 8 × 8 km pixel (table 2). Model estimates reasonably match most field measurements with respect to the mean and range, except for an apparent underestimation of the exceptionally high CH4 emission rate from the littoral zone in Chen et al (2009).
Considering the high variation among field measurements from different wetland types and vegetation covers, the model simulations were able to fit the mean (R2 = 0.87, RMSE = 67 for all data, and R2 = 0.96, RMSE = 21 mg CH4 m−2 day−1 excluding Chen et al (2009)). The validation gave us confidence for model extrapolation and regional CH4 budget estimation. Simulated annual mean CH4 emissions from potential wetland areas increased gradually from 6.3 g CH4 m−2 y−1 in 1979 to 8.5 g CH4 m−2 y−1 in the 2050s under the different scenarios, but diverged noticeably thereafter (figure 3(c)). The highest emissions for the 2090s, from the RCP8.5 projections (12.7 ± 1.9 g CH4 m−2 y−1), increased 70% relative to the beginning of the 21st century, followed by 46%, 30% and 16% under RCP6.0, RCP4.5 and RCP2.6, respectively. The relative change percentages of CH4 emissions were in general higher than those of NPP, implying that CH4 production was more favored than photosynthesis under future climate conditions. Annual mean CH4 uptake density, which increased from 0.13 g CH4 m−2 y−1 in the early 1980s to 0.17 g CH4 m−2 y−1 around the 2050s and ended up between 0.16 and 0.23 g CH4 m−2 y−1 in the 2090s, was much smaller than that of emissions, while the interannual variations were similar under the four RCP scenarios (figure 3(d)). Spatial patterns of the decadal mean NME showed substantial variations over the QTP (figure 4). For the 2000s, net CH4 emissions simulated using reanalysis data were similar to the average of the RCP scenarios (figures 4(a) and (b)), with high net CH4 emissions occurring at the Zoige wetland and in the southwest part of the QTP. Net CH4 sinks based on the two maps were comparable in magnitude, but high CH4 uptakes were found at the southern edge of the QTP in figure 4(a) rather than in the northeastern part in figure 4(b). At the end of the 21st century, the NME increased noticeably in magnitude over the QTP except for some southeastern parts (figures 4(c)-(f)). Among the models, RCP8.5 led to the strongest CH4 emissions in wetlands and the highest CH4 consumption in uplands. The difference in means between the RCP scenarios and the 2000s historical mean was statistically significant for most of the QTP. Spatial patterns of the decadal mean NEP for the 2000s are shown in figures 5(a) and (b). The patterns were comparable with contemporary estimation for alpine grasslands by Zhuang et al (2010) and Piao et al (2012). Strong CO2 sinks occurred in the southeast of the QTP, where forest and shrubs were the dominant vegetation cover (figure 1(a)). CO2 sources accounted for a very limited portion of the total study area. Spatial patterns of NEP evolved substantially under future scenarios (figures 5(c)-(f)). The QTP became a uniform carbon sink under RCP2.6, indicating disproportional climate impacts on different ecosystem types. Increases in NEP were profound under the two intermediate scenarios, with differences statistically significant (α = 0.05) for most of the grassland (figures S6(f), (g)). Among these, the highest sink (up to 50 g C m−2 y−1) occurred in the forest ecosystem under RCP4.5 and in meadow regions under RCP6.0. NEP under RCP8.5 was distinctively lower than the results under other scenarios and not statistically different from the historical 2000s mean (figure S6(h)), suggesting similar changes in the magnitude of NPP and RH for all ecosystem types.

Regional GHG budget

Our estimate of the total CH4 emissions from natural wetlands over the QTP was 0.95 Tg CH4 y−1 during the 2000s, which was within the estimation range of 0.13-2.47 Tg CH4 y−1 from several other studies (table 3).
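A quick consistency check makes the dependence of the regional total on the assumed wetland extent explicit. The area computed below is derived from the numbers above and is not a value reported in the paper; it is meant only to illustrate why the wetland-area uncertainty discussed next dominates the spread among published estimates.

```python
# Back-of-envelope check (not a value reported in the paper): the CH4-emitting area implied
# by dividing the regional total by the simulated per-area emission density.
total_emission_tg = 0.95       # Tg CH4 y-1, regional wetland total for the 2000s
flux_density_g_m2 = 7.0        # g CH4 m-2 y-1, roughly mid-way between the 6.3 and 8.5 quoted above

implied_area_km2 = total_emission_tg * 1e12 / flux_density_g_m2 / 1e6   # Tg -> g, m2 -> km2
print(f"Implied CH4-emitting area: {implied_area_km2:,.0f} km2")        # on the order of 1.4e5 km2
```

Halving or doubling the assumed wetland extent moves the regional total proportionally, which is consistent with the wide 0.13-2.47 Tg CH4 y−1 range across published estimates.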
The high variation among the different estimates was mainly due to the uncertainty in wetland area estimates. Total CH4 consumption from upland soils was 0.19 Tg CH4 y−1 (i.e. 20% of the regional CH4 emissions) for the 2000s, and increased more slowly than emissions under future scenarios (table 4), indicating that the QTP was likely to be a stronger CH4 source in the 21st century. The simulated regional NEP of 10.22 Tg C y−1 for the period 2006-2011 was lower than those for other modeling studies covering similar regions (Zhuang et al 2010, Piao et al 2012), but higher than that of Yi et al (2014). The near neutral properties of the historical NEP (also see figure 3(b)) were consistent with the results of Fang et al (2010), who found that soil C stock in China's grassland did not show a significant change during the past two decades. Future NEP increased under all RCP scenarios (table 4). Counter-intuitively, higher increases in NEP occurred under RCP4.5 and RCP6.0 instead of the warmest and wettest RCP8.5, indicating that carbon accumulation was more likely to be favored by modest warming and wetting. Instantaneous radiative forcing due to the net CH4 emission and net CO2 sequestration since 1979 increased to a plateau during the 1990s, dropped to zero around the 2010s, and decreased almost linearly afterwards (figure 6(a)). Therefore, climate change in the 21st century is likely to trigger a negative feedback (cooling) in the climate system. Among the scenarios, the fastest rate of decrease occurred under RCP4.5, indicating that this moderate climate warming scenario stimulated vegetation production much more than the methanogenesis process. A levelling-off trend after the 2080s was identified for the RCP8.5 scenario, most likely because of declining NEP (figure 3(b)). Cumulative radiative forcing was positive until the 2030s, and became negative in an accelerated manner (figure 6(b)). Thus, the cumulative GHG fluxes from the QTP will exert a slight warming effect on the climate system until the 2030s, but an increasingly stronger cooling effect thereafter. The tipping point was 50 years after the reference zero time point, roughly the time required for nearly complete removal of the initial CH4 perturbation input. At the end of the 21st century, the cumulative mean radiative forcing was between −0.25 and −0.35 W m−2, depending on the RCP scenario. Overall, our results show that, given a sustained CH4 source and CO2 sink on the QTP, the net GHG warming effect will only peak after a few decades and will eventually give way to a cooling effect on the climate system.

Model optimization

Overall, our model calibration results were satisfactory, but a few tips should be mentioned when applying the global optimization method to complex system models with high parameter dimensions. Global optimization for problems without explicit analytic expressions of the objective function is challenging, because the algorithm must avoid being trapped by several local optima, while maintaining robustness in the presence of parameter interaction and non-convexity of the response surface, and having high efficiency in searching high dimensional space. When applying the SCEM-UA, a tradeoff between goodness of fit and computational cost is still a problem for users. The speed of algorithm convergence highly depends on the model structure and parameter interactions. In our case, major parameters evolved to a narrow range after 100 000 total iterations (figure S2).
In contrast, the MDM optimization converged much more slowly, despite the smaller number of parameters to be optimized. We argue that this is mainly due to the multi-scalar function method used to simulate CH4 production and oxidation. For instance, if scalars f1(x) and f2(x) are each shaped by only two parameters, it is intuitive to imagine that many pairs of these parameters can produce similar results, as long as they adjust f1(x) and f2(x) in opposite directions. Given that the scalar approach is extensively implemented in current ecosystem models (Zhuang et al 2004, Zhu et al 2014), convergence criteria (such as the Gelman and Rubin diagnostic suggested by Vrugt et al (2003)) can hardly be reached for most parameters when calibrating these models. On the other hand, a sub-optimal solution is usually sufficient in practice. Increasing the size of the sample drawn from parameter space can push the posterior of the objective function towards the high-end value (figure S2), but is by no means a guarantee of better model performance. Most likely, many behavioral parameter combinations with a similar capability of reproducing the observations can be found by a search conducted in the feasible parameter space (Vrugt et al 2003). As shown in our example, the top 500 parameter sets (whose values can differ) for either the STM or the MDM produced close goodness-of-fit and small variations in simulations (figure 2), indicating that the benefits from additional searching are marginal.

Quantification of total GHG effect

To quantify and compare the net GHG effect of a sustained CH4 source and CO2 sink on the QTP, the total radiative forcing, rather than the more widely known metric of the GWP, was computed in our study according to Frolking et al (2006). As a tool originally designed for evaluating and implementing policies to control multiple GHGs, the GWP is defined as the time-integrated radiative forcing due to a pulse emission of a given component, relative to a pulse emission of an equal mass of CO2 (IPCC, 2013). It has usually been integrated over a somewhat arbitrary time horizon of 20, 100 or 500 years, of which the choice of 100 years is most commonly adopted (e.g. CH4 is 28 times CO2 in terms of warming effect). Applying the GWP to biogeochemical cases could be problematic, as GHG fluxes from a long-existing sink or source are often sustained and temporally dynamical, even though many studies continue to use it because of its simplicity (e.g. Zhu et al 2013, Gatland et al 2014, Vanselow-Algan et al 2015). The method proposed by Frolking et al (2006) goes beyond the standard GWP approach in that (1) persistent emission or uptake of GHGs can be accounted for, and (2) the instantaneous radiative forcing for each year of simulation is quantified to allow a comparison of multiple gases in common units at any given time. Although the assumption of a near constant background atmosphere in Frolking's method is open to debate, and the decision on the appropriate pulse emission to consider (the new flux rate versus the change in the flux rate) could greatly change the results of the simulation, it appears a useful method in biogeochemical studies when accounting for the budget of multiple sustained GHG fluxes. Frolking's method has evolved with new findings in the atmospheric sciences.
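To see why the choice of metric matters, a naive GWP-100 conversion of the contemporary budget can be contrasted with the time-resolved forcing used in this study. The snippet below uses only figures quoted in the text (the factor of 28 and the 2000s fluxes); it illustrates the static metric's blind spot and is not a result of the study.

```python
# Naive GWP-100 view of the contemporary budget (illustration of the static metric only;
# the study instead tracks radiative forcing year by year).
GWP100_CH4 = 28                      # "CH4 is 28 times CO2" over 100 years, as quoted above
net_ch4_tg = 0.95 - 0.19             # Tg CH4 y-1: wetland emissions minus upland soil uptake (2000s)
nep_tg_c = 14.1                      # Tg C y-1: contemporary CO2 sink

ch4_warming_co2eq = net_ch4_tg * GWP100_CH4     # about 21 Tg CO2-eq y-1
co2_cooling = nep_tg_c * 44.0 / 12.0            # about 52 Tg CO2 y-1
print(ch4_warming_co2eq, co2_cooling)
```

Such a static comparison suggests that the CO2 sink outweighs the CH4 source from the outset, whereas the year-by-year radiative forcing calculation shows a net warming effect persisting into the 2010s-2030s, because the short-lived CH4 forcing builds up quickly while the CO2 drawdown accumulates only slowly.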
For example, the fraction of CO2 remaining in the atmosphere after a pulse input can now be represented by four components (Joos et al 2013) instead of the original five-pool setting of Frolking et al (2006), and the multiplier for the indirect effect of CH4 is now 1.65 according to IPCC (2013). A test of these and other parameter changes is beyond the scope of this study.

Uncertainties and future work

This is the first study to quantify both CH4 emissions and net carbon exchanges on the QTP. However, the quantitative analysis is uncertain due to the incomplete representation of physical and biogeochemical processes in the model (Bridgham et al 2013, Bohn et al 2015), inaccurate model assumptions (Meng et al 2012), variations in the forcing climate data (Su et al 2013) and the extrapolation from site to regional scale. While progress in model structures and mechanisms is usually slow and incremental, efforts to reduce data uncertainty are more feasible in the short term for improving model predictability. First, the seasonal and inter-annual dynamics of the wetland extent are critical to CH4 modeling. Synthetic aperture radar (SAR) is currently the first choice to delineate wetland distribution, such as the monthly distribution of surface water extent with ∼25 km sampling intervals used in this study (Papa et al 2010). However, optical remote sensing is highly sensitive to cloud or vegetation cover. Alternative data from passive and active microwave systems that can penetrate cloud and vegetation cover are favored (Schroeder et al 2010), but are currently unavailable over our study area. Some contemporary methane models, such as VIC-TEM and SDGVM, are capable of outputting dynamical wetland extent as an internal product (Hopcroft et al 2011, Lu and Zhuang 2012), but they tend to substantially overestimate wetland area. In order to capture the seasonal flooding area, our study combined a static maximum potential inundation map and the simulated water table depth to represent the wetland fluctuation. Improving the modeling abilities to capture wetland distribution and extent, especially the seasonality of water table dynamics, should be a research priority in the future (Zhu et al 2011). The second uncertainty in quantifying CH4 emissions on a regional scale arises from the spatial-scale extrapolation across highly heterogeneous but poorly mapped wetland complexes (Bridgham et al 2013). In this study, we only calibrated our model at one alpine wetland site, by assuming that the remaining wetlands over the QTP have the same inherent characteristics but differ in climatic conditions. As a matter of fact, alpine wetlands on the QTP can be classified into peatlands, marshes, and swamps, which have distinct characteristics in vegetation cover, hydrological processes, and soil history. However, due to severe field experimental conditions, high quality observational data are mostly limited to the Luanhaizi wetland at Haibei Station (Hirota et al 2004, Yu et al 2013) and the Zoige wetland (Chen et al 2009). Field researchers should consider expanding field measurement footprints in the future, and data sharing with modelers is highly recommended. Finally, much of the uncertainty in future projections is due to the poor agreement among the CMIP5 GCMs under the RCP scenarios (figures S3, S4). The majority of the GCMs have cold biases of 1.1-2.5 °C in air temperature for the QTP, while they overestimate annual mean precipitation by 62%-183% (Su et al 2013).
A multi-model ensemble approach, as suggested in the IPCC AR5 report (IPCC, 2013), was used to characterize the climatological uncertainty. While the statistical interpolation method used to generate the fine resolution data is simple and computationally efficient (Wilby et al 1998), dynamical downscaling with regional climate models would be better suited to compensating for the coarseness of the GCM output and to capturing details of the complex surface properties of the QTP (Ji and Kang 2012). However, due to the high computational cost, dynamically downscaled climate data that cover representative GCMs under all RCP scenarios are generally not available to ecosystem modelers. A publicly accessible database of this kind would greatly benefit the research community in future studies.

Conclusions

Using a coupled biogeochemistry model framework, this study analyzed the carbon-based GHG dynamics over the QTP for the period 1979-2100. Our model simulations at the site level were able to closely match the field-observed soil temperature and CH4 flux after calibrating the model parameters using the SCEM-UA global optimization algorithm. Our study showed that the region currently acts as a CH4 source (emissions of 0.95 Tg CH4 y−1 and consumption of 0.19 Tg CH4 y−1) and a CO2 sink (14.1 Tg C y−1). In response to future climate change, the CH4 source and the CO2 sink strengthened, leading to an increasingly negative perturbation of radiative forcing. The spatial patterns and temporal trends of the NME and NEP highly depend on the RCP scenario. Climate-induced changes in the magnitude of the NME were statistically significant for most of the QTP, while spatial changes in the NEP were only significantly apparent under RCP4.5 and RCP6.0. The instantaneous radiative forcing impact is determined by the persistent CO2 sequestration and the recent (∼5 decades) CH4 emission. The cumulative GHG effect was a negative feedback (cooling) on the climate system at the end of the 21st century. Uncertainties in our model estimations can be reduced by including more explicit information on wetland distribution and classification, and more reliable future climate scenarios. Additional observational data from representative wetland ecosystems should be collected to improve future quantification of these carbon-based GHGs.
How Marine Protected Areas Are Governed: A Cultural Theory Perspective : Marine Protected Areas (MPAs) have become recognized as important management tools for marine and coastal ecosystems in the last few decades. However, the theoretical underpinnings of MPA regimes have arguably not yet received sufficient attention. This paper attempts to remedy this by exploring how the Cultural Theory initiated by Dame Mary Douglas can provide a theoretical foundation for the current debates about the design of MPA regimes. It does so by firstly noting that the various types of MPA governance discussed in the literature correspond to the ways of organizing, perceiving and justifying social relations recognized in Cultural Theory. The article continues by setting out how Cultural Theory helps to explain when and why MPA regimes succeed or fail to reach their goals. In particular, the article highlights the practical importance of accommodating all ways of organizing and perceiving social relations in any MPA management plan. Finally, the paper suggests that further systematic, empirical work for assessing MPAs needs to be undertaken so as to corroborate the arguments advanced in this paper. Introduction Marine Protected Areas (MPAs) have become one of the most widely used tools to manage marine and coastal ecosystems in the last few decades [1,2]. During this period, discussions about the governance aspects of MPAs have seen a steady increase [3][4][5]. Mounting pressures on coastal and marine resources, the adoption of international conservation targets to increase the size of protected areas and the prevalence of nominally declared protected areas (or "paper parks") are some of the main reasons that have resulted in this trend. An MPA is loosely defined as any coastal or marine area, including its resources, which is regulated through formal or informal arrangements. MPAs can cover areas from less than 1 square kilometres to more than 100,000 square kilometres (often labelled Large-Scale Marine Protected Areas or LSMPAs), both types of MPAs sometimes co-existing in a same region, as best illustrated in the Pacific [6]. In their early development, the bioecological aspects were the main focus and little attention was given to their social aspects. Yet, a number of researchers have shown that social, economic and institutional aspects of MPAs are the main determinants of the degree of acceptance from communities and that these have a significant impact on their long-term success [4,7]. Despite this increase in debate, the theoretical underpinnings of MPA management regimes still have not received sufficient attention [8]. Governance here is defined as "the formal and informal arrangements, institutions and mores which determine how resources or an environment are utilized; how problems and opportunities are evaluated and analysed; what behaviours are deemed acceptable or forbidden; and what rules and sanctions are applied to affect the pattern of resource and environmental use" [9] (pp. [90][91]. According to our understanding, there are at least four basic, and one hybrid, forms of MPA governance to be found in recent literature: top-down, centralized management; bottom-up, community-based management; private management led by private industry or non-governmental organizations (NGOs); mismanaged (or paper park) MPA (Mismanaged MPAs are those that have been associated with collusion, corruption and nepotism. We argue that the so-called "paper parks" form the main component of this category). 
These basic forms of MPA governance are often hybridized, i.e., two or more of them are combined, which gives momentum to what are often labelled co-managed MPAs; in this framework, the diverse stakeholders involved follow different rationalities and therefore have to define compromises. The notion of finding and using a hybrid form of governance as an alternative approach for environmental governance has received increased recognition in recent decades [6,10]. In this paper, we argue that the basic forms of MPA management overlap with the four ways of organizing, perceiving and justifying (namely, hierarchy, egalitarianism, individualism and fatalism) that are set out in Cultural Theory pioneered by anthropologist Dame Mary Douglas. On the basis of this argument, we outline various contributions that Douglas' Cultural Theory can make to MPA governance discussions. The first of these concerns Cultural Theory's ability to capture different perceptions and behaviours of individual and collective actors in a socio-ecological system, in a relatively simple, yet comprehensive manner. The second contribution lies in the notion of clumsy or polyrational solutions. These are policy solutions that emerge from creatively combining and accommodating the theory's four different ways of organizing, perceiving and justifying social relations. According to Cultural Theory, only such policies can be effective and widely endorsed [11,12]. The concept of clumsy solutions has been proposed and implemented in a number of environmental studies [13,14], including coral reef management [15][16][17]. In this article, we extend these applications to MPA governance. The third and final contribution concerns how to organize decision-making so as to create viable and sustainable MPA governance. Cultural Theory contends that, in order to be viable, MPAs need to combine all the basic types of governance regimes. We will refer to the resulting forms of collaborative governance as messy regimes [18]. The remainder of the paper consists of five sections. In the first, we introduce Cultural Theory. In the subsequent section, we briefly discuss Cultural Theory's concept of clumsy solutions. In the third, we describe the relation between the four management regimes and the elements of Cultural Theory's typology. We further illustrate that the rationalities distinguished in Cultural Theory provide a possible theoretical foundation for each of the management regimes in the MPA literature. We also give several examples of how MPAs based on the predominance of a single rationality (instead of on a combination of diverse rationalities) tend to fail to reach their official targets. In the fourth section, we describe a relatively recent form of natural resource management (also adopted for MPAs), the co-management regime and discuss the contribution that Cultural Theory could make to the further understanding and development of such a regime. We exemplify our explanations empirically using the case of Tubbataha Reefs Natural Park in the Philippines. The last section offers suggestions for future research. Cultural Theory Cultural Theory (shorthand for "theory of socio-cultural viability") has been developed for over five decades. It has emanated from the grid-group typology that was introduced by British anthropologist Dame Mary Douglas in the 1970s [19,20] and further developed by Aaron Wildavsky, Michael Thompson and Richard Ellis [21][22][23], among others. 
In her typology, Douglas identified two dimensions of sociality (grid and group) and argued that people's involvement in social life can be captured and assessed according to these two dimensions. "Grid" stands for the degree to which role differentiation and stratification constrain the behaviour of individuals. "Group", by contrast, represents the extent to which an overriding commitment to a social unit constrains the thoughts and actions of individuals. As illustrated in Figure 1, Cultural Theory derives four ways of organizing, perceiving and justifying social relations (often called "ways of life" or "social solidarities") by assigning two values (high and low) to the grid and group dimensions. These ways of organizing are usually dubbed individualism, fatalism, hierarchy and egalitarianism. Moreover, each of the ways of organizing comes with a set of beliefs, biases and behaviours suited to upholding and abiding by that method of organizing. For instance, Cultural Theory [24,25] states that each of the "myths of ecological stability" (or "views of nature") discovered by ecologist C.S. Holling [26] sustains and justifies (indeed, renders natural) one of its ways of organizing. Hierarchy occupies the top right quadrant of the grid/group map.
Within this quadrant, strong group boundaries and binding prescriptions are the main characteristics of the actors' social environment [23]. In this social setting, people are subject to control from both others (qualified members/experts) as well as socially imposed roles (determined by those higher up in the echelon, such as scientific experts, tribal elders, or religious leaders). This way of life creates hierarchically nested social groups, characterized by orderly and ranked relationships, in which members are assigned different roles and responsibilities [24]. Humans are perceived as imperfect but controllable and redeemable through firm and enduring, top-down institutions. Fairness is determined by law (authority) and those who do not follow the law are seen as guilty or liable [27]. In this social setting, people adhere to "a procedural rationality that is more concerned with the proprieties of who does what than with trying to evaluate the outcome" [24] (p. 7). Actors understand ecosystems with the help of a "perverse/tolerant" myth of nature. That is, they assume that ecosystems are stable, until pushed beyond certain limits. Environmental management requires certified experts to determine these limits and statutory regulation to ensure that human activity is kept within them. Egalitarianism occupies the bottom right quadrant of the grid/group map. Actors within this social solidarity have less internal role differentiation and no individual has the authority to exercise control over others. As a consequence, resolving internal disputes is difficult [23] (p. 6). Intensive interactions among members and "shared opposition to the outside world" maintain the group's strength [27] (p. 400). Actors within this domain develop "a communal and critical rationality, which stresses fraternal importance and sororal cooperation" [24] (p. 7). Within this egalitarian way of life, "fairness is equality of result" whereas "blame is put on the system" [27] (p. 400). The actors adhere to a view of nature as "fragile." They see the world as delicate and intricately interconnected and nature as ephemeral. Any small disturbance could lead to a complete collapse of the system. Therefore, the only solution for environmental problems is voluntary simplicity and precautionary principles must be firmly imposed on those who are not tempted to share the simple way of life. The bottom left side of the grid/group map portrays the individualistic way of organizing and perceiving social relations. Here, individual freedom has primacy. "All the boundaries are provisional and subject to negotiation" [23] (p. 7). Within this social setting, individuals adhere to a substantive, results-oriented rationality. People are seen as inherently self-seeking and atomistic and the preferred management institution is the one that works with the grain of the market. Fairness is seen as equality of opportunity, which should ensure that those who invest the most get out the most. Actors view nature as "benign", i.e., as highly resilient and able to recover from any exploitation. As resources are therefore understood to be unlimited, trial and error can go on unimpeded. The top left quadrant of the grid/group map represents the fatalistic (or despotic) way of organizing, perceiving and justifying social relations. In this social setting, people are expected to be fickle and untrustworthy and each actor therefore has to focus on maintaining and (if at all feasible) improving his or her position vis-à-vis others [28]. 
Power considerations and survival are dominant themes and fairness cannot be expected to be achieved in this life. Hence, the rationality that prevails is a deeply cynical one. Actors experience the world as unknowable and adhere to the myth of nature that portrays ecosystems as "capricious." "Why bother?" is therefore the rational management response [29]. According to Cultural Theory, these ways of life are interdependent, yet constantly in competition with one another. Each way of life compensates for certain features of experience and wisdom that are missing in the others and offers an alternative plausible account of how we should live with one another and with nature. Yet, each way requires all the others in order to be sustainable [29]. Therefore, each social domain-at any level of analysis, from a family to an international regime-is characterized by the waxing and waning, merging and splitting, of the four ways of organizing and perceiving social relations. Moreover, policy discourses are in constant flux due to the enduring clash between policy actors adhering to alternative ways of organizing and perceiving, which forces them to constantly update, revise and re-invent their preferred policies in light of the criticisms received (even though their fundamental assumptions-those concerning nature, human nature, justice, risk, time, space and so on-remain unchanged). As such, Cultural Theory is a dynamic approach, which stresses that both social domains and policy discourses are forever being transformed. The theory also recognizes that actors can adhere to alternative (combinations of) ways of organizing, perceiving and justifying social relations in different social settings. Clumsy Solutions Cultural Theory's typology helps to view social and environmental issues from four alternative policy perspectives, each one emanating from and representing, a specific way of organizing and perceiving social relations. The approach takes an additional step in arguing that successful solutions to pressing social and environmental ills tend to creatively and flexibly combine all these alternative policy perspectives. Such forms of governance are usually called "clumsy" or "polyrational" solutions [13,30] and creatively mix individualistic, egalitarian, hierarchical as well as fatalistic views on what the problems are and how they should be resolved. The theory predicts that a policy solution that does not employ all these rationalities will fail to reach its goals [31,32]. "Success" and "failure" of policies are of course highly contested concepts and will be evaluated on the basis of different norms and values by adherents to alternative rationalities. Yet by postulating that its four ways of organizing, justifying and perceiving social relations are interdependent (i.e., cannot be sustained by themselves), Cultural Theory offers a potential resolution to the problem posed by moral pluralism [33]. This postulate namely implies that policies that do not combine insights from all policy perspectives will not only fail to reach the goals of the excluded perspectives but will also fail to achieve the aims of the included perspectives. In other words, Cultural Theory predicts that "non-clumsy", or overly monolithic, policies will fail to meet their own targets and meet with widespread public rejection. 
Clumsy solutions are thus akin to Cass Sunstein's [34] concept of "incompletely theorized agreements" and John Rawls' [35] notion of "overlapping moral consensus": policies that are endorsed by a large majority of stakeholders, albeit for different reasons and from alternative moral vantage points. The clumsy solutions hypothesis has received ample empirical support [30]. It has been validated in a great many case studies, including the handling of radioactive materials in hospitals [36], pension reform in Europe [37], development projects in Nepal [38,39], reducing landslide risk in Southern Italy [40], global efforts to combat climate change [29], the WHO's efforts to reduce malaria [41] and contemporary whaling [42], to name but a few examples. The US Environmental Protection Agency currently claims that the notion of clumsy solutions informs its stakeholder dialogues and future policy-making (see: https://www.epa.gov/risk/multi-criteria-integrated-resource-assessment-mira). However, it is important to note that not all problems may have clumsy solutions. The types of clumsy solutions may also differ at any given time and place, despite the similarity of the problems. Thus, it is vital to create enabling conditions, for example messy institutions, that can foster the emergence of such solutions. When it comes to governing marine and coastal resources, clumsy solutions can pertain to many issues, including how to manage resources, how to fund resource management, who has the right to access resources and how they can do so, the scale of protection areas, capacity building, livelihood diversification, conservation incentives and resource stock rebuilding [43,44]. On all these issues, stakeholders are likely to adhere to different understandings of what the problems at hand are and how these should be addressed [45]. A practical example of a clumsy solution can be found in the management of Tubbataha reefs in the Philippines, which we explore in section five. MPA Management Regimes and Cultural Theory Cultural Theory has several implications for understanding MPAs and their governance. According to the approach, any MPA can be viewed as a constantly changing combination of hierarchical, egalitarian, individualistic and fatalistic ways of organizing, perceiving and justifying social relations. In addition, Cultural Theory recognizes that the perspectives of MPA stakeholders (including those of policy analysts) may contain elements from several of these rationalities and that stakeholders' views will change over time. Yet the approach also states that, through debate and polarization, the perspectives of many stakeholders will, by and large, tend to fragment along the four fault lines set out by the approach. Furthermore, Cultural Theory highlights the risks involved in allowing decision-making about MPA management to be dictated by stakeholders (and analysts) advocating a monolithic perspective on MPA governance. Due to the inherent socio-cultural diversity and change of MPAs, such an overly narrow perspective will fail to reach its goals and create other havoc besides. Hence, clumsy solutions (that are flexible and seek to exploit, rather than reduce, socio-cultural plurality) are needed for sustainable MPA governance. Cultural Theory's four ways of organizing, perceiving and justifying social relations can be used to derive four ideal-typical ways of governing MPAs.
Interestingly, these four ideal-types appear to overlap, to some extent, with the basic forms of MPA governance that have been distinguished in the academic literature. In short, centralized MPA management is favoured by the hierarchical way of organizing, a community-based MPA is the egalitarian ideal, entrepreneurial MPA management (EMPA) represents the individualistic preference, while an opportunistic or mismanaged MPA is expected by the fatalistic rationality. Our argument is not that any existing community-based MPA can be expected to be fully egalitarian, or that any EMPA is wholly individualistic, etc. Indeed, Cultural Theory states that this cannot be the case, as it holds that socio-cultural diversity is ineradicable. Instead our argument is that, as each of these modes of organizing MPAs has been widely implemented and advocated over other modes, Cultural Theory can contribute to a more general explanation of MPA failure and how this can be prevented. Below, we start this explanation by outlining the four ideal-typical modes of MPA governance that can be deduced from Cultural Theory. We do so in Table 1. We then note the overlap between these ideal-types and the four basic modes of MPA management that have been widely advocated and implemented. We also list the objections that have been raised against each of these modes and illustrate these objections with an empirical example. All this allows us, in Section 5, to highlight the need for clumsy MPA governance. Hierarchy-Top-Down, Centralized MPA Management Marine protection began, as did terrestrial based conservation, with a centralized, government-led, top-down approach. Nowadays, top-down, centralized environmental management regimes are widely found in colonial and post-colonial tropical countries [5,46]. Such an approach usually covers a large area in which the central government and its experts act as the primary sources for decision-making and management activities. Proponents of this type of regime often argue that there are four primary benefits of having a centralized top-down policy: (1) it potentially offers economic benefits; (2) it is based on a strong scientific basis; (3) it is faster in terms of program implementation; and (4) it is more likely to protect critical habitats [47,48]. Conversely, scholars have argued that some major drawbacks of this type of management regime include the lack of local and context-specific knowledge that is used in decision-making and the limited extent to which local people are involved [46,49]. As a result, policy makers might not be fully aware of the social, economic and ecological impacts of the implementation of their decisions at the local level, as local people's aspirations are rarely considered. There have been many discussions on how a strictly top-down, centralized management style failed to manage coastal and marine resources [50]. Prominent examples include the failure of the Californian government's attempts to establish MPAs in its state waters from 1998 to 2002 [51,52], the Florida Keys Marine Sanctuary [53], several MPAs in Southeast Asia [54] and Mafia Island in Tanzania [55]. Nevertheless, scholars have also noted that centralized management of coastal and marine resources could also work well under certain circumstances [5]. A case in point is the management of the Great Barrier Reef Marine Park (GBRMP), which represents a more adaptive approach [56][57][58]. 
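As a compact, purely illustrative summary of the correspondence outlined above, and before each regime is examined in turn, the following sketch encodes each rationality together with its grid/group position, its associated myth of nature and the MPA governance form it tends to favour. It is a minimal Python sketch of our own; the field names and labels are shorthand for the descriptions in the text, not an established coding scheme.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Rationality:
    """One of Cultural Theory's four ways of organizing, perceiving and justifying."""
    name: str
    grid: str                 # "high" or "low"
    group: str                # "high" or "low"
    myth_of_nature: str       # Holling-style "myth of ecological stability"
    favoured_mpa_regime: str  # ideal-typical MPA governance form

# Schematic encoding of the mapping discussed in the text.
RATIONALITIES = [
    Rationality("hierarchy", "high", "high", "perverse/tolerant",
                "centralized, top-down MPA management"),
    Rationality("egalitarianism", "low", "high", "fragile/ephemeral",
                "bottom-up, community-based MPA"),
    Rationality("individualism", "low", "low", "benign",
                "entrepreneurial MPA (EMPA)"),
    Rationality("fatalism", "high", "low", "capricious",
                "opportunistic, mismanaged ('paper park') MPA"),
]

def classify(grid: str, group: str) -> Rationality:
    """Return the ideal-typical rationality occupying a given grid/group position."""
    return next(r for r in RATIONALITIES if r.grid == grid and r.group == group)

if __name__ == "__main__":
    for r in RATIONALITIES:
        print(f"{r.name:15s} grid={r.grid:4s} group={r.group:4s} "
              f"nature={r.myth_of_nature:17s} regime={r.favoured_mpa_regime}")
    print(classify("high", "high").favoured_mpa_regime)
```

Such a structure could, for instance, be used to tag stakeholders or management measures in a comparative MPA database, although any such application would require an empirical operationalization of the grid and group dimensions.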
Cultural Theory can help to explain why centralized, top-down management has both strengths and weaknesses and how it may be possible to exploit the former, while reducing the latter. A centralized, top-down management regime is favoured by the hierarchical rationality (This is not to say that all facets of such regimes are necessarily hierarchical in practice. Rather, it is to assert that the attempts to put into place centralized, top-down MPAs have often relied on a hierarchical rationality). The typical management institution within this regime is the (central) government, which tends to employ a regulatory style of management. The objective is controllability of how, when and by whom the resources can be accessed. The preferred policy focuses on the functioning of the ecosystem and ensures the balance between short-term and long-term benefits. Policies that support limitation of access (gear, time, space and harvest) are favoured. In a hierarchical setting, marine resources tend to be perceived as available but within certain limits. Thus, the management regime focuses on ensuring that the limits are never crossed. Resource management employs experts to define these limits and then imposes statutory regulation to ensure that economic and social activities are bound by those limits. Any anthropogenic and natural disturbances can be assimilated as long as they do not reach critical levels. Resource scarcity becomes a problem of how to increase supply in order to meet the needs of people. Popular access to resources should be controlled. Creation of zoning systems that regulate access becomes a requirement. A stable supply of marine resources (e.g., fish) can be regarded as an appropriate measure of marine resource availability. However, the supply can be enlarged through an increase in the size and numbers of protected areas (the no-take zone). Ecosystem restoration, using the latest technology and the best available science, is considered as desirable. Alternative livelihood programs-via tourism or aquaculture-are highly recommended to limit the pressure in resources and to ensure proper functioning of the ecosystem. The ideal spatial scale of MPA management is large and often crosses national and regional scales. The boundary is typically delineated according to ecological or political considerations. The style of learning from this regime is anticipatory, which means that possible outcomes from any activities have to be theoretically predicted. Monitoring, evaluation and enforcement are required and conducted by the government. This top-down, centralized approach presupposes that resource management should be exclusively performed by technical experts who are objective, rational and guided by the "best available science," rather than general people who are perceived by managers to be subjective and non-rational [60]. This management regime is applied to ensure that the control over resources is limited to the (central) government, as resources are robust only up to some limit. Therefore, it requires strict guidelines, proposed by experts, to sustain available resources. Such guidelines are typically categorized as forms of integrated management. The role of community is limited accordingly to ensure efficient resource allocation and imposed compliance becomes a main feature. 
This subsection provides a brief example of how a stringent top-down and linear, science-driven approach, implemented by the California Department of Fish and Game (DFG), failed to work in the earlier process of redesigning and expanding MPAs in California, through its Marine Life Protection Act (MLPA). The MLPA was enacted in 1999 as a response to public concerns over the declining health of ocean ecosystems and depletion of marine resources in California. The Act required the DFG to redesign and improve the system of MPA governance along the Californian coast [61,62]. Two initial efforts to implement the MLPA in 2000 and 2002 were unsuccessful in achieving the stated goals due to unclear objectives, a linear scientific approach, lack of citizen participation, insufficient funding and a shortage of administrative expertise [51,52,63]. The MLPA was initially advocated by a small group of stakeholders (called policy entrepreneurs by Weible [52]) in California, who were willing to pledge resources to promote a network of MPAs in the state. According to Weible [52], three factors motivated the entrepreneurs to set the stage for the MLPA: (a) a perceived problem of marine resource degradation and management; (b) a belief in MPA as an effective tool for ocean management; and (c) the view that they had the ability to change the present (marine) policy through a legislative process. Although the initial bill was struck down by a veto from California's governor Pete Wilson in 1998, the second effort to pass the law was successful in 1999. A year later, the first attempts to implement the Act were undertaken. Along with the mandate to redesign the MPA networks in California, the DFG was also tasked with creating a Master Plan Team. The team consisted exclusively of (natural) scientific experts and was created with the view to making a preliminary recommendation for the MPA placement sites. During the process, efforts to reach the public were very limited. Only one effort to solicit (specific) stakeholder feedback via mail survey was conducted early in 2001 but the feedback received was not taken into consideration by the Master Plan Team [52]. After approximately 15 months of work, in the summer of 2001, the DFG organized ten public meetings along the coast, at which it presented the recommendations from the Master Plan Team. The results were highly problematic for the DFG. Along with widespread citizen disappointment with the draft recommendations document-which was science based (extremely technical) and lacked stakeholder inputs-the public consultation processes were also ineffective. In response to these unintended results, the DFG abandoned the Master Plan Team and its recommendations in the winter of 2001 [63]. The second attempt to implement the MLPA was conducted the following summer. In this attempt, a more collaborative process, involving the Master Plan Team and affected stakeholders, was employed. However the process came to a halt in 2003, partially due to the then reigning financial crisis in California [63]. From a Cultural Theory point of view, it can be argued that the failure was primarily caused by the way this MPA was organized. From the brief case study, it is clear that the MPA was government centric and heavily dependent on the technical experts for deciding on the management plan. The lack of active public involvement, which is the core of the egalitarian way of organizing, hampered the whole initiative. 
Egalitarianism-Bottom-Up, Community-Based MPA Management The bottom-up, community-based form of MPA governance is commonly employed in the tropics, such as in Southeast Asia, the Pacific and Africa (Under community-based MPA practices, we include locally managed marine areas (LLMAs)). It arose as a response to the perceived inadequacies of the resource management paradigm that was highly centralized (state-centric) and that often relied solely on science and required a lot of technical expertise [64,65]. The new form of MPA governance also benefited from discussions on the role of local communities in protected areas, as well as the role of science and local knowledge in the determination of objectives and policy planning [49,66,67]. Community-based MPAs are generally based on the premise that local communities have a greater interest in the sustainable use of the resources and have extensive knowledge of local resources and exploitation practices. Hence, it is likely that they will be more effective in managing access to resources through local arrangements [68]. Scholars have listed numerous benefits of community-based resource management, such as increasing local people's acceptance and participation, improvement of the local economy, as well as a deepening of democratization [69][70][71]. However, in places with long histories of state-centric policies, community-based resource management initiatives are likely to face important challenges [5], in particular in relation with their minimization of differences in resource managers' status and power. First, these MPAs often require public mobilization, which may not always be in constant supply. Furthermore, community-based MPAs typically require a high level of common interests and shared norms, which may also not prevail [49]. Finally, community-based MPAs sometimes fall prey to what Lane and Corbett [72] call "the tyranny of localism." According to this notion, (local) community-based decisions are not necessarily better or fairer. Indeed, these run the risk of magnifying inequalities and hindering democracy, if the operation of power relations at the local level is ignored. A bottom-up, community-based management regime is the preference of the egalitarian rationality and can therefore be expected to include more egalitarian features than other types of MPAs. In an egalitarian setting, marine resources are perceived as fragile and intricately interconnected. Therefore, resources should only be used to fulfil people's basic needs. Moreover, members of the community (and only they) should have equal access to resources. The primary objective of this management form is sustainability-to ensure long-term resource benefits for future generations. The management regime emphasizes equal access and distribution, as well as the community's responsibilities with regard to its limited marine resources. The managing institution is typically the community itself. Its leader is often chosen on a voluntary basis, according to the level of willingness of a community member to invest his or her time and resources. The style of management is consensus-based and preventive. The evaluation of resource management is frequently benchmarked according to pristine environmental conditions and equal social benefits. Stable supply and free access to marine resources to all community members can be regarded as appropriate measures of marine resource availability. 
Any policies required to restrict access to and limit pressure on, resources can come into existence, only after agreement of the whole community. Due to the belief that marine resources are limited and fragile, any new activities and technologies (for instance, tourism, gear and aquaculture) can only be employed if there is no significant effect on the marine environment. The use of low-cost and small-scale technology for resource utilization and restoration is ideal. Local production and consumption are favoured. Monitoring and enforcement tend to be organized on a voluntary basis, usually by the community members. Typically, resources are managed for communal benefits in small social units, such as villages and tribes. The practice of community-based MPAs can usually be found in areas that have strong traditional and communal interactions, such as the Asia-Pacific islands [73][74][75]. Ecological Successes and Social Failures-A Tale of San Salvador CBMPA This subsection provides a brief illustration of how, in San Salvador in the Philippines, interpersonal conflicts resulted in social disruption and thus undermined the commitment to communally manage an MPA. San Salvador Island is a 380 ha village island, located on the western coast of Luzon, the Philippines. The no-take San Salvador MPA was initiated in 1988 by the local community-led by Lupong Tagapangasiwa ng Kapaligiran (LTK), or the Environment Management Committee-in response to extensive illegal fishing activities and the lack of dedicated law enforcement staff. The MPA itself, which covers an area of 127 ha, was enacted in July 1989 through Masinloc Municipal Ordinance [76]. The community agreed to ban all fishing activities within the MPA. Additionally, they also set up and enforced a systemic sanction system, ranging from warnings and fines, to boat confiscation, depending on the severity of the violation. In the early phases of the MPA's development, the community received strong external support from both local and international institutions in the form of the Marine Conservation Project for San Salvador (MCPSS). During the ten years of its implementation, the ecological aspects-fish abundance, fish diversity and coral cover-seemed to be improving, yet the socio-economic aspects were weakening [54]. A political change in 1997 resulted in a conflict among MPA supporters, as the previous village head (who was still serving as the head of the municipality's warden's group for the MPA) was unwilling to coordinate enforcement efforts with his successor. Christie [54] argues that this interpersonal conflict between the two key supporters of the MPA was the primary reason for non-fulfilment of the socio-economic goals of the San Salvador MPA. But it can be argued that this personal conflict between two stakeholders was able to disrupt matters so significantly, due to the predominantly egalitarian mode in which the San Salvador MPA had been organized. As a result, it lacked, for example, the administrative rules and dispute settlement mechanisms that characterize the hierarchical way of organizing and that could have contained the conflict. In contrast, an egalitarian social setting is based on consensus among all participants and can therefore easily break down in the face of interpersonal conflict. Individualism-EMPAs Management The private sector also plays an important role in the management of coastal and marine resources. Their role in marine conservation is often captured by the term "entrepreneurial MPAs" (EMPAs) [77]. 
EMPAs are typically small in scale and supported commercially by private organizations or individuals [77,78]. Colwell has pointed out that the private sector (and dive resorts in particular) "which have a vested economic interest in promoting abundant marine life, can become the primary stewards of small-scale, commercially supported MPAs in coral reefs areas" [77] (p. 110). EMPAs arguably emerge from the inability of governments and local communities to exploit economic revenues from areas that have high potential economic value [77,79]. EMPAs have, for instance, been implemented on Tanzania's Chumbe Island [80], Malaysia's Sugud island [81], Vietnam's Hon Ong island [82] and Indonesia's Gili Trawangan island and in Pamuteran village [79]. Despite the ecological success of the EMPAs, great challenges remain to be addressed [80,82]. As Christie and White [5] have argued, private management of natural resources tends to create disputes as it is a centralized type of management, specifically for the resources that have previously been publicly owned. Thus, gaining support from local communities and governments, as well as possessing strong management and marketing strategies, are some important criteria to be fulfilled for any EMPA initiatives to be successful. EMPA management is preferred by Cultural Theory's individualistic rationality and can therefore be expected to contain more individualistic traits as compared to other types of MPAs. According to this perspective, the management style of this regime has to be adaptable, private and laissez-faire. Hence, regulations need to remain conditional and negotiable. Moreover, policies are favoured that focus on short-term economic benefits and that maximize individual choice ("user-pays" principle). Limits on access to and exploitation of, resources are viewed as undesirable, as resources are perceived to be resilient-i.e., able to recover from any exploitation. Indeed, resources are deemed a source of personal wealth and prosperity. Resource scarcity is seen as a market problem. Any activities of ecosystem restoration, tourism and aquaculture are considered desirable, as long as they are economically viable. Furthermore, the site selection of EMPAs has to be primarily market-based and profit-driven and not primarily derived from any conservation or other criteria [77,79]. MPA design should be mostly up to the individual investor, who should also have the right to decide on the level of investment in the MPA's management [80], as some of them have already served as a "de facto steward" for local marine resources [77]. Typically, public involvement in the management form is limited. EMPA coverage is decided at the most appropriate (or most efficient) scale. Additionally, monitoring and enforcement activities are only conducted as deemed necessary by individual investors. Mind the Gap-Lessons from a Hotel Managed MPA in Vietnam This subsection illustrates some potential problems of a strictly individualistic approach to an MPA. The lack of active community involvement and support may potentially create conflict and foster a law-breaking, self-serving attitude in the community, thus degrading the marine resources. Whale Island Resort (WIR) was established in 1997 on a small island (100 ha) in Vha Phong Bay, Khanh Hoa Province, in south-central Vietnam. 
As a response to the declining fish population and coral cover, due to illegal fishing activities, increasing solid waste pollution from nearby villages and growing numbers of fishing vessels, the owner of the resort took the initiative to outline an MPA for some parts of the island [82,83]. In 2001, they managed to lease a larger area (including coastal waters) from the provincial authorities for a ten year-period. This eventually became an EMPA area (named Whale Island Bay Reserve). The MPA boundaries were marked by the local coastguard using buoys that covered around 11 ha of the area. Moreover, in 2005, the owner decided to enclose another bay in the peninsula (opposite to the older MPA and the resort) to create an additional MPA of 5 ha, entitled Whale Island Bay Peninsula Reserve [82]. Svensson et al. [82] emphasize that the total cost to maintain the MPAs was relatively low, at about USD 10,000 per year, including the (sea) portion of the lease, the wardens' salaries and other maintenance costs. Although local community members acquiesced in the initial effort by the resort to establish an MPA, most of them did not agree with later processes [83]. In the first few years, fish poaching inside the MPA was a frequent occurrence. To ensure compliance, local coast guards were contacted on a regular basis to convey warnings and confiscate the fishing gear of regular offenders. Consequently, poaching activities in the first MPA saw a reduction due to frequent patrol by the wardens. However, the opposite was true for the second MPA [82]. The phenomenon of fishing the line (fishing just on the MPA borders) during night-time arose through the use of extractive fishing gear (fishing nets rather than hook and line) [82,83]. The case study shows an example of an MPA that was organized in a more individualistic way. The MPA was initiated and managed by a commercial actor, the resort. It was clear that most of the management activities were designed to ensure the benefits to that individual actor. No public engagement activities were conducted to obtain local support for the MPA. Therefore, frequent (illegal) poaching happened at times when there were no MPA wardens active. Fatalism-Mismanaged (or Paper Park) MPAs Fatalism (or despotism) is associated with opportunistic (mismanaged) MPAs. Typically, holders of fatalistic rationality use MPAs as a means to enhance their individual power, both financially and socially. The value of a resource in this type of MPA is solely interpreted in terms of individual survival, even at the expense of others. The overall condition of a resource is not of concern, as the demand for resources is perceived to be unmanageable anyway. Typically, this particular management regime resembles a fiefdom, i.e., a personalized, top-down approach in order to maintain established privileges. The style of management is conservative and intimidating, which is oriented to accommodate the needs of particular, powerful individuals. Management evaluations are not favoured and are only conducted to maintain the relative power of interested individuals. The roles of either western or local knowledge are often not prominent in this type of management. Likewise, the design process and scale of the area are to be decided secretively and usually, haphazardly. Public involvement is not highly valued and compliance is enforced. Monitoring is conducted only for the sake of maintaining power. 
A "Perfect" Park-Community Displacement at the Bijágos Archipelago, Guinea-Bissau This subsection aims to provide an illustration of an opportunistically organized MPA (or, in terms of Cultural Theory, a fatalistic or despotic one). In this setting, different stakeholders pursued their own interests at the expense of other people's needs. It resulted in corruption, local conflicts among the stakeholders, little compliance and ultimately resource degradation. The Bijágos Archipelago is located in the Atlantic Ocean off the coast of Guinea-Bissau. It consists of about 88 islands and islets, only 20 of which are populated year-round. It has a rich marine and coastal biodiversity, including mangrove forests, sea turtles, marine mammals and other protected species [84,85]. Prior to 1980, fishing was primarily conducted by local people in the Bijágos Archipelago, as a subsistence activity during the off-farming season [86]. Later on, fishing in the area also began to involve people from the mainland (i.e., the Nhominka), who had different objectives and interests compared to the indigenous islanders [87]. These mainlanders migrated seasonally and set up temporary camps on the coasts of the islands. These camps were later expanded and became permanent settlements of migrant people. In 1996, due to the richness of its resources and its unique local culture, the islands were designated as a UNESCO Biosphere Reserve. Following this, in 2000, two (marine) national parks, the National Marine Park of João Vieira Poilão (NMP-JVP) and the Orango National Park (PNO), were established [87]. Only in 2014, the International Union for Conservation of Nature (IUCN) and the governmental agencies of Guinea-Bissau created an Institute for Biodiversity and Protected Area (IBAP), whose function is to manage the parks. According to Cross [87] (p. 695), "the protection agenda coincided with a move to rein in small-scale fishing," as fishing activities, particularly from the migrant people, had become identified as a potential threat to other marine megafauna. Furthermore, armed parks' officials forcibly chased out the migrant fishers and destroyed their settlements within the parks' areas, as the park rules came into force. In order to cope with the eviction, the migrant fishers moved to the nearby areas, where the local inhabitants lived, so as to continue their livelihood. However, their movement faced resistance from the local inhabitants and thus horizontal conflicts started to emerge [87]. Following the conflicts, the state government finally tried to solve the situation by monitoring the migrant fishing camps and their fishing activities. The state government enforced a strict regulation involving different type of permits (identity papers, boat licenses, fishing permits, etc.) Due to these excessive rules, the migrants have started to employ various law-breaking strategies. Among the local inhabitants, these actions have resulted in a split between those supporting and those rejecting, the state intervention. In addition local communities have complained about the corrupt and fraudulent behaviour of government officials and migrant fishers [87]. From a Cultural Theory perspective, this MPA was predominantly organized in a fatalistic manner, as most (if not all) actors tried to get ahead at the expense of others. 
The lack of active participation and equal treatment of the public (i.e., the egalitarian way of organizing), the absence of effectively enforced laws (provided by the hierarchical way of organizing) and a dearth of clearly established, individual ownership rights (part and parcel of the individualistic way or organizing) were at the heart of the conflicts that prevailed in the Bijágos Archipelago. The above case studies illustrate Cultural Theory's proposition that monolithically organized MPAs will fail to achieve environmental sustainability regardless of the form that this uniformity has taken (centrally managed, community-based or entrepreneurial). In the next section, we argue that the approach also has implications for our understanding of when and why a more hybrid version of MPAs can succeed. Co-Management as a Messy Management Regime One of the most lauded approaches in governing natural resources is collaborative management or co-management [88]. It has emerged as an alternative for the continued divergence between the top-down, state-centric style of MPA management and the bottom-up community-based style [89][90][91]. Co-management is generally defined as a management approach that involves the sharing of power and responsibility, usually between the government and the community, in the form of an equitable partnership to achieve goals in managing a resource or an area [88,92]. It promotes "formal" collaboration and encourages positive communication between the resource users and the government. Carlsson and Berkes [92] summarize the characteristics underpinning co-management concepts and definitions in the literatures as follows: (1) typically associated with the natural resource management concept; (2) a form of partnership between the government, the public and the private actors; and (3) an ever changing, dynamic process. The earliest use of the term "co-management" can be traced back to salmon fisheries management in the state of Washington in the late 1970s [93]. In its early development, scholars had only seen it as a simple partnership arrangement between the government and the local resource users in managing resources that had emerged as an alternative to bridge the gap between a purely state-centric, centralized type of management and a purely community based type of management [89][90][91][92]. However, recent developments in the literature show that co-management has become a more dynamic and complex arrangement and covers multidimensional aspects of the management process, specifically in its role as a bridging institution, in knowledge generation, in social learning and in adaptive management [88]. In terms of Cultural Theory, a co-management approach can be the potential "messy" institution that helps to combine the four basic governance regimes distinguished in the approach. In a co-management setting, hierarchical, egalitarian, individualistic and fatalistic ways of perceiving and organizing are being mixed and combined. For instance, hierarchical actors (in some cases, the government) may see co-management as an opportunity to enhance initial acceptance from the wider community, to improve compliance and to ensure diversity in innovations to finance the MPAs. Individualistic actors may perceive co-management as representing new opportunities for expanding their businesses and achieving more profit, for example in tourism or sectors of aquaculture. 
The egalitarian-minded (typically, the local community) could feel that they finally have a significant role in deciding what to do in order to reduce inequality according to their local wisdom and needs. In the co-management MPA setting, divergent needs of multiple stakeholders are therefore accommodated and conflicting interests may possibly be minimized. Multiple uses through zoning of MPAs can then become viable (e.g., different zones for traditional use, tourism, reserves, education and fisheries). Typically, the goals of the MPAs are not solely to protect resources but also to improve local capacity and livelihoods. On these grounds, Cultural Theory predicts that, as a messy MPA, co-management should enable stakeholders to generate clumsy solutions more successfully than approaches that are closer to monolithic MPAs. Typically, this is because clumsy solutions are dynamic solutions based on the recognition that policy efforts need to be as pluralist as the current environmental and social problems [11,12]. Figure 2 provides an illustration of how clumsy solutions emerge from clashing, combining and accommodating four different MPA regimes. A Dynamic Collaborative Governance-Tubbataha Reefs Natural Park (TRNP) In this sub-section, we highlight how a messy MPA regime, the Tubbataha Reefs Natural Park (TRNP), enabled the emergence of a more effective and widely endorsed, indeed clumsy, management solution. Initiated as a strictly hierarchical approach, the management process led to major conflicts among stakeholders. However, after a significant transformation in late 1990s, in which all of the conflicting stakeholders were consulted and involved, the MPA became more acceptable to the public. Nestled in the middle of the Sulu Sea, on the southwest corner of the Philippines, Tubbataha reefs constitute the largest coral reef atoll in the country. It was the richness of the natural resources in the area that made scientists, governmental agencies, NGOs, private industries and local people take a keen interest in participating in the area's management. Since the beginning of the 1980s, the area has been an important fishing zone and scuba diving destination for both locals and foreigners. However, as early as in 1989, the reef had sustained heavy damages due to illegal activities such as fish bombing and poisoning, wildlife collections and dropping anchors from boats [94,95]. Addressing the complex issues in Tubbataha reefs, in August 1988, a Presidential Decree created the Tubbataha Reefs Natural Park (TRNP). The TRNP covered about 97.030 ha and was the country's first national marine park. The vision of the TRNP was to effectively conserve the area to maintain ecological integrity, to contribute to the equitable distribution of benefits to all stakeholders and to sustain socio-economic development for present and future generations. The Tubbataha Protected Area Management Board (TPAMB)-consisting of 20 member institutions-was responsible for devising future policies, while daily operations were to be handled by the Tubbataha Management Office. In total, seven categories of stakeholders have been involved in the TRNP: national government agencies, local government units, regional governmental units at provincial level, fishing operators, private industries (dive tour operators), NGOs, research institutions and non-user people [96]. These stakeholders have employed a variety of ways of perceiving, justifying and solving the problems at hand. 
We would argue that the prime reason for the success of the TRNP has been the ability to incorporate all these competing rationalities into its management regime. The accommodation of all competing rationalities was reflected in both its management plan and the structure of the TRNP's governing body. A Brief History of TRNP The history of the TRNP management regime is one of a very complex, yet adaptive process. In 1989-1990, the first draft of the management plans was released based on limited information and sporadic monitoring and enforcement were conducted [97]. In the period from 1991 to 1994, research expeditions were conducted, the management plan was re-drafted, the park became a UNESCO world heritage site and the Navy was mandated to guard it. In the next three years, multiple local, regional and international institutions worked together to implement a management plan, under the Presidential Task Force, chaired by the secretary of the Department of Environment and Natural Resources (DENR). The TPAMB was formed from 1998 to 1999, as a result of a series of workshops involving multiple stakeholders. Soon after, a park manager was appointed and the PAMB became operational. An entrance fee to the park was initiated in the same year, based on a study by WWF and the Coastal Resource Management Project (CRMP) analysing the willingness of consumers to pay for the service. From 2000 to 2002, the Tubbataha Management Office was established, regular monitoring and research were conducted and livelihood programs were launched, including the implementation of entrance fees and a permit. The management plan was again revised in 2004 to incorporate a park effectiveness assessment, as well as a monitoring and evaluation program, while the park's boundaries were expanded [94,95,97]. In 2011, a new update of the management plan that aimed to balance the legal, economic and participative incentives was implemented and approved by the TPAMB [98]. Competing Rationalities Hierarchical actors in this MPA, including the Department of Environment and Natural Resources and other central government agencies, saw the overt exercise of the freedom of the community to act beyond their suppositions as a problem. For instance, fishermen can fish as much as they want and even use illegal tools; seaweed operators can claim the area for their farms, dive operators can compete to bring as many tourists as possible to the area. Hierarchical stakeholders perceived that those activities could act as a potential threat to the integration of the community as a whole and thus, structured centralized management was required. This was fortified by their view of nature, according to which ecosystems are robust but only up to a point. Therefore, in their view, establishing limits to resource utilization was urgently needed to ensure the stability of the ecosystem. Egalitarian stakeholders and especially local communities (It may be surmised that NGOs are typically more egalitarian than other actors. But according to Cultural Theory [21], NGOs can take on many shapes-from highly egalitarian (e.g., Earth First!), via rather hierarchical (such as The Sierra Club) and individualistic (The Adam Smith Institute), to fatalistic-and any combination thereof. 
As such, NGOs are not necessarily different from any other organizations (although Cultural Theory expects that, overall, NGOs tend to include more egalitarian features than other types of actors)), saw the extractive use of resources and the inequality of the early management processes of the TRNP as key problems. Local communities were not involved in early planning but they faced direct consequences as soon as the ban on any extractive activities was implemented. Egalitarian actors demanded equal representation in decision-making, rule enforcement, as well as a share of the benefits as the solution to the problem of the Tubbataha reef conservation [95]. Individualistic actors, such as fishing and tourism operators, believe in the resilience of ecosystems and therefore they did not acknowledge that the utilization of resources was causing many problems. They perceived the problem as arising from government intervention in the private management of resources in the TRNP. Fishing operators did not agree with the ban on extractive activities since it limits their fishing grounds and consequently impacts their income. Meanwhile, dive tour operators saw this as a potential opportunity to keep their business running without additional investments to ensure the reefs remained healthy. Fatalistic actors (some overseas poachers/fishers and local government units) usually do not have an opinion on the possible environmental consequences of intensified resource extraction. Rather, the issue of resource extraction is seen as an opportunity to maintain, or strengthen, their power. Hence, in this particular case, fatalists perceived the TRNP as a problem. With its implementation, some local government units lost the power that they used to have, as the management jurisdiction of the area fell upon different people and institutions, while the access for overseas fishers to catch fish and other commercial species became limited. The Emergence of a Clumsy Solution The year 1999 saw a turning point in the ineffective, early management strategies with the formation of the TPAMB, a multi-stakeholder body that directed the management of the TRNP, representing various interests in the Tubbataha reefs area. Following the establishment of the TPAMB, the management plan of the TRNP, based on the stakeholder agreements from a multi-stakeholder workshop, was also endorsed. Despite intense discussions and contradictory exchanges during the workshop, all participants accepted the end result. An unexpected position was taken by the representatives of fishing operators during the workshop. They agreed to support the MPA area, after understanding the potential "spill-over" benefits from the no-take area to the adjacent fisheries. The summary of the agreement reads as follows [95] (p. 57): (Note that (1) Cagayancillo is the nearest community settlement to the TRNP; it is within Palawan Province of the Philippines; (2) The bill was finally enacted in 2009 (known as TRNP Act of 2009)). 1. Cagayancillo fishers to respect the no-take zone area. 2. Commercial and Palawan fishers to respect the no-take zone. 3. Divers and dive operators to pay user fees. 5. PCSD to draft bill and authorize pilot collection of user fees. 6. PAMB to establish a Tubbataha Management Office. 7. Phil Navy and Coast Guard to establish and staff a ranger station. Since then, the management plan has been reviewed and updated a number of times but the features of the agreements remain, more or less, the same. 
Interestingly, the management plan was acceptable and, when implemented, beneficial to all stakeholders. Hierarchical actors have benefitted from improved compliance, ecosystem stability and control of the community. For egalitarian stakeholders, the benefits have mostly been the improvement of local participation in management and the amelioration of local livelihoods via the sharing of user fees [94]. The benefits to individualistic stakeholders have come from ensuring the continuation of their business operations and the spill-over effects. Although it is difficult to pinpoint the benefits to fatalists, in this case, with the implementation of the TRNP management plan, they could start to think about the strategies they could implement to get back the power they used to have. Concluding Remarks This paper has illustrated that Cultural Theory provides a helpful theoretical underpinning with which to understand the current discourses about how to structure and implement MPA governance regimes. Each of Cultural Theory's rationalities adequately captures a well-known type of MPA governance. Top-down, centralized management is based on its hierarchical rationality, bottom-up community-based management is informed by its egalitarian rationality, entrepreneurial management represents the individualistic rationality, while paper parks often embody the fatalistic way of life. The recent approach of co-management is a messy institution as it combines all four ways of organizing and perceiving set out in Cultural Theory. As such, it typically involves negotiations between various stakeholders. As explained above, there are three contributions that Cultural Theory has, in principle, to offer to the study and practice of MPA governance. The first of these concerns the theory's ability to function as a simple, yet comprehensive, heuristic model for understanding different behaviours and perceptions of individual and collective actors in a socio-ecological system. For instance, this helps to identify fatalistic actors and ways of organizing, which are often overlooked but which are nevertheless important. This feature can help to strengthen the commonly used technique of stakeholder analysis. As Billgren & Holmén [99] have argued, stakeholder analyses often identify the key stakeholders without understanding why these behave the way they do. The second contribution lies in its hypothesis that clumsy or polyrational solutions can protect ecosystems in an effective and widely accepted manner. This hypothesis has received wide empirical support outside the study of MPA governance and should therefore also be tested within it. The third contribution pertains to the question of how to organize MPA governance. While recognizing a suitable management approach, namely co-management, the existing literature has typically only recommended governmental, community-based and private management regimes as essential components of co-management. Cultural Theory deepens this analysis by highlighting a fourth type of regime (the fatalistic one), by spelling out many of the perceptions and beliefs accompanying these different ways of organizing and by elucidating why mixed regime types can function better. We do not claim that all MPAs have to be co-managed in the same manner but we assert the importance of accounting for other management regime attributes in order to achieve sustainable MPAs in a collaborative setting.
Moreover, depending on the peculiarities of the social interactions in a particular MPA, any of the four management regimes may take a (provisional) dominant role in a mixed, collaborative, regime. In order to corroborate the arguments made in this paper, it would be important to undertake systematic empirical studies of MPAs around the world. This would entail classifying existing MPAs according to Cultural Theory's categories with a view to assessing whether MPAs that follow a single way of organizing are failing and whether those that combine ways of organizing are succeeding. We hope that this research will serve as a good starting point for such future endeavours.
Social inequality and exposure to magnetic fields in the metropolitan region of São Paulo, Southeastern Brazil

OBJECTIVE: To estimate the prevalence of exposure to magnetic fields generated by transmission lines (TL) and characterize the exposed population. METHODS: Information about TL in the metropolitan region of São Paulo, Southeastern Brazil, was provided by the electricity companies and mapped out using geographic information system (GIS). Demographic and socioeconomic data were obtained from the 2000 Census and added to the GIS in another layer. Households and their inhabitants that were located at a distance from the TL that was sufficient to generate a magnetic field ≥ 0.3 μT (microteslas) were deemed to be exposed. The prevalence was estimated according to the area of the corridors of exposure along the TL. Two approaches were used to delimit the corridor width: one consisted of widths that were predefined by the TL voltage, and the other consisted of calculation of the magnetic field. The socioeconomic information on the exposed and non-exposed populations was compared by applying the two-proportion test (α = 5%). RESULTS: In the corridors with predefined widths, the prevalence of exposure was 2.4%, and in the calculated corridors, the prevalence was 1.4%. Both methods indicated higher prevalence of exposure among the younger population, and among individuals with lower education and income levels (p < 0.001). CONCLUSIONS: The prevalence of exposure to magnetic fields generated by TL in the metropolitan region of São Paulo was lower than what has been observed in other countries. The results indicate inequality in the exposure to magnetic fields in this urban area, with greater risk to vulnerable populations such as children and socioeconomically less favored individuals. DESCRIPTORS: Electromagnetic Fields, adverse effects. Environmental Exposure. Risk Factors. Socioeconomic Factors. Health Inequalities. Cross-Sectional Studies.

INTRODUCTION Society lives with risks created by its own organizational system. So-called post-modernity4 has brought countless benefits for mankind's comfort and wellbeing. On the other hand, this has created threats such as the results from emissions of hazardous waste and effluents, which contaminate the air, ground and water and may have consequences for human health.11 Among the risks generated through technological advances, electromagnetic pollution results from use of electricity and domestic electrical apparatus, such as microwave ovens, video monitors and cell phones.a Electromagnetic fields vary greatly with regard to frequency, measured in Hertz (Hz). Electricity produces electromagnetic fields of extremely low frequency (between 50 and 60 Hz). The greatest concern is in relation to magnetic fields (measured in microteslas, μT), which may cross through common construction materials, while electrical fields are attenuated by most of these materials.12 Controversy still exists with regard to the effects on health from exposure to magnetic fields, given that they induce weak electric currents and insufficient energy to directly damage the DNA and trigger the process of carcinogenesis.5,10 At the end of the 1970s, the first epidemiological studies evaluating residential exposure to magnetic fields and the risks of leukemia, cancer and other health outcomes were published.7,10
These studies have been carried out with various designs (case-control, 1,2,6,8,9 cohort 13,14 and ecological 15 studies) and a variety of exposure assessment methods.

One of the main problems faced in characterizing the effects of magnetic fields on health in epidemiological studies is the methodological difficulty of quantifying the exposure. Many studies have estimated the exposure and its effects by means of marked-out corridors and distances from homes to the transmission line, 1,2,6,8,13-15 while making the assumption that greater proximity of homes to transmission lines gives rise to greater exposure. The magnetic field in the exposure corridors along the lines is proportional to the current. Therefore, lines of different voltages and loads should be categorized by corridors of exposure in which the width varies according to the intensity and other characteristics, such as the cable geometry, pylon height, etc. Studies have reported corridor widths of 500 m, such that they would include exposures ≥ 0.05 μT, 13,14 and between 40 m for 33 kV transmission lines and 300 m for 420 kV lines 8 such that they would define exposures of ≥ 0.1 μT; 9 or corridors of 100 m on each side of transmission lines, for estimated exposures ≥ 0.2 μT, 14 or along 110 kV and 380 kV lines. 1

Several studies have presented statistically significant results correlating exposure to magnetic fields and development of outcomes such as cancer and leukemia. Geographic information systems (GIS) have also been used to mark out corridors and determine the distances of homes from transmission lines, as an assessment of exposure. The present study had the objectives of estimating the prevalence of exposure to magnetic fields generated by transmission lines and characterizing the population exposed.

METHODS
This was a cross-sectional study developed as part of the EMF-SP project, b which was conducted in the metropolitan region of São Paulo, Southeastern Brazil. Currently, the metropolitan region has a population of 19,697,337 inhabitants and a population density of approximately 2,479.6 inhabitants/km². c

Data on the transmission lines that cross this area were furnished by the electricity companies that participated in the project, and the lines were mapped using MapInfo GIS software (Professional version 8.5; MapInfo Corporation, New York, NY, USA). The IBGE base map of census tracts for the municipalities of the metropolitan region of São Paulo, which contains information relating to household, population, age and socioeconomic characteristics and the schooling and income levels of heads of households, was added to the GIS in another layer, along with the information on the transmission lines and exposure corridors.

To evaluate exposure, households (and their inhabitants) that were within the limits of a corridor along a transmission line with an estimated magnetic field ≥ 0.3 μT were defined as exposed. The width of these exposure corridors was determined based on two methods:

Corridors of predefined width for each transmission line, based on epidemiological studies that used similar methods for evaluating exposure. 8,9 The width of the corridors along the transmission lines varied according to the voltage (88 kV line, 60 m; 138 kV line, 100 m; 230 kV line, 150 m; 345 kV line, 200 m; and ≥ 440 kV line, 250 m), in order to correspond to a magnetic field ≥ 0.3 μT. If there were several transmission lines within the same area served, the width of the corridor was based on the line with the highest voltage.
Corridors calculated such that the mean exposure to magnetic fields would be ≥ 0.3 μT. The calculations were performed by the Instituto de Pesquisas Tecnológicas (Institute of Technological Research), using technical information on each of the lines crossing the study region, such as the mean annual load, voltage, positioning and phase distance, among others. The width of these corridors varied according to the intensity of the magnetic field and the characteristics of the respective transmission lines, with widths between 20 m and 200 m along each line.

The cutoff point of 0.3 μT to characterize exposure was adopted based on the meta-analysis of Greenland et al. 3 In this, from the results of 12 epidemiological studies, the odds ratio for childhood leukemia and exposure ≥ 0.3 μT was estimated to be 1.7 (95% CI: 1.2; 2.3), compared with exposure ≤ 0.1 μT (reference group).

In both methods, the GIS software summed the population and household values of the census tracts within the areas of interest, in the case of tracts completely within the corridors. In addition, through proportional sums, the software estimated values relating to census tracts that were partially contained within each corridor. The population values were proportionally corrected for the year 2008 through information obtained from the website of the SEADE Foundation. c

For the socioeconomic analysis, indicators for the head of household that would represent the extremes of schooling levels (up to five years of schooling and ≥ 13 years of schooling) and income levels (up to two minimum monthly salaries and ≥ 20 minimum monthly salaries) were used. These would show greater or lesser socioeconomic vulnerability. The proportions of these income and schooling categories between exposed and non-exposed individuals were compared using a proportions test (α = 5%).

RESULTS
The metropolitan region of São Paulo is crossed by a network of 2,571 km of aerial transmission lines (Figure 1), among which 88 kV lines are the most frequent type (879.2 km; 34.2% of the total), followed by 345 kV lines (26.1%) (Table 1).

The corridors of predefined width wholly or partially included 2,568 census tracts with a total of 474,011 inhabitants. The prevalence of exposure in these corridors was 2.4%. The calculated corridors included 2,316 census tracts with a total of 269,924 inhabitants and a prevalence of exposure of 1.4%.

As shown by Table 2, around half of the individuals within the corridors, using both methods, were under 24 years of age. On the other hand, the proportion of elderly individuals (≥ 70 years) living in the areas of the corridors was lower than the proportions of other age groups. The prevalence gradually diminished towards older age groups, with lower values found from the age of 40 years onwards. The group ≥ 80 years of age presented the lowest prevalence of exposure.
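The prevalence figures above follow directly from the corridor populations and the regional population reported in this study, and the schooling/income comparisons rest on a standard two-proportion test. The Python sketch below is illustrative only: it is not the project's GIS or Minitab/statistics code, it assumes the quoted corridor widths are total widths centred on the line (with voltages between the listed classes mapped to the next class up), and the counts passed to the test are invented purely to show the arithmetic.

```python
# Illustrative sketch: corridor classification, prevalence arithmetic, and a
# two-proportion z-test of the kind used for the schooling/income comparisons (alpha = 5%).
import math

# Predefined corridor widths quoted in the Methods (kV -> total width in metres);
# assumption: the quoted width is the full corridor centred on the line.
CORRIDOR_WIDTH_M = {88: 60, 138: 100, 230: 150, 345: 200, 440: 250}

def is_exposed(distance_to_line_m, voltage_kv):
    """Exposed if the household lies within half the corridor width of the
    highest-voltage line serving the area (>= 440 kV uses the widest corridor)."""
    key = 440 if voltage_kv >= 440 else min(kv for kv in CORRIDOR_WIDTH_M if kv >= voltage_kv)
    return distance_to_line_m <= CORRIDOR_WIDTH_M[key] / 2

# Prevalence check against the figures reported in the Results.
population = 19_697_337                        # metropolitan region of São Paulo
print(round(100 * 474_011 / population, 1))    # 2.4 (%): predefined-width corridors
print(round(100 * 269_924 / population, 1))    # 1.4 (%): calculated corridors

def two_proportion_z(x1, n1, x2, n2):
    """Two-sided z-test for H0: p1 == p2 (normal approximation)."""
    p1, p2 = x1 / n1, x2 / n2
    pooled = (x1 + x2) / (n1 + n2)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    z = (p1 - p2) / se
    return z, math.erfc(abs(z) / math.sqrt(2))  # (z statistic, two-sided p-value)

# The raw counts behind the schooling/income comparison are not reported in the text,
# so these numbers are invented purely to show the call.
print(two_proportion_z(x1=5_000, n1=100_000, x2=40_000, n2=1_000_000))
print(is_exposed(45, 138))  # True: 45 m from a 138 kV line is inside the 100 m corridor
```

In the study itself this classification was applied to census-tract polygons in MapInfo, with partially covered tracts counted by proportional sums, rather than to individual household coordinates.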
The prevalence of exposure was highest in the group of heads of households with the lowest schooling levels, and the prevalence was lower in the groups with higher schooling levels (Figure 2). The analysis on income levels among the heads of households showed results that resembled those relating to schooling levels and followed the trend of increasing prevalence in groups with lower income. Households whose heads had the lowest incomes were predominantly in corridors close to transmission lines. This prevalence was highest among groups of heads of households without any income or with not more than three minimum monthly salaries. On the other hand, the lowest prevalences were among households whose heads had incomes greater than ten minimum monthly salaries (Figure 3).

By comparing income and schooling levels together between the exposed and unexposed populations, it was observed that among the exposed individuals, there was a greater proportion of heads of households with not more than five years of schooling and monthly income of not more than two minimum salaries. On the other hand, among the individuals who were not exposed, the proportion of heads of households with 13 years of schooling or over and monthly income greater than or equal to 20 minimum salaries was greater. All these differences between exposed and unexposed individuals were statistically significant in relation to corridors delimited using both methods (p < 0.001).

DISCUSSION
The prevalence of exposure to magnetic fields generated by transmission lines ranged from 1.4% to 2.4%, depending on the method used to define the exposure corridors. The highest prevalence rates occurred among the child and adolescent populations (up to 18 years of age), and the lowest prevalence was among the elderly population over the age of 70 years. There was greater exposure among populations living in situations of socioeconomic vulnerability, given that there was greater prevalence among populations with lower schooling and income levels. However, the cross-sectional design of the present study only indicates these inequalities: it does not allow the mechanisms to be defined.

Comparison between the results from the present study and the results in the literature, regarding the prevalence of exposure to magnetic fields shown by the two methods for delimiting corridors, showed that the prevalence values from the present study were lower than those found in other countries. However, the other studies used corridors of widths that differed from those in the present study, thereby impairing comparisons.

To evaluate occurrences of breast cancer, Kliukiene et al 8 estimated corridors of widths ranging from 40 m for 33 kV transmission lines to 300 m for 420 kV lines, such that the corridors included exposures ≥ 0.05 μT. They found that the prevalence of exposure was 5% among women in Norway. The corridors in the study by Olsen et al 9 also varied according to the transmission line voltage, such that they defined exposures ≥ 0.1 μT, in order to investigate childhood cancer. Baumgardt-Elms et al 1 used corridors of 100 m in width along 110 kV and 380 kV lines in Hamburg, Germany, to evaluate the risk of testicular cancer. The exposure prevalence found was 6.9% among the cases and 5.8% among the controls (OR = 1.3; 95% CI: 0.56; 2.8). 1
In a study on childhood cancer in Finland, Verkasalo et al 13 used corridors of 500 m along transmission lines, making the assumption that within this width there would already be a magnetic field ≥ 0.01 μT. In another study to evaluate depression, 14 using a similar method, a statistically significant risk for the outcome of exposure was found. Draper et al 2, in the United Kingdom, used distances ≤ 600 m from 275 and 400 kV transmission lines to evaluate exposure and found a statistically significant association, such that there was greater risk of childhood leukemia and exposure prevalence of 4% among children ≤ 14 years. As an aid in evaluating residential exposure in Japan, Kabuto et al 6 used the distances of homes from transmission lines of 22 to 500 kV, such that people living not more than 99 m from a line were considered to be exposed, while ≥ 100 m was taken to be the reference group. At distances of up to 50 m, the results were statistically significant, with an increased risk of acute lymphoblastic leukemia among children.

It is possible that the lower prevalence of exposure found in our study may have been due to the fact that many transmission lines in São Paulo go through regions with lower densities of homes: areas with other uses such as industrial zones, commercial zones, rural areas, forested areas (Serra da Cantareira, for example) or riverbanks (such as along the Pinheiros and Tietê rivers).

The low prevalence values may have two interpretations. On the one hand, this may be a positive factor, given that these fields have harmful effects on health. 4-6 Furthermore, as a negative result, the greater prevalence among the population of lower schooling and income levels shows that magnetic fields are yet another burden on populations in situations of greater socioeconomic vulnerability.

Figure 1. Transmission lines in the Metropolitan Region of São Paulo, Southeastern Brazil, 2008.
Figure 2. Percentage of heads of households living in exposure corridors, according to schooling level. Metropolitan Region of São Paulo, Southeastern Brazil, 2008.
Figure 3. Percentage of heads of households living in the exposure corridors, according to monthly income. Metropolitan region of São Paulo, Southeastern Brazil, 2008.
Table 1. Length of electricity transmission lines. Metropolitan Region of São Paulo, Southeastern Brazil, 2008.
Table 2. Distribution of the population living in the metropolitan region of São Paulo according to corridors of exposure to magnetic fields generated by transmission lines. Metropolitan Region of São Paulo, Southeastern Brazil, 2008. a Results corrected for the year 2008.
b The EMF-SP project was coordinated by the Brazilian Association for Electromagnetic Compatibility and funded by the National Electricity Agency (Project No. 0390-041/2004), with participation by electricity transmission and distribution companies in the state of São Paulo.
c Fundação Sistema Estadual de Análise de Dados. Município de São Paulo. São Paulo; 2008 [cited 2008 Sep 29]. Available from: http://www.seade.sp.gov.br/produtos/msp/index.php
2017-06-09T21:30:38.415Z
2010-08-01T00:00:00.000
{ "year": 2010, "sha1": "4e218742f939c49010a5ca1368c7910f71c3bb10", "oa_license": "CCBY", "oa_url": "https://www.scielo.br/j/rsp/a/vjVfPr4ftPfcWwnv65VN4td/?format=pdf&lang=pt", "oa_status": "GOLD", "pdf_src": "ScienceParseMerged", "pdf_hash": "4e218742f939c49010a5ca1368c7910f71c3bb10", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Geography", "Medicine" ] }
14894481
pes2o/s2orc
v3-fos-license
The use of nutritional supplements in dressage and eventing horses The aim of the study was to determine which types of nutritional supplements were used in dressage and eventing horses, and the reasons that owners used supplements. An online questionnaire was distributed through British Eventing and Dressage websites, to collect data on demographics of owners and their horses, supplements used and their opinion on health and performance problems. Data were evaluated using descriptive analysis, Sign and Fisher's exact tests for quantitative data, and categorisation of qualitative data. In total, 599 responses met the inclusion criteria (441 dressage and 158 eventing horse owners). Participants had 26.4 (3–60) (mean (range)) years of riding experience, owned 1.2 (0–10) horses and used 2 (0–12) supplements in their highest performing horse. The main health and performance issues identified for dressage were ‘energy/behaviour’, ‘lameness’ and ‘back and muscle problems’. The main issues for eventing were ‘stamina and fitness levels’,’ lameness’ and ‘energy/behaviour’. The main reasons for using supplements in their highest performing horse were ‘joints and mobility’, and ‘behaviour’ for dressage, and ‘electrolytes’, and ‘joints and mobility’ for eventing. Lameness and behavioural problems were significant concerns within both disciplines. There was incongruence between owners’ opinions of problems within their discipline and their reasons for using supplements. INTRODUCTION There are a large range of equine nutritional supplements currently available in the UK, with 171,400 purchases annually. In 2011, the market was worth £34 million, and the mean annual supplement spend/person was £198 (BETA 2011). There are a number of factors that may determine a horse owner's choice of nutritional supplement for their horse, including discipline-specific health and performance problems, and any pre-existing problems or injuries in an individual animal (Williams and Burk 2010). There is a lack of published studies on why horse owners use different supplements, and currently, the scientific evidence on the efficacy of supplements in the prevention and management of health and performance problems is limited (Noble and others 2008, Vandeweerd and others 2012, Talbot and others 2013. The majority of research relates to osteoarthritis; however, a systematic review of the effect of nutraceuticals on clinical signs of pain or lameness in 2012 concluded that there was a low strength of evidence for efficacy in the horse (Vandeweerd and others 2012). Research into the use of nutritional supplements in people has shown that demographics, lifestyle characteristics and any concurrent medical conditions affect individuals' decisions to use nutritional supplements (Gunther andothers 2004, Jasti andothers 2003). There is currently no data on how horse owners choose nutritional supplements for their horse, nor is there any information on their opinions of health and performance issues within different equine competitive disciplines. The aims of this study were to determine which types of nutritional supplements horse owners/riders use in their horses, and how their choice of supplements is related to their opinion of health and performance problems in different disciplines. The two study populations chosen were dressage horses and eventing horses as there are differences in the athletic demands and veterinary problems associated with these two disciplines (Murray and others 2010b). 
The study objectives were to: ▸ describe the demographics and experience of owners/riders of dressage and eventing horses who participated in the survey; ▸ evaluate owners'/riders' opinions on the most important health and performance issues within the disciplines of dressage and eventing; ▸ determine which types of supplements horse owners/riders were using and their reasons for doing this. MATERIALS AND METHODS An online questionnaire was developed to gather information from owners and riders competing in dressage and eventing. Nutritional supplements were described as 'nutritional supplements commonly used to improve performance or prevent or treat health problems'. The questionnaire was divided into four sections. Section 1 was general information about the participant (including age, sex, years of riding experience, discipline and level at which they were competing). Section 2 was the participant's opinion on the health and performance problems within their discipline, and information on the horses that they own/ride, and the nutritional supplements they feed. Section 3 was specifically about any health and performance problems in their highest performing competition horse, any supplements used in this horse, the reasons for using this and their opinion of the supplements used. Section 4 asked about the sources of information used for choosing nutritional supplements and the participant's opinion on these different sources (see online supplementary item 1). This section relating to the sources of information is described in a separate report (Gemmill and others 2016). The questionnaire included closed multiple-choice questions, single-answer and multiple-answer options, and open free-text questions (www.surveymonkey.com). Categories of nutritional supplements were developed based on the different supplements that were currently commercially available. Links to the questionnaire were distributed primarily via British Eventing, British Dressage, Dodson and Horrell Ltd, and University of Nottingham websites, and there were secondary distribution through press releases and social media to other websites. Inclusion criteria for analysis were horse owners and/ or riders who were competing in dressage or eventing. Responses from people competing in other disciplines or in both of these disciplines were excluded from analysis. Data that met the inclusion criteria were downloaded into Microsoft Excel (Microsoft Office Suite 2007, Microsoft). Data analysis included descriptive analysis for quantitative data (mean, mode, range and percentages). For categorical data, graphs were used to analyse responses and assess normality of data. Mean or mode values and ranges, and percentages of distributions across each discipline were recorded in tables for analysis of numerical responses. Fisher's exact test (Minitab 16.1.0, Minitab) was used to compare the health and performance problems between the two disciplines of dressage and eventing. The Sign test (Minitab 16.1.0, Minitab) was used to compare dressage and event riders' views on nutritional supplements, and to compare differences in the perceived importance of supplements. Results were considered statistically significant at P<0.05. Qualitative data were analysed by categorising responses and ranking frequency of occurrence. In total, 820 horse owners/riders participated in the questionnaire, and the completion rate for the survey was 80 per cent (656/820). 
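The between-discipline comparisons described in the analysis plan above used Fisher's exact test in Minitab. The short Python example below is hypothetical and only illustrates that kind of 2 x 2 comparison: the group sizes (441 dressage, 158 eventing respondents) come from the survey, but the cell counts are invented and are not the study's data.

```python
# Hypothetical example of a between-discipline comparison (Fisher's exact test on a
# 2 x 2 table of discipline vs. whether a problem category was mentioned).
from scipy.stats import fisher_exact

#        mentioned electrolytes, did not mention
table = [[20, 441 - 20],    # dressage respondents (n = 441)
         [18, 158 - 18]]    # eventing respondents (n = 158)

odds_ratio, p_value = fisher_exact(table, alternative="two-sided")
print(f"OR = {odds_ratio:.2f}, two-sided p = {p_value:.4f}")
```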
In total, 599 participants met the inclusion criteria, competing in either dressage (441 respondents) or eventing (158 respondents). The number of responses for each category (x) compared with the total number of responses for that question (y) is given for each section (x/y). For both dressage and eventing categories, the majority of participants were female and were in the 22-34-year-old age category (Table 1). Participants had a wide range of riding experience, with the number of years that participants had been riding ranging from 5 to 60 years for dressage and 3 to 50 years for eventing (Table 1). The majority of riders competed at novice affiliated level in both disciplines (36.8 per cent of participants for dressage and 49.7 per cent for eventing), and the competition category that had the fewest participants was advanced affiliated for both disciplines (16.5 per cent of participants for dressage and 8.8 per cent for eventing) (Table 1). The mean number of horses that each participant competed was one horse for dressage owners and two horses for eventing (Table 2). When asked about their highest performing or 'top' competition horse, the mean age of this horse was 11 years and the most common breed was thoroughbred or thoroughbred cross for both dressage and eventing (Table 2). The highest performing competition horse was competing in a mean of 12 events per year for dressage and 17 events per year for eventing, and was fed a mean of two supplements for both groups (range 0-6 supplements for dressage and 0-12 supplements for eventing) (Table 2). The main reasons participants identified for giving supplements to dressage and eventing horses were to treat a specific problem (Table 2). Also, 29 out of 542 respondents to this question stated that they did not use supplements (20 dressage owners and 9 eventing horse owners) (Table 2).

Free-text responses by participants describing what they considered to be the most important health and performance problems within their discipline were reviewed and categorised. The most frequently identified categories of problems within the discipline of dressage were (1) energy levels and behavioural issues (42.2 per cent), (2) lameness (37.1 per cent, including joint problems, arthritis, tendon, ligament and soft tissue injuries) and (3) back and muscle problems (15.4 per cent). Other problems identified less frequently included gastrointestinal problems (including colic, digestive problems and gastric ulcers), hoof condition and hoof balance, respiratory problems, dehydration and electrolyte balance, and stamina.

Participants were then asked for the reasons that they fed nutritional supplements, choosing three ranked reasons from multiple predefined categories. The most frequently identified reasons for feeding supplements to dressage horses were (1) joints and mobility (78.4 per cent), (2) vitamins and minerals (46.1 per cent) and (3) behaviour (45.9 per cent) (n=388) (Fig 1).

The most frequently identified health or performance problems within the discipline of eventing (using the categories described above) were (1) stamina and fitness levels (43.9 per cent), (2) lameness (41.9 per cent) and (3) energy levels and behavioural issues (37.1 per cent). Other problems identified less commonly included respiratory problems, back and muscle problems, hoof condition and balance, gastrointestinal problems, and dehydration and electrolyte balances.
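The "categorise free text, then rank by frequency of occurrence" step described above can be pictured with a small sketch. Here simple keyword matching stands in for the manual coding the authors performed, and both the keyword lists and the example responses are invented; only the category labels mirror the paper.

```python
# Toy categorisation and ranking of free-text responses (illustrative only).
from collections import Counter

KEYWORDS = {
    "energy/behaviour": ("tension", "fizzy", "spooky", "energy"),
    "lameness": ("lame", "joint", "arthritis", "tendon", "ligament"),
    "back and muscle problems": ("back", "muscle"),
}

def categorise(response):
    text = response.lower()
    for category, words in KEYWORDS.items():
        if any(word in text for word in words):
            return category
    return "other"

responses = ["Tension in the test", "Hock arthritis", "Sore back after jumping", "Very fizzy at shows"]
print(Counter(categorise(r) for r in responses).most_common())
```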
The most frequently identified reasons that eventers (n=132) gave for using supplements (choosing from multiple predefined categories) were (1) electrolytes (70.5 per cent), (2) joint and mobility (68.9 per cent) and (3) behaviour (43.9 per cent) (Fig 2). Categorisation of free-text responses to the question, 'Out of all the supplements that you feed, what is the name of the supplement you consider to be the most important?' produced a wide range of responses. However, for both dressage and eventing owners/riders, joint supplements were named most frequently (57 per cent, n=492). This was followed by behavioural supplements for dressage horses (9 per cent, n=376) and electrolytes for eventing horses (8.6 per cent, n=116). The majority of respondents listed the products they used, but some also gave reasons of why they used them, and these reasons were categorised. All of the behavioural supplements that were listed by participants were used for a calming effect on the horse. Behavioural supplements were mentioned by a few eventing horse owners and riders, but at a much lower level compared with the dressage horse owners/riders. The reason that the majority of eventing horse owners/riders gave for the importance of electrolytes was related to the intensity of the cross-country phase. The frequency with which electrolytes were identified was significantly greater for eventing owners/riders compared with dressage owners/riders (P<0.05). The last section of the questionnaire related specifically to the owner's/rider's decisions and approaches to use of supplements in their highest performing competition horse. This included the main reasons for using nutritional supplements in their highest performing horse (with multiple-option responses), the single most important reason for using supplements (single response), and any health and performance problems that affected this horse (multiple-option responses). The main reason (multiple responses) and most important reason (single-set answer option) for using nutritional supplements in their highest performing horse for both dressage (41.8 per cent) and eventing owners/riders (35.6 per cent) (n=496) was joints and mobility. Behaviour was ranked as the second reason for using supplements for dressage owner/riders (15.1 per cent) and electrolytes for eventing owners/riders (10.2 per cent), and vitamins and minerals were ranked as the third reason for feeding supplements for both disciplines (dressage 12.9 per cent, eventing 10.2 per cent). Behavioural issues were considered by owners to be the most important health and performance problem in their highest performing horse for both dressage and eventing, followed by joints and mobility (n=374). The number of owners/riders identifying electrolytes as a problem in their horse was significantly higher for eventing compared with dressage (P<0.05 Fisher's exact test). The majority of dressage and eventing horse owners and riders (n=509 total respondents, dressage n=371, eventing n=122) felt that they could see a marked difference in their horse when they have fed a specific supplement to target a problem. Opinions varied between the five answer options from: 'no difference seen' to 'could not cope without it' (Fig 1). DISCUSSION The aim of this study was to investigate some of the reasons why horse owners and riders chose nutritional supplements and how this varies between competitive disciplines. 
The findings illustrated that a wide range of different supplements were used, and most owners and riders perceived that they were important to their horse's health and performance. Lameness and/or joint problems were identified as important issues in both disciplines, which is consistent with previous literature (Pearson 2009, McIlwraith 2010b, Murray and others 2010a, Murray and others 2010b. This study also highlighted the perceived importance of behavioural problems within both disciplines. There were incongruencies in owners'/riders' opinions of health and performance issues within their competitive disciplines and their horses, and the supplements that they were using, which warrant further investigation. Owners/riders of dressage horses identified behavioural issues and energy levels as the most important issue within their discipline, followed by lameness, then back and muscle problems. They also identified behavioural issues as the main problem in their highest performing horse, followed by 'joints and mobility'. However, their main reason for feeding supplements was for 'joints and mobility' problems. There were different trends for responses from owners/riders of eventing horses. They identified stamina and fitness as the main issue in their discipline, followed by lameness, and behaviour issues and energy levels were the third most commonly identified health and performance issue for eventing. However, the main problem in their highest performing horse was identified as behavioural issues, followed by joints and mobility, which mirrors the response from dressage owners and riders to this question. Once again, their opinion of health and performance problems was not mirrored in the use of supplements, and the main reasons for giving supplements were for electrolytes, and 'joints and mobility'. There are a number of possible reasons for the incongruency between owners' opinions of main problems and their reasons for using supplements. Further study is needed to investigate this, but possible factors may include the limited number of scientific studies on behavioural issues in horses. There is also limited evidence on the efficacy of behavioural supplements (Freire and others 2008, McCall 2009, Noble and others 2008, Talbot and others 2013 or how owners obtain and assess the different source of information on supplements. In contrast, there are considerably more studies on the use of supplements to enhance joint function and mobility (Hanson and others 1997, Clayton and others 2002, Forsyth and others 2006, Gupta and others 2009, Pearson and Lindinger 2009, McIlwraith 2010. This may explain why owners and riders consider these supplements to be important to use in their horse, although the evidence of efficacy is still low (Vandeweerd and others 2012). Another reason for the perceived importance of using nutritional supplements for 'joints and mobility' may be because owners are using supplements as a preventative measure (to try and reduce the risk of developing joint and mobility problems) rather than using them as a solution to a current health or performance problem. Further research, for example, using focus groups or interviews, would help to explore owners'/riders' perceptions and the factors that may affect their decision-making. This study had a number of limitations; the use of an online survey may have introduced some bias as the websites may not be accessed by all of the potential participants. 
Secondary distribution through a range of media, including equine magazines, Facebook and Twitter sites, increased exposure to a wider population. Use of online surveys is becoming increasingly common, but direct mailings of target populations may be more effective in some cases. The study by Murray and others (2010a) investigated risk factors for lameness in dressage horses, using a questionnaire sent to all members registered with British Dressage in 2005 (totalling 11,363), with a response rate of 22.5 per cent. The present study had a lower number of respondents, and response rates could have been improved by directly mailing all members of the disciplines. The study gave a description of nutritional supplements, and had predefined categories of types of supplements for a number of questions. 'Balancers' were not included as a separate category, and the study did not specify whether they should be included as 'supplements'. Previously published studies on nutritional supplements have focused on small numbers of horses in specific populations (Hoffman and others 2009) or those competing at the top of their discipline (Williams 2008, Leahy and others 2010). This study had a larger population of participants, and the population demographics showed a wide distribution of age and experience of both participants and horses. Previous studies have investigated the incidence of different health problems in performance horses (Kaye 2006, Murray and others 2010a, Singer and others 2008), but this is the first study to report owners'/riders' perceptions of health and performance problems in their discipline. The mixed methodology captured some aspects of the data with predefined categories, but also used free text for key questions to enable participants to express their opinions. The open free-text questions in this study produced a wide range of responses, which differed from existing literature, and highlighted the value of qualitative data collection. There is potential for bias within the study, based on participants' opinion of the focus of the questionnaire; for example, asking participants to select a supplement from a predetermined list may limit their choices, or participants may give the answers that they think the researcher is looking for. An alternative methodology is the use of interviews and/or observations and visits to determine what supplements participants are using and why; however, this usually limits studies to a smaller number of participants. Questionnaires can be useful to collect data from a wider population, to identify key themes and areas that can be investigated in more detail using interviews or focus groups. This study highlighted differences between the disciplines of dressage and eventing. It identified the perceived importance of electrolyte supplements in eventing horses and behavioural supplements in dressage horses, which reflects some of the current literature and differences between the two disciplines (Meyer 1986, Schott 2010, Pagan 2010), and suggests that owners/riders have an understanding of the issues within their discipline and choose supplements based on this. Electrolyte balance problems have previously been identified as a problem in eventers (McCutcheon and others 1995, Ecker and others). In this study, dressage riders identified energy and behaviour as the main problem within their discipline, but their main reason for feeding supplements was stated as joints and mobility issues.
As discussed earlier, this may reflect the lack of evidence on nutritional supplements for behaviour, but in addition to this, many musculoskeletal issues may cause changes in behaviour and performance. Previous literature has identified that lameness and back problems are key issues in dressage horses (Wennerstrand andothers 2004, Murray andothers 2010a), and these may affect the horse's ability or willingness to work. Once again, further research using interviews or focus groups would be beneficial to explore people's reasoning and rationales for their decisions. Despite the low levels of evidence for most supplements, this study showed that they were widely used across both disciplines (with only 29 of 542 participants not using supplements), with most owners stating that they felt that the supplements made a marked difference to their horse. Given the widespread use of supplements and perceived value towards the horse's health and performance, it can be argued that the veterinary profession should have an understanding of owners'/riders' opinions and concerns. CONCLUSIONS This study identified differences in horse owners'/ riders' choice of nutritional supplements compared with their opinions of health and performance problems in their competitive discipline. This incongruency may reflect the current levels of evidence for different types of supplements. Future studies using interviews or focus groups would be beneficial to explore some of the factors that influence horse owners'/riders' decisionmaking. The study also identified both the perceived importance of behavioural issues in dressage and eventing, and the frequent use of behavioural supplements in individual horses. This highlights the need for research into the incidence, frequency and causes of behavioural problems in performance horses, and further research into the efficacy of nutritional supplements in the horse.
2016-05-04T20:20:58.661Z
2014-06-12T00:00:00.000
{ "year": 2016, "sha1": "1a553b460e3171241c5c264dc19b62db80290c59", "oa_license": "CCBYNC", "oa_url": "https://doi.org/10.1136/vetreco-2015-000154", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "a24a7bc54f1428b1d8b839746b2d976ef1cc4071", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine", "Engineering" ] }
218686756
pes2o/s2orc
v3-fos-license
Optical control of ERK and AKT signaling promotes axon regeneration and functional recovery of PNS and CNS in Drosophila Neuroregeneration is a dynamic process synergizing the functional outcomes of multiple signaling circuits. Channelrhodopsin-based optogenetics shows the feasibility of stimulating neural repair but does not pin down specific signaling cascades. Here, we utilized optogenetic systems, optoRaf and optoAKT, to delineate the contribution of the ERK and AKT signaling pathways to neuroregeneration in live Drosophila larvae. We showed that optoRaf or optoAKT activation not only enhanced axon regeneration in both regeneration-competent and -incompetent sensory neurons in the peripheral nervous system but also allowed temporal tuning and proper guidance of axon regrowth. Furthermore, optoRaf and optoAKT differ in their signaling kinetics during regeneration, showing a gated versus graded response, respectively. Importantly in the central nervous system, their activation promotes axon regrowth and functional recovery of the thermonociceptive behavior. We conclude that non-neuronal optogenetics targets damaged neurons and signaling subcircuits, providing a novel strategy in the intervention of neural damage with improved precision. Introduction Inadequate neuroregeneration remains a major roadblock toward functional recovery after nervous system damage such as stroke, spinal cord injury (SCI), and multiple sclerosis. Extracellular factors from oligodendrocyte, astroglial, and fibroblastic sources restrict axon regrowth (Liu et al., 2006;Yiu and He, 2006;Liu et al., 2011;Lu et al., 2014;Schwab and Strittmatter, 2014) but eliminating these molecules only allows limited sprouting (Sun and He, 2010), suggesting a down-regulation of the intrinsic regenerative program in injured neurons (Sun and He, 2010;He and Jin, 2016). The neurotrophic signaling pathway, which regulates neurogenesis during embryonic development, represents an important intrinsic regenerative machinery (Ramer et al., 2000). For instance, elimination of the PTEN phosphatase, an endogenous brake for neurotrophic signaling, yields axonal regeneration (Park et al., 2008). An important feature of the neurotrophin signaling pathway is that the functional outcome depends on signaling kinetics (Marshall, 1995) and subcellular localization (Watson et al., 2001). Indeed, neural regeneration from damaged neurons is synergistically regulated by multiple signaling circuits in space and time. However, pharmacological and genetic approaches do not provide sufficient spatial and temporal resolutions in the modulation of signaling outcomes in terminally differentiated neurons in vivo. Thus, the functional link between signaling kinetics and functional recovery of damaged neurons remains unclear. The emerging non-neuronal optogenetic technology uses light to control protein-protein interaction and enables light-mediated signaling modulation in live cells and multicellular organisms (Zhang and Cui, 2015;Khamo et al., 2017;Johnson and Toettcher, 2018;Leopold et al., 2018;Dagliyan and Hahn, 2019;Goglia and Toettcher, 2019). 
By engineering signaling components with photoactivatable proteins, one can use light to control a number of cellular processes, such as gene transcription (Motta-Mena et al., 2014;Wang et al., 2017), phase transition (Shin et al., 2017;Dine et al., 2018), cell motility (Wu et al., 2009) and differentiation (Khamo et al., 2019), ion flow across membranes (Kyung et al., 2015;Ma et al., 2018), and metabolism (Zhao et al., 2018;Zhao et al., 2019), to name a few. We have previously developed optogenetic systems named optoRaf (Zhang et al., 2014;Krishnamurthy et al., 2016) and optoAKT (Ong et al., 2016), which allow for precise control of the Raf/MEK/ERK and AKT signaling pathways, respectively. We demonstrated that timed activation of optoRaf enables functional delineation of ERK activity in mesodermal cell fate determination during Xenopus laevis embryonic development (Krishnamurthy et al., 2016). However, it remains unclear if spatially localized, optogenetic activation of ERK and AKT activity allows for subcellular control of cellular outcomes. In this study, we used optoRaf and optoAKT to specifically activate the Raf/MEK/ERK and AKT signaling subcircuits, respectively. We found that both optoRaf and optoAKT activity enhanced axon regeneration in the regeneration-potent class IV da (C4da) and the regeneration-incompetent class III da (C3da) sensory neurons in Drosophila larvae, although optoRaf but not optoAKT enhanced dendritic branching. Temporally programmed and spatially restricted light stimulation showed that optoRaf and optoAKT differ in their signaling kinetics during regeneration and that both allow spatially guided axon regrowth. Furthermore, using a thermonociception-based behavioral recovery eLife digest Most cells have a built-in regeneration signaling program that allows them to divide and repair. But, in the cells of the central nervous system, which are called neurons, this program is ineffective. This is why accidents and illnesses affecting the brain and spinal cord can cause permanent damage. Reactivating regeneration in neurons could help them repair, but it is not easy. Certain small molecules can switch repair signaling programs back on. Unfortunately, these molecules diffuse easily through tissues, spreading around the body and making it hard to target individual damaged cells. This both hampers research into neuronal repair and makes treatments directed at healing damage to the nervous system more likely to have side-effects. It is unclear whether reactivating regeneration signaling in individual neurons is possible. One way to address this question is to use optogenetics. This technique uses genetic engineering to fuse proteins that are light-sensitive to proteins responsible for relaying signals in the cell. When specific wavelengths of light hit the light-sensitive proteins, the fused signaling proteins switch on, leading to the activation of any proteins they control, for example, those involved in regeneration. Wang et al. used optogenetic tools to determine if light can help repair neurons in fruit fly larvae. First, a strong laser light was used to damage an individual neuron in a fruit fly larva that had been genetically modified so that blue light would activate the regeneration program in its neurons. Then, Wang et al. illuminated the cell with dim blue light, switching on the regeneration program. Not only did this allow the neuron to repair itself, it also allowed the light to guide its regeneration. 
By focusing the blue light on the damaged end of the neuron, it was possible to guide the direction of the cell's growth as it regenerated. Regeneration programs in flies and mammals involve similar signaling proteins, but blue light does not penetrate well into mammalian tissues. This means that further research into LEDs that can be implanted may be necessary before neuronal repair experiments can be performed in mammals. In any case, the ability to focus treatment on individual neurons paves the way for future work into the regeneration of the nervous system, and the combination of light and genetics could reveal more about how repair signals work. assay, we found that optoRaf and optoAKT activation led to effective axon regeneration as well as functional recovery after central nervous system (CNS) injury. We note that most of the previous optogenetic control of neural repair studies were based on channelrhodopsion in C. elegans , mouse DRG culture (Park et al., 2015a) or motor neuron-schwann cell co-culture (Hyung et al., 2019). Another study used blue-light activatable adenylyl cyclase bPAC to stimulate neural repair in mouse refractory axons (Xiao et al., 2015). These work highlighted the feasibility of using optogenetics to study neural repair but did not pin down the exact downstream signaling cascade mediating neuronal repair. Additionally, most studies focused on peripheral neurons that are endogenously regenerative. Here, we specifically activated the ERK and AKT signaling pathways and performed a comprehensive study of neural regeneration in both the peripheral nervous system (PNS) and CNS neurons in live Drosophila. We envision that features provided by non-neuronal optogenetics, including reversibility, functional delineation, and spatiotemporal control, will lead to a better understanding of the link between signaling kinetics and functional outcome of neurotrophic signaling pathways during neuroregeneration. Results Light enables reversible activation of the Raf/MEK/ERK and AKT signaling pathways To reversibly control the Raf/MEK/ERK and AKT signaling pathways, we constructed a single-transcript optogenetic system using the p2A bicistronic construct that co-expresses fusion-proteins with the N-terminus of cryptochrome-interacting basic-helix-loop-helix (CIBN) and the photolyase homology region of cryptochrome 2 (CRY2PHR, abbreviated as CRY2 in this work). Following a similar design of the optimized optoRaf (Krishnamurthy et al., 2016), we improved the previous optogenetic AKT system (Ong et al., 2016) with two tandom CIBNs (referred to as optoAKT in this work) ( Figure 1-figure supplement 1A). Consistent with previous studies, the association of CIBN and CRY2 took about 1 s, and the CIBN-CRY2 complex dissociated in the dark within 10 min (Kennedy et al., 2010;Zhang et al., 2014). The fusion of Raf or AKT does not affect the association and dissociation kinetics of CIBN and CRY2 and multiple cycles of CRY2-CIBN association and dissociation can be triggered by alternating light-dark treatment (Figure 1-figure supplement 1B-1D, Videos 1 and 3). Activation of optoRaf and optoAKT resulted in nuclear translocation of ERK-EGFP ( Figure 1A, Video 2) and nuclear export of FOXO3-EGFP ( Figure 1B, Video 4) resolved by live-cell fluorescence imaging, indicating activation of the ERK and AKT signaling pathways, respectively. 
Western blot analysis on pERK (activated by optoRaf) in HEK293T cells showed that pERK activity (Figure 1C) increased within 10 min of blue light stimulation and returned to the basal level 30 min after the blue light was shut off (Figure 1D). There was a slight decrease in pERK activity upon optoRaf activation for over 10 min, likely due to negative feedback, which has been consistently observed in previous studies. On the other hand, continuous light illumination maintained a sustained activation of pCRY2-mCh-AKT within an onset of 10 min (Figure 1E). The inactivation kinetics of pAKT was 30 min, similar to that of pERK (Figure 1F and G). Note we use only the phosphorylated and total forms of CRY2-mCh-AKT to quantify the light response of optoAKT because the endogenous AKT does not respond to light.

Video 1. Reversible optogenetic stimulation of Raf membrane recruitment with optoRaf resolved by live-cell imaging in BHK21 cells. Cells were cotransfected with CIBN-EGFP-CaaX and CRY2-mCh-Raf1, and recovered overnight before imaging. Blue and green light (exposure time 200 ms) were applied every 2 s until the fluorescence intensity of mCherry on the plasma membrane does not change. Cells were left on the microscope in the dark for 30 min to allow for membrane dissociation of CRY2-mCh-Raf. In the next cycle, the same light pattern was repeated and membrane recruitment of CRY2-mCh-Raf was recorded. https://elifesciences.org/articles/57395#video1

Figure 1. OptoRaf and optoAKT specifically activate the ERK and AKT subcircuits, respectively. (A) Activation of optoRaf benchmarked with ERK2-EGFP nuclear translocation. (B) Activation of optoAKT benchmarked with FOXO3-EGFP nuclear export. Scale bars = 10 mm. (C) Western blot analysis of the pERK and ERK activities in response to time-stamped activation of optoRaf. Blue light (0.5 mW/cm2) was applied for 5, 10, 20, and 60 min to HEK293T cells transfected with optoRaf. Non-transfected cells or optoRaf-transfected cells (dark) were used as negative controls. (D) Inactivation of the pERK activity after blue light was shut off. (E) Western blot analysis of the pAKT (S473) and AKT activities in response to time-stamped activation of optoAKT. Cells were treated with an identical illumination scheme to (C). (F) Inactivation of the pAKT activity after blue light was shut off. (G) Plots of normalized pERK and pAKT activity upon optoRaf and optoAKT activation, respectively (maximum activation was defined as 1). Both optoRaf and optoAKT show rapid (less than 5 min) and reversible activation patterns (N = 3). (H) OptoRaf and optoAKT do not show cross activity at the level of ERK and AKT. Cells were exposed to blue light (0.5 mW/cm2) for 10 min before lysis. (I) Quantification of the phosphorylated protein level; phosphorylation level was normalized to the non-transfected group (N = 3). (J, K) PC12 cells transfected with either optoRaf (J) or optoAKT (K) were treated by blue light for 24 hr (0.2 mW/cm2). Scale bars = 50 mm. (L) Quantification of the neuritogenesis ratio of PC12 cells transfected with optoRaf or optoAKT. A membrane-targeted Raf (Raf1-EGFP-CaaX) causes constitutive neuritogenesis independent of light treatment, whereas the no-Raf (CIBN2-EGFP-CaaX) control does not increase the neuritogenesis ratio under light or dark treatment. See also Figure 1-figure supplement 1. The online version of this article includes the following source data and figure supplement(s) for figure 1: Source data 1. OptoRaf and optoAKT specifically activate the ERK and AKT subcircuits, respectively.

Video 2. Reversible optogenetic stimulation of AKT membrane recruitment with optoAKT resolved by live-cell imaging in BHK21 cells. Cells were cotransfected with CIBN-EGFP-CaaX and CRY2-mCh-AKT, and recovered overnight before imaging. Blue and green light (exposure time 200 ms) were applied every 10 s until the fluorescence intensity of mCherry on the plasma membrane does not change. Cells were left on the microscope in the dark for 30 min to allow for membrane dissociation of CRY2-mCh-AKT. In the next cycle, the same light pattern was repeated and membrane recruitment of CRY2-mCh-AKT was recorded. https://elifesciences.org/articles/57395#video2

optoRaf and optoAKT do not show crosstalk activity at the pERK and pAKT level
Binding of neurotrophins to their receptor activates multiple downstream signaling subcircuits, including the Raf/MEK/ERK and AKT pathways. Delineation of signaling outcomes of individual subcircuits remains difficult with pharmacological assays given the unpredictable off-targets of small-molecule drugs. We hypothesized that optoRaf and optoAKT could delineate signaling outcomes because they bypass ligand binding and activate the intracellular signaling pathway. To test this hypothesis, we probed phosphorylated proteins, including pERK and pAKT, with WB analysis in response to light-mediated activation of optoRaf and optoAKT. Results show that optoRaf activation does not increase endogenous pAKT (Figure 1H and I). Similarly, optoAKT activation does not increase pERK or endogenous pAKT (Figure 1H and I). Thus, at the level of ERK and AKT, optoRaf and optoAKT do not show crosstalk activity in mammalian cells.

Activation of optoRaf and optoAKT requires upstream signaling molecules
Although activation of both optoRaf and optoAKT bypasses ligand-receptor binding, it remains unclear if other upstream signaling molecules are required to activate optoRaf and optoAKT. Endogenous Raf1 activation requires its membrane translocation mediated by the GTP-bound form of Ras, followed by phosphorylation at several residues, including Ser338, which is located in the junction region between the regulator domain and the kinase domain (Mason et al., 1999). Replacement of Ser338 with alanine abolishes Raf activation (Xiang et al., 2002; Goetz et al., 2003). Note that phosphorylation of Ser338 itself does not activate Raf but is a prerequisite regulatory event for Raf activation (Diaz et al., 1997), likely leading to Raf dimerization (Takahashi et al., 2017). To determine if Ser338 phosphorylation is involved in optoRaf activation, we probed the phosphorylation state of CRY2-mCh-Raf upon blue light stimulation and found that indeed Ser338 is significantly phosphorylated upon blue light stimulation.

Similarly, for optoAKT, it remains unclear if its activation requires upstream PI3K signaling. Full activation of AKT requires phosphorylation on both T308 in the activation loop of the catalytic protein kinase core and S473 in a C-terminal hydrophobic motif (Manning and Toker, 2017). PH-domain-containing kinases such as PDK1 are essential for AKT activation by phosphorylating AKT on T308, whereas the mechanistic target of rapamycin (mTOR) complex 2 (mTORC2) phosphorylates AKT on S473.
In addition to verifying that phosphorylation of S473 occurs during opto-AKT activation ( Figure 1E and Figure 1-figure supplement 1F), we probed pT308 for optoAKT upon blue light stimulation (Figure 1-figure supplement 1F). We found that light stimulation indeed enhances the level of pT308 in optoAKT, indicating that upstream kinases (e.g. PDK1) are involved in the activation of optoAKT upon membrane translocation of CRY2-mCh-AKT. Activation of optoRaf but not optoAKT enhances PC12 cell neuritogenesis We verified that the activation of optoRaf enhances PC12 cell neuritogenesis, which is consistent with previous studies (Zhang et al., 2014;Krishnamurthy et al., 2016). The neuritogenesis ratio is defined as the ratio between the number of transfected cells with at least one neurite longer than the size of the cell body and the total number of transfected cells. Twenty-four hours of blue light stimulation (0.2 mW/cm 2 ) increased the neuritogenesis ratio from the basal level (0.24 ± 0.04) to 0.52 ± 0.03 ( Figure 1J and L). Light-mediated activation of optoAKT, on the other hand, did not increase the neuritogenesis ratio (0.23 ± 0.04 in the dark versus 0.20 ± 0.02 under light) ( Figure 1K and L). A membrane-targeted Raf1 (Raf1-EGFP-CaaX) was used as a positive control, which caused significant neurite outgrowth independent of light treatment (0.65 ± 0.01 in the dark versus 0.63 ± 0.01 under light). Expression of CIBN2-EGFP-CaaX (without CRY2-Raf1), a negative control, did not increase PC12 neurite outgrowth either in the dark (0.20 ± 0.02) or under light (0.14 ± 0.01) ( Figure 1L). Video 3. Optogenetic activation of optoRaf causes nuclear translocation of ERK2-EGFP in BHK21 cells. Cells were transfected with optoRaf (CIBN-CaaX and CRY2-mCh-Raf1) and ERK2-EGFP, and recovered overnight before imaging. Blue and green light (exposure time 200 ms) were applied every 10 s. Nuclear translocation of ERK2-EGFP was recorded. https://elifesciences.org/articles/57395#video3 Activation of optoRaf but not optoAKT increases sensory neuron dendrite branching in fly larvae To determine the efficacy of the optogenetic tools in vivo, we generated transgenic flies with inducible expression of optoRaf (UAS-optoRaf) and optoAKT (UAS-optoAKT). We induced the expression of the transgenes in a type of fly sensory neurons, the dendritic arborization (da) neurons, which have been used extensively to study dendrite morphogenesis and remolding (Gao et al., 1999;Grueber et al., 2002;Sugimura et al., 2003;Kuo et al., 2005;Williams and Truman, 2005;Kuo et al., 2006;Williams et al., 2006;Parrish et al., 2007). Using the pickpocket (ppk)-Gal4, we specifically expressed optoRaf in the class IV da (C4da) neurons, to test whether light stimulation would activate the Raf/MEK/ERK pathway. At 72 hr after egg laying (h AEL), wild-type (WT) and optoRafexpressing larvae were anesthetized with ether and subjected to whole-field continuous blue light for 5, 10 and 15 min, while as a control, another transgenic group was incubated in the dark (0 min). The larval body walls were then dissected and immunostained with the pERK1/2 antibody, as a readout of the Raf/MEK/ERK pathway activation. We found that 5 min light stimulation was sufficient to significantly increase the pERK signal in the cell body of C4da neurons in optoRaf-expressing larvae, while 15 min illumination enhanced pERK activation and induced ERK translocation into the nucleus ( (Lizcano et al., 2003;Miron et al., 2003). 
These results collectively demonstrate that optoRaf and optoAKT were robustly expressed in flies and blue light is sufficient to activate the optogenetic effectors in vivo. The phosphorylation of ERK/p70 S6K in response to blue light was only observed in C4da neurons but not in other classes of da neurons or epithelial cells (Figure 2-figure supplement 2), proving they are triggered by optoRaf/optoAKT, which were only expressed in C4da neurons under the control of ppk-Gal4. Furthermore, we found ERK was not activated in optoAKT-expressing neurons ( Figure 2A, right-most panel), nor was phospho-p70 S6K in the optoRaf-expressing larvae (Figure 2figure supplement 3, right-most panel), confirming that there is no crosstalk between these two systems, at least at the node of pERK and p70 S6K . We also examined the inactivation kinetics of ERK/ phospho-p70 S6K after blue light was shut off (Figure 2A-D). The pERK ( Figure 2C) and pAKT ( Figure 2D) activity started to decrease as the light was shut off, although the decay rate of pERK decays appears slower than pAKT. Compared with the transgenic larvae kept in the dark, there was no significant difference in phospho-p70 S6K intensity at 15 min after blue light was turned off ( Figure 2D). In contrast, a 15-min off time reduces pERK activity, but the level remains higher than the basal level. When the off-time was increased to 45 min, there is still a slightly higher pERK Video 4. Optogenetic activation of optoAKT causes retreatment of FOXO3-EGFP from the nucleus into the cytoplasm in BHK21 cells. Cells were transfected with optoAKT (CIBN-CaaX and CRY2-mCh-AKT) and FOXO3-EGFP, and recovered overnight before imaging. Blue and green light (exposure time 200 ms) were applied every 1 min. Nuclear export of FOXO3-EGFP was recorded. https://elifesciences.org/articles/57395#video4 The body walls from WT and optoRaf expressing larvae were dissected and stained for pERK1/2. The 15 min continuous light illumination leads to the enhanced fluorescent intensity and nuclear translocation of pERK in the optoRaf-expressing C4da neurons (labeled by ppk-CD4tdGFP). pERK signal is significantly increased even at 45 min after Figure 2 continued on next page activity than the dark control. The difference in the inactivation kinetics may reflect distinct signaling sensitivity between Raf and AKT in optoRaf and optoAKT, respectively. These results confirmed that the intermittent pattern of light stimulation could modulate the temporal profile of ERK and AKT signaling activities. We next investigated if optoRaf or optoAKT activation would affect neural development such as dendrite morphogenesis. We labeled C4da neurons with ppk-CD4tdGFP and reconstructed the dendrites of the lateral C4da neurons -v'ada. Without light stimulation, the dendrite complexity of neurons in transgenic larvae was comparable to that of WT ( Figure 2F and G). However, optoRaf activation resulted in a significant increase in both total dendrite length and branch number, whereas optoAKT activation exhibited a slight reduction in dendritic branching ( Figure 2E-G). These results confirm the possibility of independently activating the Raf/MEK/ERK and AKT signaling pathways in flies with our optogenetic tools, prompting us to test the feasibility of their in vivo applications, such as promoting axon regeneration with high spatial and temporal resolution. 
Activation of optoRaf or optoAKT results in enhanced axon regeneration in the PNS

Administration of neurotrophins to damaged peripheral neurons results in functional regeneration of sensory axons into the adult spinal cord in rats (Ramer et al., 2000). Here, our photoactivatable transgenic flies empower precise spatiotemporal control of the neurotrophic signaling in live animals. To test whether light-mediated activation of the Raf/MEK/ERK or AKT signaling subcircuits would also promote axon regrowth, we used a previously described Drosophila da sensory neuron injury model (Song et al., 2012; Song et al., 2015). Da neurons have been shown to possess distinct regeneration capabilities among different sub-cell types, and between the PNS and CNS, resembling mammalian neurons (Song et al., 2012; Song et al., 2015). In particular, the C4da neurons regenerate their axons robustly after peripheral injury, while the C3da neurons largely fail to regrow. Moreover, the axon regeneration potential of C4da neurons is also diminished after CNS injury. First, we asked whether optoRaf or optoAKT activation can enhance axon regeneration in the regeneration-competent C4da neurons in the PNS. We severed the axons of C4da neurons (labeled with ppk-CD4tdGFP) with a two-photon laser at 72 hr AEL, verified axon degeneration at 24 hr after injury (AI) and assessed axon regeneration at 48 hr AI. At this time point, about 79% of C4da neurons in WT showed obvious axon regrowth, and the regeneration index (Song et al., 2012; Song et al., 2015), which refers to the increase in axon length normalized to larval growth (Figure 3-figure supplement 1A and B, and Materials and methods), was 0.3810 ± 0.06653 (Figure 3A-C). Strikingly, C4da neurons expressing optoRaf or optoAKT showed further enhanced regeneration potential in response to blue light, leading to a significant increase in the regeneration index (optoRaf: 0.7102 ± 0.1033; optoAKT: ...). In order to test the potential synergy between optoRaf and optoAKT, we co-expressed both transgenes in C4da neurons. While there was a slight increase in the regeneration percentage, activation of both ERK and AKT pathways in the same neuron did not further increase the regeneration index (0.7387 ± 0.08390) (Figure 3A-C). The light stimulation paradigm used in the aforementioned regeneration experiments was constant blue light applied immediately after the injury. We reason that intermittent light stimulation may provide insights into the signaling kinetics in vivo and fine-tune axon regeneration dynamics. Therefore, instead of constant blue light illumination, we delivered two sets of programmed light patterns to injured larvae, 15 min on-15 min off or 15 min on-45 min off per cycle for 48 hr (Figure 3D). We found that, for optoRaf-expressing C4da neurons, when the off-time was 15 min, the intermittent light stimulation was sufficient to accelerate axon regrowth, with the regeneration index (0.6352 ± 0.09627) significantly increased compared with larvae incubated in the dark (Figure 3E and F). However, when the off-time was 45 min, the intermittent light failed to promote axon regeneration (Figure 3E and F). Considering that pERK activity remains slightly higher than the basal level after 45 min of dark incubation (Figure 2C), the regeneration failure with the 45 min off-time suggests that optoRaf regulates C4da axon regeneration in a threshold-gated manner.
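To make the regeneration index used above concrete, a minimal sketch follows. The stated definition ("an increase of axon length / DCAC") is interpreted here as the change in the axon-length-to-DCAC ratio between 24 hr and 48 hr AI; this reading, and the function and variable names, are assumptions for illustration rather than the authors' analysis code.

```python
# Minimal sketch of the PNS regeneration index (assumed interpretation).
# len_*: measured axon length at the given time point;
# dcac_*: distance between the cell body and the axon converging point.
def regeneration_index(len_24h, dcac_24h, len_48h, dcac_48h):
    return len_48h / dcac_48h - len_24h / dcac_24h
```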
On the other hand, C4da neurons expressing optoAKT displayed a graded response: a moderate increase of the regeneration index (0.6278 ± 0.09801) in response to the 15 min on-15 min off light and a smaller uptick (0.5312 ± 0.06963) to the 15 min on-45 min off light; both were less effective than the constant light stimulation (Figure 3E and F). These results suggest that although the higher frequency of light stimulation generally resulted in stronger regeneration potential in the transgenic flies, constant light was not always required for maximum axon regeneration. Moreover, optoRaf and optoAKT differ in their signaling kinetics during regeneration, showing a gated versus graded response, respectively. We next determined whether optoRaf or optoAKT activation would trigger regeneration in C3da neurons, which are normally incapable of regrowth (Song et al., 2012). C3da neurons were labeled with 19-12-Gal4, UAS-CD4tdGFP, repo-Gal80 and injured using the same paradigm as C4da neurons. Compared with WT, which exhibited poor axon regeneration ability, as demonstrated by the low regeneration percentage and the negative regeneration index (−0.03201 ± 0.02752) (Figure 3G-I), light stimulation significantly increased the regeneration index in optoRaf- or optoAKT-expressing larvae to 0.1298 ± 0.04637 or 0.1354 ± 0.06161, respectively (Figure 3G-I). Similar to C4da neurons, activation of both the Raf/MEK/ERK and AKT pathways failed to further enhance axon regrowth compared to optoRaf or optoAKT activation alone (Figure 3-figure supplement 2). This result confirms that the actions of optoRaf and optoAKT are not additive in promoting axon regeneration, suggesting that these two subcircuits may share the same downstream components in neuroregeneration (see Discussion). Altogether, these data indicate that optoRaf and optoAKT activation not only accelerates axon regeneration but also converts regeneration cell-type specificity.

Spatial activation of optoRaf or optoAKT improves pathfinding of regenerating axons

While C4da neurons are known to possess regenerative potential, it is unclear whether the regenerating axons navigate correctly. To address this question, we focused on v'ada, the lateral C4da neurons. Uninjured v'ada axons grow ventrally, showing a typical turn and then joining the axon bundle with the ventral C4da neurons (Figure 3-figure supplement 1A). We found that their regenerating axons preferentially regrew away from the original ventral trajectory (Figure 4A and B, white bars). More than 60% of v'ada axons bifurcated and formed two branches targeting opposite directions. In the majority of cases in WT, the ventral branch, which extends toward the correct trajectory, regenerated less frequently than the dorsal branch, with 15% of v'ada containing only the ventral branch (Figure 4A and B, black bars). One possibility is that the ventral branch encounters the injury site, which may retard its elongation. As a result, only a minority of regenerating axons are capable of finding the correct path. The poor pathfinding of regenerating axons was similar among WT and the transgenic larvae, regardless of whether they were incubated under whole-field light or in the dark (Figure 4B). Thus, proper guidance of the regenerating axons toward the correct trajectory remained to be resolved. We thus investigated whether spatially restricted activation of the neurotrophic signaling using our optogenetic system could guide the regenerating axons.
To specifically enhance the regrowth of the ventral branch, we used a confocal microscope to focus the blue light (delivered by the 488 nm argon-ion laser) on the ventral branch for 5 min at 24 hr AI. The lengths of both the ventral and dorsal branches were measured at 24 hr AI and 48 hr AI. We subtracted the increased dorsal branch length (Δdorsal) from the increased ventral branch length (Δventral), then divided that by the total increased length of these two branches (Figure 4D). This value was defined as the relative regeneration ratio. If the dorsal branch exhibits more regenerative potential, the ratio would be negative; otherwise, it would be positive. Without light stimulation, the relative regeneration ratio of the transgenic larvae (optoRaf: −0.6062 ± 0.1453; optoAKT: −0.5530 ± 0.1011) was comparable to that of WT (−0.5786 ± 0.08229) (Figure 4C and D), confirming preferred regrowth of the dorsal branch. Strikingly, the 5 min local blue light stimulation significantly increased the ratio in optoRaf- or optoAKT-expressing v'ada (optoRaf: 0.04762 ± 0.1123; optoAKT: −0.1725 ± 0.09560), while this transient stimulation resulted in no difference in WT (−0.6018 ± 0.1290) (Figure 4C and D). This result indicates that a single pulse of local light stimulation was sufficient to lead to preferential regrowth of the ventral branch. Notably, although whole-field light illumination could significantly promote axon regrowth, it failed to increase the relative regeneration ratio in transgenic larvae (Con. on optoRaf: −0.7048 ± 0.1015; Con. on optoAKT: −0.5517 ± 0.09644) (Figure 4D), revealing the difference between activating the neurotrophic signaling in a whole neuron and in a single lesioned axon branch. On the other hand, while a 5-min local light stimulation did not lead to an overall enhancement of axon regrowth, it provided adequate guidance instructions for the regenerating axons to make the correct choice.

Activation of optoRaf or optoAKT promotes axon regeneration and functional recovery in the CNS

Achieving functional axon regeneration after CNS injury remains a major challenge in neural repair research. Motivated by the capacity of optoRaf and optoAKT to accelerate axon regeneration in the PNS, we went on to determine whether they also show efficacy after CNS injury. We focused on the axons of C4da neurons, which project into the ventral nerve cord (VNC) and form a ladder-like structure. Each pair of axon bundles corresponds to one body segment in an anterior-posterior pattern. We injured the abdominal A6 and A3 bundles by laser as previously described (Song et al., 2012; Li et al., 2020; Figure 5-figure supplement 1), and confirmed axon degeneration at 24 hr AI (Figure 5A). At 48 hr AI, we found that axons began to extend from the retracted axon stem and towards the commissure region. We defined a commissure segment as regenerated only when at least one axon extended beyond the midline of the commissure region or joined into other intact bundles (Figure 5-figure supplement 1). In WT, only 16% of lesioned commissure segments displayed obvious signs of regrowth (Figure 5A and B). To quantify the extent of regrowth, we measured the length of the regrown axons and normalized that to the length of a commissure segment, the regeneration index (Figure 5-figure supplement 1, Materials and methods).
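A minimal sketch of the relative regeneration ratio defined earlier in this section follows, implementing the stated formula (the difference between the ventral and dorsal branch length increases divided by their sum). The function and variable names are illustrative assumptions, not the original analysis code.

```python
# Relative regeneration ratio: negative values indicate dorsal-dominated
# regrowth, positive values ventral-dominated regrowth. Lengths are the
# measured branch lengths at 24 hr and 48 hr after injury.
def relative_regeneration_ratio(ventral_24h, ventral_48h, dorsal_24h, dorsal_48h):
    d_ventral = ventral_48h - ventral_24h
    d_dorsal = dorsal_48h - dorsal_24h
    return (d_ventral - d_dorsal) / (d_ventral + d_dorsal)
```

For instance, under these assumptions a v'ada whose dorsal branch lengthens by 30 µm while its ventral branch lengthens by 10 µm between 24 and 48 hr AI yields a ratio of −0.5, consistent with the dorsal-dominated regrowth reported for unstimulated larvae.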
After light stimulation, the regeneration indexes of the two transgenic lines (optoRaf: 5.375 ± 0.3391; optoAKT: 4.765 ± 0.4236) were significantly increased compared with the WT control (2.643 ± 0.3050), and the percentage of regenerating commissure segments also exhibited a mild increase in both optoRaf- and optoAKT-expressing larvae (Figure 5A-C). On the other hand, there was no significant difference between WT and the unstimulated transgenic flies (Figure 5A-C). This result suggests that both signaling subcircuits reinforce C4da neuron axon regeneration in the CNS. We then tested whether the axon regrowth in the CNS induced by optoRaf or optoAKT activation leads to behavioral improvement. We utilized a recently established behavioral recovery paradigm based on larval thermonociception (Figure 6A, Materials and methods). In brief, we injured the A7 and A8 C4da neuron axon bundles in the VNC, which correspond to the A7 and A8 body segments in the periphery. We then assessed the nociceptive behavior in these larvae in response to a 47˚C heat probe applied at the A7 or A8 segments at 24 and 48 hr AI. Since C4da neurons are essential for thermonociception, injuring the A7 and A8 axon bundles in the VNC would lead to an impaired nociceptive response to the heat probe specifically at body segments A7 and A8. Indeed, all the injured larvae exhibited a diminished response at 24 hr AI, whereas the total score approached three in uninjured WT larvae (Figure 6B). At 48 hr AI, substantial recovery was observed in the two transgenic groups with light stimulation, whereas WT showed a very limited response and a low recovery percentage (Figure 6B and C). Both the response score and the percentage of larvae exhibiting behavioral recovery in these two groups were more than twice those of the WT, while the unstimulated groups were comparable to WT. Altogether, these results demonstrate that our optogenetic system empowers ligand-free and non-invasive control of the Raf/MEK/ERK and AKT pathways in flies, which not only promotes axon regeneration after injury but also benefits functional recovery, suggesting that the regenerated axons may rewire and form functional synapses.

Discussion

Neurotrophins are known to activate Trk receptors and trigger the Raf/MEK/ERK, AKT, and PLCγ pathways, which are involved in cell survival, neural differentiation, axon and dendrite growth and sensation (Bibel and Barde, 2000; Huang and Reichardt, 2001; Chao, 2003; Cheng et al., 2011; Joo et al., 2014). Here, we used optogenetic systems to achieve specific and reversible activation of the neurotrophin subcircuits, including the Raf/MEK/ERK (via optoRaf) and AKT (via optoAKT) signaling pathways. We further verified that optoRaf and optoAKT did not show crosstalk at the level of phosphorylated ERK and AKT proteins, and activation of optoRaf but not optoAKT promoted PC12 cell differentiation.

Figure 6 legend (reassembled from fragments): The A7 and A8 C4da neuron axon bundles (corresponding to the A7 and A8 body segments) in the VNC were injured by laser and the larva was then subjected to three consecutive trials at 24 and 48 hr AI, respectively. In each trial, a 47˚C heat probe was applied at the A7 or A8 segments. A fully recovered larva would produce a stereotypical rolling behavior in response to the heat probe and would be scored as '1', otherwise as '0'. If the total score of the three trials was below 1 at 24 hr AI but increased to 2 or 3 at 48 hr AI, the larva was defined as recovered. (B, C) The behavioral recovery test was performed at 24 hr and 48 hr after VNC injury (A7 and A8 bundles). Larvae expressing optoRaf or optoAKT exhibit significantly accelerated recovery in response to light stimulation. (B) Quantification of the total scores at each time point. WT (uninjured) N = 15, WT (light) N = 22, optoRaf (light) N = 32, optoRaf (dark) N = 23, optoAKT (light) N = 26, optoAKT (dark) N = 28. Data are mean ± SEM, analyzed by two-way ANOVA followed by Tukey's multiple comparisons test. (C) Quantification of the recovery percentage. The data were analyzed by Fisher's exact test, p=0.0230, p=0.7222, p=0.0167, p=1.000. *p<0.05, **p<0.01. The online version of this article includes the following source data for figure 6: Source data 1. Activation of optoRaf or optoAKT promotes functional regeneration after CNS injury.
Note that in the canonical growth factor signaling pathways, crosstalk actually occurs between the ERK and AKT signaling pathways, particularly at upstream signaling nodes such as Ras. Indeed, the binding of growth factors to their receptors activates the transmembrane receptor tyrosine kinase, which recruits adaptor proteins such as Grb2 (growth factor receptor-bound protein) and Sos (son of sevenless), a guanine nucleotide exchange factor (GEF) for Ras. Sos then transforms the inactive, GDP-bound Ras to an active, GTP-bound Ras, which then recruits multiple proteins, including Raf and PI3K, an upstream kinase for AKT, to the plasma membrane. Thus, Ras serves as a common signaling node and therefore creates possible signaling crosstalk between the PI3K-AKT and Raf-MEK-ERK pathways. Another possible signaling crosstalk arises from the PI3K-mediated production of phospholipids, which could recruit a number of signaling molecules containing a lipid-binding domain (e.g., the PH domain), including Sos, which then affects Ras/Raf activation. Our observation that optoRaf and optoAKT do not crosstalk (i.e. optoRaf does not activate the AKT downstream effector p70S6K; optoAKT does not activate pERK) may arise from the fact that both optoRaf and optoAKT bypass the ligand-binding, receptor activation, Ras activation, and phospholipid production signaling steps. Activation of optoRaf and optoAKT does require upstream signaling molecules (e.g. kinases). However, there could be common downstream signaling molecules (such as transcription factors) that mediate the effects of neural regeneration by optoRaf and optoAKT. While ongoing efforts aim to elucidate these common signaling effectors, evidence from previous literature (some from other cell types) implies several possible candidates such as CREB (cAMP response element-binding protein) and FOXO (forkhead box transcription factors). Activation of Raf leads to phosphorylation of CREB, a family of transcription factors that regulate cell survival (Ginty et al., 1994). Evidence also suggests that CREB is a regulatory target for AKT (Du and Montminy, 1998). Besides the positive regulation of CREB by ERK and AKT signaling, their activity could also negatively regulate the function of FOXO transcription factors. FOXO is a family of transcription factors that can be directly phosphorylated by AKT (Brunet et al., 1999). Phosphorylated FOXO transcription factors translocate out of the nucleus, and their transcriptional program is attenuated. Interestingly, phosphorylated ERK can downregulate FOXO activity by directly interacting with and phosphorylating FOXO3a at Ser294, Ser344, and Ser425, which leads to FOXO3a degradation via an MDM2-mediated ubiquitin-proteasome pathway (Yang et al., 2008). Additional evidence supporting this idea is that inhibition of either the PI3K/AKT or the MEK/ERK pathway enhances the activation of FOXO transcription factors in pancreatic cancer cells (Roy et al., 2010). After spinal cord injury, the synthesis of neurotrophins is elevated to support axon regrowth (Cho et al., 1998; Hayashi et al., 2000; Fukuoka et al., 2001; Fang et al., 2017). AKT signaling, which functions downstream of Trk receptors, was reported to accelerate axon regeneration in mammals (Miao et al., 2016). While NGF family members of neurotrophic factors have only been identified in vertebrates, the AKT pathway has also been shown to promote axon regrowth in flies (Song et al., 2012).
However, the role of Raf/MEK/ERK signaling during nerve repair is controversial. Although some studies revealed that ERK is involved in axon extension, others suggested that ERK activation impedes axon regeneration and functional recovery (Markus et al., 2002; Huang et al., 2017; Cervellini et al., 2018). To specifically evaluate the efficacy of Raf/MEK/ERK and AKT signaling in promoting axon regeneration, we generated fly strains with tissue-specific expression of optoRaf or optoAKT and found that light stimulation was sufficient to activate the corresponding downstream components in fly larvae in vivo. Consistent with previous studies (He and Jin, 2016), we found that AKT activation resulted in significantly increased axon regeneration in C4da neurons as well as in the regeneration-incompetent C3da neurons. Interestingly, we found that C4da and C3da neurons expressing optoRaf also exhibited greater regeneration potential in response to light stimulation. This result also corroborates a previous finding that activated B-RAF signaling enables axon regeneration in the mammalian CNS (O'Donovan et al., 2014). We speculate that the differential outcomes of ERK activation on axon regeneration may be due to the different injury models used, and to the strength and cell-type origin of ERK signaling. The regenerative capacity varies significantly among different neuronal subtypes, as well as between the PNS and CNS. Although the administration of neurotrophins enhances axon regeneration in peripheral neurons, its capacity to promote functional regeneration in the CNS is limited, in part due to the inability of neurotrophins to reach injured axons (a physical barrier) (Silver and Miller, 2004; Yiu and He, 2006) and the innate inactivation of the regenerating program in the CNS (Lu et al., 2014). OptoRaf and optoAKT could be used to address both issues by direct delivery of light (rather than ligand) to reactivate the regenerating program and thereby significantly increase neural regeneration in the CNS as well. We further showed that activation of the Raf/MEK/ERK or AKT subcircuit was capable of improving behavioral performance in fly larvae, suggesting that it may promote synapse regeneration leading to functional recovery. Ineffective functional recovery at least partially results from the inappropriate pathfinding of regenerating neurons. As shown in this study, the majority of regenerating C4da neuron axons preferentially grew away from their original trajectory.
We surprisingly found that delivering a 5 min light stimulation to the ventral branch, which extended toward the correct direction, was sufficient to convey guidance instructions and increase the preferential elongation of the ventral branch over the dorsal branch. Correct guidance cannot be achieved by whole-body administration of pharmacological reagents. Similarly, when casting blue light on whole transgenic larvae, light stimulation must be given at a high frequency to promote axon regrowth (there is a threshold for the light off-time), and the dorsal branch extension was also dominant in this case. This result highlights the importance and necessity of restricted activation of neurotrophic signaling. Indeed, the strength and location of Raf/MEK/ERK and AKT activation during axon regeneration may be important to the functional consequences. Notably, although the transient restricted stimulation likely affects the decision-making of the growth cone at the branching point, constant light is still required to increase overall axon regeneration. Neurotrophins are engaged in a variety of important cellular processes, and their physiological concentration is essential for the normal function of both neurons and non-neuronal cells (Rose et al., 2003; Xiao et al., 2010; Pöyhönen et al., 2019). Despite exhibiting substantial efficacy for enhancing nerve regeneration, neurotrophin-based therapeutic applications have been confronted with a number of obstacles, such as their nociceptive side effects and a lack of strategies for localized signaling activation (Aloe et al., 2012; Mitre et al., 2017; Mahar and Cavalli, 2018; Sung et al., 2019). OptoRaf and optoAKT aim to improve neurotrophin signaling outcomes by preferentially activating the neuroregenerative program and enabling spatiotemporal control. Our systems offer insights into the ERK and AKT subcircuits and delineate their differential roles downstream of neurotrophin activation, as evidenced by the distinct functional outcomes of Raf/MEK/ERK and AKT signaling in several aspects. First, ERK signaling promoted PC12 cell neuritogenesis, which was not induced by AKT activation. Second, elevated ERK activity significantly increased dendritic complexity, whereas AKT activation led to decreased dendrite branching. Third, optoRaf and optoAKT displayed different sensitivities in response to light illumination when expressed in Drosophila C4da neurons. Correspondingly, neurons expressing optoRaf and optoAKT responded differently to intermittent light stimulation after injury, suggesting that the strength and activation duration of optoRaf and optoAKT are differentially gauged during axon regeneration. Collectively, these observations suggest that, because Raf can be activated by membrane translocation as well as by dimerization, CRY2 oligomerization may generate a more potent Raf. This multimodal activation mechanism may allow optoRaf to reach a threshold at which ERK activation becomes saturated. On the other hand, AKT activation does not depend on dimerization and may display a graded response. As a result, optoAKT activates the AKT pathway in a dose-dependent manner and may not recapitulate the maximum activation of AKT. This work provides a proof-of-concept for using optogenetics to accelerate and navigate axon regeneration in mammalian injury models.
Besides spatiotemporal control of the neurotrophic signaling, optoRaf and optoAKT allow for fine-tuning of the signaling activity with programmed light patterns during axon regeneration. Follow-up studies are warranted to determine how the Raf/MEK/ERK and AKT subcircuits are involved in each process of nerve repair, including lesioned axon degeneration, regenerating axon initiation and extension, and the formation of new synapses and remyelination in mammals. Understanding the machinery will, in turn, allow better utilization and development of the optogenetic systems. Recently, Harris et al. succeeded in directing axon outgrowth with optogenetic tools in zebrafish embryos (Harris et al., 2020). Although optogenetics in intact larger mammals is limited by the poor penetration depth of blue light (less than 1 mm), we are excited to witness the rapid progress in implantable, wireless µLED devices (Park et al., 2015b) and the integration of optogenetics with long-wavelength-responsive nanomaterials such as upconversion nanoparticles (Wu et al., 2016; Chen et al., 2018), both of which would facilitate precise delivery of light stimulation.

Materials and methods

Sensory axon lesion in Drosophila

Da neuron axon lesion and imaging in the PNS were performed in live fly larvae as previously described (Song et al., 2012; Stone et al., 2014; Song et al., 2015). VNC injury was performed as previously described (Song et al., 2012; Li et al., 2020). In brief, the A3 and A6 axon bundles in the VNC were ablated with a focused 930 nm two-photon laser and full degeneration around the commissure junction was confirmed at 24 hr AI. At 48 hr AI, axon regeneration of these two commissure segments was assayed independently of each other (Figure 5-figure supplement 1).

Quantitative analyses of sensory axon regeneration in flies

Quantification was performed as previously described (Song et al., 2012; Song et al., 2015). Briefly, for axon regeneration in the PNS, we used the 'regeneration percentage', which depicts the percent of regenerating axons among all the axons that were lesioned, and the 'regeneration index', which was calculated as an increase of 'axon length'/'distance between the cell body and the axon converging point (DCAC)' (Figure 3-figure supplement 1A and B). An axon was defined as regenerating only when it obviously regenerated beyond the retracted axon stem, and this was assessed independently of the other parameters. The regeneration parameters from various genotypes were compared with those of the WT if not noted otherwise, and only those with significant differences were labeled with asterisks. For VNC injury, the increased length of each axon regrowing beyond the lesion sites was measured and added together. The regeneration index was calculated by dividing the sum by the distance between the A4 and A5 axon bundles (Figure 5-figure supplement 1). Regeneration percentage was assessed independently of the regeneration index. A commissure segment was defined as regenerated only when at least one regenerating axon passed the midline of the commissure region or joined into other intact bundles (Figure 5-figure supplement 1).

Live imaging in flies

Live imaging was performed as described (Emoto et al., 2006; Parrish et al., 2007). Embryos were collected for 2-24 hr on yeasted grape juice agar plates and were aged at 25˚C or room temperature.
At the appropriate time, a single larva was mounted in 90% glycerol under coverslips sealed with grease, imaged using a Zeiss LSM 880 microscope, and returned to grape juice agar plates between imaging sessions.

Behavioral assay

The behavioral test was performed to detect functional recovery after VNC injury as described. The A7 and A8 C4da neuron axon bundles in the VNC, which correspond to the A7 and A8 body segments in the periphery, were injured with a laser (Figure 6A). Since C4da neurons are essential for thermonociception, such a lesion results in an impaired nociceptive response to noxious heat at body segments A7 and A8. We assessed larval nociceptive behavior in response to a 47˚C heat probe at 24 and 48 hr AI. At each time point, the larva was subjected to three consecutive trials, separated by 15 s. In each trial, the heat probe was applied at the A7 and A8 body segments for 5 s. If the larva produced head rolling behavior for more than two cycles, it was scored as '1', otherwise '0' (Figure 6A). The scores of the three trials were combined, and the total score at 24 hr AI was used to determine whether the A7 and A8 bundles were successfully ablated. A larva was defined as recovered only when its total score was below 1 at 24 hr AI but increased to 2 or 3 at 48 hr AI. Those that failed to exhibit such improvement at 48 hr AI were defined as unrecovered. All the injured larvae exhibited normal nociceptive responses when the same heat probe was applied at the A4 or A5 body segment at 24 hr AI.

Cell culture and transfection

HEK293T cells were cultured in DMEM medium supplemented with 10% fetal bovine serum (FBS) and 1× Penicillin-Streptomycin solution (complete medium). Cultures were maintained in a standard humidified incubator at 37˚C with 5% CO2. For western blots, 800 ng of DNA were combined with 2.4 µL of Turbofect in 80 µL of serum-free DMEM. The transfection mixtures were incubated at room temperature for 20 min prior to being added to cells cultured in 35 mm dishes with 2 mL of complete medium. The transfection medium was replaced with 2 mL of serum-free DMEM supplemented with 1× Penicillin-Streptomycin solution after 3 hr of transfection to starve cells overnight. PC12 cells were cultured in F12K medium supplemented with 15% horse serum, 2.5% FBS, and 1× Penicillin-Streptomycin solution. For PC12 neuritogenesis assays, 2400 ng of DNA were combined with 7.2 µL of Turbofect in 240 µL of serum-free F12K. The transfection medium was replaced with 2 mL of complete medium after 3 hr of transfection to recover cells overnight. Twenty-four hours after recovery in high-serum F12K medium (15% horse serum + 2.5% FBS), the culture medium was exchanged for a low-serum medium (1.5% horse serum + 0.25% FBS) to minimize the base-level ERK activation induced by serum.

Optogenetic stimulation for cell culture

For western blot analysis, transfected and serum-starved cells were illuminated for different durations using a home-built blue LED light box emitting at 0.5 mW/cm2. For the PC12 cell neuritogenesis assay, PC12 cells were illuminated at 0.2 mW/cm2 for 24 hr with the light box placed in the incubator.

Optogenetic stimulation for fly

The whole optogenetics setup was modified from previous work (Kaneko et al., 2017). Larvae were grown in regular brown food at 25˚C in a 12 hr-12 hr light-dark cycle. At 72 hr AEL, early 3rd instar larvae were transferred from the food and anesthetized with ether for axotomy.
After recovery on regular grape juice agar plates, larvae were kept in the dark or under blue light stimulation thereafter. A 470 nm blue LED (LUXEON Rebel LED) was set over the grape-agar plate for stimulation. The LED was mounted on a 10 mm square coolbase and a 50 mm square × 25 mm high alpha heat sink, and set under a circular beam optic with integrated legs for parallel, even light. The light pattern was programmed with a BASIC Stamp 2.0 microcontroller and a BuckPuck DC driver (LUXEON, 700 mA, externally dimmable). Local light stimulation was delivered by a 488 nm argon-ion laser using a Zeiss LSM 880 microscope. At 24 hr AI, larvae were anesthetized and C4da neurons were imaged with a confocal microscope. For lesioned axons that bifurcated and formed two branches, we focused the laser beam (at 15% laser power) on the ventral branch for 5 min. The larva was then returned to grape juice agar plates and imaged again at 48 hr AI to assess the increased length of each branch.

Live cell imaging

For the light-induced membrane recruitment assay, BHK-21 cells were co-transfected with optoRaf or optoAKT. Fluorescence imaging of the transfected cells was carried out using a confocal microscope (Zeiss LSM 700). GFP fluorescence was excited by a 488 nm laser beam; mCherry fluorescence was excited by a 555 nm laser beam. Excitation beams were focused via a 40× oil objective (Plan-Neofluar NA 1.30). Ten pulses of 488 nm and 555 nm excitation were applied for each membrane recruitment experiment. CRY2-CIBN binding induced by 488 nm light was monitored by membrane recruitment of CRY2-mCherry-Raf1 (for optoRaf) or CRY2-mCherry-AKT (for optoAKT) to the CIBN-CIBN-GFP-CaaX-anchored plasma membrane. The powers after the objective for the 488 nm and 555 nm laser beams are approximately 40 µW and 75 µW, respectively. Alternatively, an epi-illumination fluorescence microscope (Leica DMI8) equipped with a 100× objective (HCX PL FLUOTAR 100×/1.30 oil) and a light-emitting diode illuminator (SOLA SE II 365) was used for the CRY2-mCherry-Raf1 membrane translocation assay. Neurite outgrowth of PC12 cells was imaged using an epi-illumination fluorescence microscope (Leica DMI8) equipped with 10× (PLAN 10×/0.25) and 40× (HCX PL FL L 40×/0.6) objectives. Fluorescence from GFP was detected using the GFP filter cube (Leica, excitation filter 472/30, dichroic mirror 495, and emission filter 520/35); fluorescence from mCherry was detected using the Texas Red filter cube (Leica, excitation filter 560/40, dichroic mirror 595, and emission filter 645/75).

Western blot

Cells were washed once with 1 mL of cold DPBS and lysed with 100 µL of cold lysis buffer (RIPA + protease/phosphatase cocktail). Lysates were centrifuged at 17,000 RCF and 4˚C for 10 min to pellet cell debris. Purified lysates were normalized using Bradford reagent. Normalized samples were mixed with LDS buffer and loaded onto 10% or 12% polyacrylamide gels. SDS-PAGE was performed at room temperature with a cold water bath. Samples were transferred to PVDF membranes at 30 V at 4˚C overnight or at 80 V for 90 min. Membranes were blocked in 5% BSA/TBST for 1 hr at room temperature and probed with the primary and secondary antibodies according to company guidelines. Membranes were incubated with ECL substrate and imaged using a Bio-Rad ChemiDoc XRS chemiluminescence detector. Signal intensity analysis was performed with ImageJ. Antibodies used are listed in the Key Resources Table.
Statistical analysis No statistical methods were used to pre-determine sample sizes but our sample sizes are similar to those reported in previous publications (Song et al., 2012;Song et al., 2015), and the statistical analyses were done afterward without interim data analysis. Data distribution was assumed to be normal but this was not formally tested. All data were collected and processed randomly. Each experiment was successfully reproduced at least three times and was performed on different days. The values of 'N' (sample size) are provided in the figure legends. Data are expressed as mean ± SEM in bar graphs, if not mentioned otherwise. No data points were excluded. Two-tailed unpaired Student's t-test was performed for comparison between two groups of samples. One-way ANOVA followed by multiple comparison test was performed for comparisons among three or more groups of samples. Two-way ANOVA followed by multiple comparison test was performed for comparisons between two or more curves. Fisher's exact test was used to compare the percentage. Statistical significance was assigned, *p<0.05, **p<0.01, ***p<0.001.
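As an illustration of the comparisons named above, a minimal SciPy sketch follows. The numbers are placeholder data, not values from the study, and the original analyses may well have been run in other statistical software; the snippet only makes the choice of tests concrete (two-tailed unpaired t-test for two groups, one-way ANOVA for three or more groups, and Fisher's exact test for percentages).

```python
# Sketch of the statistical tests described in the text, with made-up data.
from scipy import stats

# Two groups (e.g., regeneration index, dark vs. light): two-tailed unpaired t-test.
dark = [0.31, 0.42, 0.38, 0.45, 0.29]
light = [0.62, 0.71, 0.58, 0.66, 0.74]
t_stat, p_ttest = stats.ttest_ind(dark, light)

# Three or more groups: one-way ANOVA (a post hoc multiple comparisons test,
# e.g., Tukey's, would follow; not shown here).
f_stat, p_anova = stats.f_oneway(dark, light, [0.50, 0.55, 0.48, 0.60, 0.52])

# Percentages (e.g., recovered vs. unrecovered larvae in two groups): Fisher's exact test.
odds_ratio, p_fisher = stats.fisher_exact([[18, 14],   # group 1: recovered / not recovered
                                           [5, 17]])   # group 2: recovered / not recovered
print(p_ttest, p_anova, p_fisher)
```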
Bone microstructure and diagenesis of saurischian dinosaurs from the Upper Cretaceous ( Neuquén Group ) , Argentina The Neuquén Basin in northwestern Patagonia, Argentina, holds the most important record of Cretaceous dinosaurs in South America.The Neuquén Group (Upper Cretaceous) is the richest dinosaur-bearing unit of the basin. It comprises the Río Limay, the Río Neuquén and the Río Colorado subgroups. In this study, dinosaur remains from the Río Neuquén and the Río Colorado subgroups outcropping in Mendoza are examined. In this group, isolated, disarticulated or partially articulated sauropods and theropods are abundant. However, little is known about the diagenetic history of fossil assemblages. In southern Mendoza, three fossiliferous sites were found in the areas of Paso de las Bardas (Quebrada Norte) and Cerro Guillermo (CG1, CG2). This study aims to add to the knowledge of diagenetic processes involving dinosaur remains from the Neuquén Group, as well as their relation to the depositional environment. Histologic features and diagenetic processes of dinosaur bones were analyzed through thin sections in order to interpret the degree of taphonomic alteration. The fossil-diagenetic processes inferred include substitution, fracturing, plastic deformation and different permineralization events. Combined analyses through X-ray diffractometry (XRD) and petrographic studies reveal the substitution of hydroxyapatite by francolite. The presence of fluorine -in one of the casessuggests a link between the elemental composition and depositional environments: floodplain and fluvial channel. Permineralization stages include infilling of vascular canals, trabeculae and fractures with iron oxides and iron carbonate minerals during the burial history. This contribution represents an integral approach to the study of Cretaceous dinosaurs for assessing the diagenetic changes in the bone microstructure and the differential preservation of fossil remains in fluvial environments. This study is a bone histology and diagenetic analysis of Malarguesaurus florenciae (González Riga et al., 2009), and undetermined sauropod and theropod remains from Mendoza.In this context, the study of the microstructure makes it possible to discern whether the changes in the bone are of biological origin or generated during diagenesis. Geological setting The Neuquén Basin is perhaps the best-known sedimentary basin in Patagonia with abundant occurrences of terrestrial and marine fossils.This basin is located at the eastern side of the Andes in west-central Argentina between 32º and 40º latitude South (Fig. 1).It covers an area of over 120,000 km 2 and comprises a nearly continuous record of up to 6,000 m of stratigraphic thickness from the Late Triassic to Early Cenozoic (Schwarz, 2012).This sedimentary record includes continental and marine siliciclastics, carbonates and evaporites accumulated under a variety of basin styles, including syn-rift, postrift/sag and foreland phases (Legarreta and Uliana, 1991;Howell et al., 2005).The triangular-shaped basin (Fig. 1) shows two main regions, the Andean thrust and fold belt to the west and the Neuquén embayment to the east and southeast (Schwarz, 2012). In southern Mendoza, in Paso de las Bardas and Cerro Guillermo study areas, the Río Limay Subgroup is the most ancient outcropping strata of the Neuquén Group.The silty-shaly unit in the Cerro Lisandro Fm. 
-on the top of the Río Limay Subgroup- is covered by sandstones and shales from the Río Neuquén and Río Colorado subgroups (González Riga, 2002). These subgroups include sedimentary sequences composed of alluvial plains and channel complexes that periodically alternate, forming two distinct facies associations. In the Paso de las Bardas, the Portezuelo and Plottier formations -Río Neuquén Subgroup- are well exposed (González Riga et al., 2009). However, the outcrops situated northwards and eastwards of the Cerro Guillermo are correlated to the Bajo de la Carpa and Anacleto formations -Río Colorado Subgroup- (Previtera, 2011).

Facies associations and paleoenvironments

The architectural arrangement of the units in Paso de las Bardas and Cerro Guillermo shows multi-story sandstone bodies with fining-upward sequences and lateral accretion surfaces, suggesting the presence of high-sinuosity rivers -meandering systems- (Previtera, 2011). Especially in the Quebrada Norte site, Paso de las Bardas (González Riga et al., 2009, Fig. 2), architectural elements (sensu Miall, 1996) were recognized representing different fluvial sub-environments, such as floodplain fines (FF), crevasse splay (CS), and crevasse channel (CR). These sequences (Table 1) are composed of a mainly tabular fine member, with greater thickness than the coarse member, dominated by overbank fines (Fm, Fl); crevasse splay (St, Sp, Sh); and crevasse channel deposits (Sm, Sh, St, Sp) (Fig. 3A, B). The Cerro Guillermo area comprises extensive outcrops of red pelitic facies interbedded with gray-brown sandy fluvial channels. In this paper, the lithological features are described (Table 1). Two paleontological sites in Cerro Guillermo (CG1 and CG2, previously referred to in Fig. 1) are here analyzed. Both sites are composed of the following lithofacies (Fm, Fl, P, Sm, Sh, St, Sr, Sp), which represent the following architectural elements within the fluvial environment (Table 1): floodplain fines (FF) (Fig. 3C); crevasse splay (CS); crevasse channel (CR); and fluvial channel (CH) (Fig. 3D). The fining-upward trend of the units analyzed is linked to a progression in the fluvial sub-environments, which starts with the deposition of channels and overbank deposits at the base of the Portezuelo, Plottier and Bajo de la Carpa formations in the Paso de las Bardas and Cerro Guillermo areas. These sections culminate in floodplain deposits with scarce lenticular sand channels and abundant sheet flood deposits on the top of the Portezuelo Formation in Paso de las Bardas (Previtera, 2011).

X-ray diffractometry (XRD)

Qualitative analysis of crystalline solids was performed with a PANalytical X'Pert PRO diffractometer using a copper lamp with a nickel filter, operated at 40 kV and 40 mA, at a scanning speed of 1°/min, between 3° and 60° 2θ, and in some cases between 3° and 40° 2θ, since the main reflections of iron oxides, hydroxides and sulphates lie in this range. For the preparation of the bone IANIGLA-PV.113-7 and the rock that surrounded the fossil, the samples were dried at room temperature, powdered (~10 g) in an agate mortar and then introduced into the PANalytical X'Pert PRO X-ray diffractometer for approximately 2 hours. The analysis by X-ray diffractometry on the milled samples showed the presence of crystalline structures. Institutional Abbreviations: IANIGLA-PV: Instituto Argentino de Nivología, Glaciología y Ciencias Ambientales, Mendoza, Argentina, Paleontología de Vertebrados.
Pre-burial and Post-burial Modification The appendicular bones of Malarguesaurus florenciae recovered in a floodplain facies (IANIGLA-PV.110-8;IANIGLA-PV.110-14) exhibit fractures assigned to pre-fossilization weathering.They display longitudinal cemented fractures parallel to bone fibers (stage 1 of Behrensmeyer, 1978) showing a low grade of pre-burial cracking.These fractures produced by pre-burial subaerial exposure and cemented during the post-burial stages.The bone remains show no evidence of abrasion, being thus assigned to the category 1 ("intact bone" of Alcalá, 1994).They are covered by an outer calcareous crust, likely inhibiting the influence of other alterative agents (e.g., abrasion). The sauropod bones (IANIGLA-PV.113-7;IANIGLA-PV.113-9), also found in floodplain facies, show a high grade of pre-burial cracking and flaking (stages 1-3 of Behrensmeyer, 1978).Outermost concentric thin bone layers, of appendicular bones and ribs show flaking usually associated with splintered cracks.The appendicular bones display longitudinal cemented fractures parallel to bone fibers (stage 1 of Behrensmeyer, 1978) and some of them show transverse fractures.In some sectors, a deeper and more extensive flaking occurs until most of the outermost bone is gone.Thus, the inner cancellous bone of the epiphysis is exposed or absent (Smith, 1993).The weathered bones were more vulnerable to breakage and abrasion (Marshall, 1989).In this case, a moderate rounding of broken edges of bones (category 2 of Alcalá, 1994) was produced.The bones show no evidence of outer calcareous crusting (Previtera, 2011). In contrast, the incomplete long bone found in a fluvial channel-lag (IANIGLA-PV.116-1)shows no evidence of pre-fossilization weathering as a result of subaerial exposure, predation or trampling.However, it displays intense processes of abrasion and selection by hydraulic transport.This bone exhibits cemented fractures perpendicular to the bone long axis occurred during fossil-diagenetic stages (Fernández López and Fernández Jalvo, 2002).Furthermore, post-fossilization weathering is evidenced by the presence of non-cemented fractures reflecting exhumation events (Previtera, 2011). Bone microstructure and diagenesis This section includes a detailed description of the histology features and the diagenetic changes of each specimen to indicate their preservation state (Tables 2 and 3).The growth patterns of the minerals indicate relative time of formation . Malarguesaurus florenciae (González Riga et al., 2009) Thin sections of a right femur (IANIGLA-PV.110-8) show a thick cortex composed mostly of compact tissue with a high degree of secondary remodeling.The fibrolamellar bone is distributed into some interstices with primary osteons embedded in a woven bone matrix (Fig. 4A, Table 2).The perimedullary region shows a Haversian bone tissue with secondary osteons easily distinguishable by the presence of cementation lines (Fig. 4B).These osteons are the result of a process of secondary reconstruction, involving the removal of bone around a primary vascular canal, followed by subsequent redeposition of concentrically arranged lamellar bone in the erosion cavity (Chinsamy, 1997).The medullary region is obliterated due to the intense fracturing and cracking. 
The microstructure analysis of the Malarguesaurus shows two types of preservation (Table 3).In the femur, the original tissue is well preserved with vascular canals and primary osteons mainly filled by iron oxides and calcite.However, the bone tissue shows compaction and late-diagenesis distortion (Fig. 4E).Permineralization stages include: (1) an initial ingress of silt minerals in vascular canals and fractures, then (2) iron oxide infiltration and followed by (3) calcite (CaCO 3 ) precipitation in the remaining pore space (Fig. 4F). In the appendicular bone, the original microstructure is distorted by compaction and mineral growth.Abundant fractures associated with mineral growth distorting the structure of the vascular canals are observed.Permineralization events include: (1) an initial growth of isopachous fibrous calcite in vascular canal walls followed by (2) precipitation of iron carbonates (e.g., siderite) in vascular canals and fractures, and finally (3) deposition of drusy calcite cementation in vascular canals and cancellous spaces (Fig. 4G). The presence of siderite (FeCO 3 ) is confirmed by the dark brown crystals, rhombohedrons in clusters, and of typical globular structure.The bone has an outer calcareous crusting that suffered dissolution in some sectors favoring fibrous calcite cementation in the cortical wall (Fig. 4H).Near the cortex, tunnel-like biological inclusions are observed, likely caused by microorganisms (bacteria and/or fungi) involved in post-mortem bone destruction (Lyman, 1994) (dashed lines in Fig. 4I).The thin section shows the tunnels oriented longitudinally, parallel to the osteonal canals.This microbial alteration is manifested as circular or oval destructive foci surrounded by a relatively dense mineralized wall (Trueman et al., 2004). Sauropoda gen. et sp. indet. The diaphysis section of the femur (IANIGLA-PV.113-7) shows a compact cortex surrounding a central cancellous region.Fibrolamellar bone contains primary osteons arranged in a plexiform pattern due to the presence of radial canals (Fig. 5A, Table 2).Haversian dense reconstruction is recognized by well-developed secondary osteons (Sander, 2000) growing above the primary structure (Fig. 5B).Medullary cavity shows large cancellous spaces and an intricate network of thin bony trabeculae forming islets (Fig. 5C). The thin section of the dorsal rib (IANIGLA-PV.113-9) has a cortical region composed of fibrolamellar tissue with primary osteons embedded in a woven bone matrix surrounding a central medullary cavity (Fig. 5D, Table 2).In the inner cortex, the perimedullary region displays resorption cavities infilled by centripetally deposited lamellar bone and secondary osteons (Fig. 5D).The medullary zone shows cancellous spaces and bony trabeculae.In this region, several layers of circumferential endosteal lamellar tissue are deposited along the boundaries of the erosion cavities (Fig. 5E).The microstructure of the femur reveals that the main diagenetic processes affecting the bones were permineralization, compaction and deformation (Fig. 5F, G, Table 3).The primary and secondary osteons are mainly filled by an initial precipitation of iron oxides followed by calcite cementation (Fig. 5F).The medullary region displays the same infill sequence of iron oxides and calcite in trabeculae and cancellous spaces (Fig. 5G). The rib microstructure displays different episodes of mineralization (Fig. 
5H, I, Table 3).The cortical region shows secondary osteons and vascular canals filled by iron oxides and calcite (Fig. 5H).The medullary cavity contains trabeculae and cancellous spaces infilled by calcite and dolomite with high iron content likely "ferro-dolomite" [CaFe (CO 3 ) 2 ].According to the optical criterion, ferro-dolomite content is confirmed by the presence of rhomboid crystals (Scasso and Limarino, 1997).Figure 5I shows cancellous spaces with the following cementation events: (1) an initial isopachous fibrous calcite precipitation, then (2) ingress of rhombohedral ferro-dolomite and finally (3) blocky calcite cementation. Theropoda gen. et sp. indet. The thin section of the incomplete long bone (IANIGLA-PV.116-1) shows compact cortex surrounding a central cancellous region.The cortical region is composed of fibrolamellar bone tissue.It contains longitudinally-oriented simple canals and primary osteons (Fig. 6A, Table 2).The rate of bone deposition is cyclical and is termed "zonal bone", and the resulting growth marks are the zones and annuli.Lines of arrested growth (LAGs) are present in the outer and mid-cortical region (arrows in Fig. 6B).Toward the inner cortex, the perimedullary region is dominated by secondary osteons resulting from a process of Haversian reconstruction (Fig. 6C-E).Inwards, a zone of coarse cancellous bone is observed.The medullary cavity shows large cancellous spaces and endosteal bony trabeculae (Fig. 6F).The well-preserved bone microstructure shows secondary osteons and vascular canals cemented by (1) iron oxides and then (2) calcite (Fig. 6C-E, Table 3).Some sectors of the sample exhibit vascular canals; secondary osteons and cancellous spaces filled by two types of cement: (1) first siderite and then (2) calcite (Fig. 6F, G).In the perimedullary region, an important fracture reveals episodes of cementation by: (1) first calcite and then (2) semi-isotropic zeolite (Fig. 6H).The bone displays cracking and non-cemented fractures superimposed onto secondary osteons showing a pattern of iron oxide alteration in the rim (Fig. 6I). Host Rock The analyzed sample consists of the sandy mudstone that was found surrounding the femur previously described.XRD revealed the presence of quartz, calcite, plagioclase, illite and potassium feldspar as the main phases in the diffractogram (Fig. 7B).The quartz is distinguished by its characteristic reflections Å (20.8q, 26.6q).In this analysis, muscovite and iron minerals were not observed.However, they were recognized through petrographic sections.The cross section (Fig. 7C) shows the mineral composition composed of a granular fraction where the main minerals are quartz (75%) and feldspar (25%).Muscovite appears as the secondary mineral.In the sample, silty-clay matrix (≤50%) is observed, as well as a blocky calcite cement covering several grains of sediment (poikilotropic cement) or microcrystalline (ferrous carbonate) in more permeable sectors of the rock.In some sectors, cracks and burrows filled with drusy calcite cement and mottled iron oxides can be observed.This latter is a consequence of the differential growth of carbonates within the calcic soil (paleosols level). 
Histological implications The histological examination of dinosaurssauropods and theropods-reveals similar bone microstructure and growth patterns (as summarized in Table 2).The cortical zone of the all bones displays a preponderant well-vascularized fibrolamellar tissue indicating a high rate of bony deposition (Amprino, 1947;Chinsamy, 1993;Curry, 1999).The highly vascularized fibrolamellar tissue of the sauropod specimens can be assigned to the ontogenetic stage HOS-9 (Klein and Sander, 2008).The HOS-9 stage occurs in animals of up to 75% of adult size and it is coincident with the juvenile stages of Apatosaurus proposed by Curry (1999).Particularly, in the theropod bone, discontinuities in growth are noted, either interrupted or sustained, evidenced by the presence of annuli and LAGs indicating periodic arrests in growth.Despite the inferred rapid growth, bone deposition appears to have ceased occasionally as evidenced by arrest lines observed within the tissue.Similar features have been elsewhere reported in theropods (e.g., Madsen, 1976;Chinsamy, 1990;Varricchio, 1993). The abundance of fibrolamellar cortical tissue and absence of EFS (External Fundamental System) in the sauropod specimens indicates that they were still growing at the moment of death (Klein and Sander, 2008).On the other hand, the existence of discontinuities in the theropod bone suggests changes in the growth rate.Likely, the lines of arrested growth reflect physiological stress due to environmental perturbations.According to Varricchio (1993), these lines suggest a growth cessation associated to a seasonal/annual environmental change.Starck and Chinsamy (2002) have suggested that LAGs are an expression of a high degree of developmental plasticity, which is the capability to respond to changes in the environment by evoking different developmental regimes (Smith-Gill, 1983).According to this study, the development of LAGs as a response to unfavourable environmental conditions, could be attributed to either the tectonic activity or to the relatively arid conditions during the deposition of Neuquén Group (Martinsen et al., 1999;González Riga, 2002). Taphonomic pathways Precipitation and mineral replacement are two of the complex diagenetic processes, which occur during infilling of openings in the bones (Downing and Park, 1998;Williams and Marlow, 1987;Pate et al., 1989;Piepenbrink, 1989;Wings, 2004).The dinosaur bones analyzed here show similar processes of mineralization and compaction (as summarized in table 3).However, slight differences in the types of cements precipitated and in the number of diagenetic events that occurred in the burial environments (floodplain and fluvial channel) have been recognized. Prior to burial, the sauropod bones deposited in the floodplain likely underwent the following processes of pre-fossilization weathering: temperature changes, solar radiation, saturation and desiccation, all common in environments with episodic sedimentation (Bridge, 2003).Behrensmeyer (1978) described similar features in bones under arid or saline conditions.In the case here described, the presence of eolian sandstones at the Cerro Colorado section -Río Neuquén Subgroup- (González Riga, 2002) laterally correlated to the Malarguesaurus site suggests the development of sub-arid episodes within the floodplain deposits (González Riga et al., 2009). 
After the burial, these bones experienced plastic deformation, a series of permineralization stages, and substitution.During early stages of diagenesis, voids and fractures were cemented by iron oxides (e.g., hematite), after which precipitation of iron carbonates (e.g., siderite), calcite and calcium iron carbonates (e.g., ferro-dolomite) took place.The Fe is likely present as hematite in the superficial part of the soil.The calcite occurred at deeper levels in the soil and shows a radial growth pattern and iron enrichment.This indicates local reducing conditions under the water table during precipitation, as it is described in previous research (Behrensmeyer et al., 1995;Retallack, 2001;Clarke, 2004).Calcite is present in almost all samples, indicating its importance as a void filler.Calcite appears to have been precipitated during the later stages of diagenesis, as suggested by Flügel (1982).This main cement occurs in both spongy and compact bone and it is represented by at least two generations: fibrous calcite cementation and blocky calcite generation (Previtera et al., 2016).The siderite often found in bedded sedimentary deposits with a biological component suggests a biogenic origin under low-oxygen and low-Ph conditions (Lim et al., 2004).Siderite perhaps indicates a special microbial influence in these bones (Mortimer et al., 1997).Furthermore, the presence of tunnel-like biological inclusions in appendicular bones, probably caused by microorganisms (bacteria/or fungi), could be attributed to the early stages of post-mortem bone decomposition.Bone decay microorganisms leave evidence creating tunnels or surface with partial erosion visible in thin sections.In all cases, it is possible to recognize the recrystallization of bone minerals (Lyman, 1994).The occurrence of calcium iron carbonates (ferro-dolomite), characterized by typically geopetal growth, indicates a deep diagenesis stage.These remains show substitution of biogenic apatite by the francolite variety in which PO 4 3-is substituted by CO 3 2-and OH -by F - (Elorza et al., 1999;Elliott, 2002).The presence of fluoride is, therefore, an indicator of diagenetic ion exchange through interaction with ground water (Hollocher et al., 2005).This type of replacement is not observed in the theropod bone from the fluvial channel. The theropod bone transported by the fluvial channel-lag displays well-preserved bone microstructure.However, it experienced intense abrasion and selection by hydraulic transport.The bone shows vascular canals, secondary osteons and cancellous spaces cemented by iron oxides, siderite and calcite.In the perimedullary region, a pre-burial fracture reveals two cementation episodes of calcite and semi-isotropic zeolite.Furthermore, the bone shows non-cemented fractures showing a pattern of iron oxide alteration on the rim produced by the contact with air.These "open" fractures, indicating post-fossilization weathering processes -subaerial exposure-flaking and fracturing, occurred during exhumation events. In summary, during the burial history (Fig. 
In summary, during the burial history (Fig. 8), the saurischian bones went through compression processes as a result of lithostatic pressure, permineralization and fracturing. Initially, iron oxide coatings and clay/silt sediments were deposited in vascular canals and cancellous spaces. Later, lithostatic pressure caused a series of plastic deformations in the bones after they lost their collagen fibers. In bone voids and pre-burial fractures, a series of permineralization events involving different minerals took place. The final exhumation processes are evidenced by the post-fossilization weathering, flaking and fracturing that occurred during telodiagenesis. These processes are the result of differences in burial depth, temperature and geostatic pressure suffered by the fossils in each burial environment. Similar diagenetic features have been identified in other vertebrate remains (Holz and Schultz, 1998; Wings, 2004; Reichel et al., 2005; González Riga and Astini, 2007; González Riga et al., 2009; Casal et al., 2013; Previtera, 2011, 2013; Previtera et al., 2013; Previtera et al., 2016).

Conclusions
In this paper, a bone histology and a diagenetic analysis of saurischian remains from the Neuquén Group have been carried out. Histological examination of these subadult/adult individuals reveals a predominance of fibrolamellar bone tissue suggesting rapid periosteal osteogenesis and an overall fast growth. However, the existence of growth rings in the theropod bone indicates periodic interruptions of growth, probably related to environmental stress and a flexible growth strategy.

From a fossil-diagenetic viewpoint, dinosaur remains found in floodplain and fluvial-channel facies show similar events of permineralization and compaction. However, slight differences in the types of cements precipitated and in the number of diagenetic events that occurred during the burial history have been recognized. During exposure on the ground, the sauropod carcasses were likely affected by subaerial decay, weathering and entombment. In contrast, the theropod bones were transported by hydraulic currents, undergoing intense selection and abrasion. The diagenetic processes observed comprise substitution, fracturing, plastic deformation and permineralization events. XRD and petrographic analyses confirm the substitution of hydroxyapatite by francolite in the bone microstructure. The fluorine detected in the sauropod femur was not found in the theropod long bone, confirming the differences between the two samples and their depositional environments (floodplain and fluvial channel, respectively). During early stages of diagenesis, bone voids and pre-burial fractures were filled by iron oxides, after which precipitation of calcite and iron carbonates took place. In both fluvial channel and floodplain facies, the dominant authigenic mineral is hematite and the main cement of bone voids is calcite. The occurrence of geopetal structures, typical of ferro-dolomite, indicates a deep diagenesis stage and provides useful information about taphonomic processes of reburial. The final exhumation processes are evidenced by the post-fossilization weathering, flaking and fracturing that occurred during telodiagenesis. These processes are the result of differences in burial depth, temperature and geostatic pressure suffered by the fossils in each burial environment. The combined use of XRD and petrographic analysis provided a fundamental tool that enabled a better understanding of the diagenetic pathways and differential preservation of dinosaur bones in fluvial environments.
FIG. 1. Location map of the Neuquén Basin showing the Paso de las Bardas and Cerro Guillermo areas, Mendoza, Argentina.
FIG. 3. Detail photographs of fluvial levels from the Neuquén Group in the studied areas. A. Panoramic view of the overbank deposits in the Paso de las Bardas area; B. Detail of the crevasse channel with cross-bedded lenticular sandstones; C. Paleosol horizons underlying the channel belt in the Cerro Guillermo area; D. Channel belt complexes with lateral accretion surfaces interbedded with ripple cross-laminated sandstones in the Cerro Guillermo area.
FIG. 7. X-ray diffraction and petrographic analyses. A: Femur fragment (IANIGLA-PV 113.7) indicating the predominance of fluorapatite over hydroxyapatite; B: Sandy mudstone showing its main mineral phases; C: Photomicrograph of the host rock in cross-polarized light. Scale bar equals 1 mm.
2018-12-01T09:22:36.282Z
2017-01-31T00:00:00.000
{ "year": 2017, "sha1": "a0c906903b7235e4776fd88916a464aea440cf66", "oa_license": "CCBY", "oa_url": "http://www.andeangeology.cl/index.php/revista1/article/download/V44n1-a03/pdf", "oa_status": "GOLD", "pdf_src": "ScienceParseMerged", "pdf_hash": "a0c906903b7235e4776fd88916a464aea440cf66", "s2fieldsofstudy": [ "Geology" ], "extfieldsofstudy": [ "Geology" ] }
149497262
pes2o/s2orc
v3-fos-license
Effect of lidocaine on the safety of postoperative skin reconstruction after malignant melanoma resection Malignant melanoma is a type of skin cancer with high morbidity and mortality. Therapeutic strategies should be individualized in order to increase the survival rate. The aim of this study was to assess the effect of lidocaine on skin healing and immune function after operation of patients with melanoma. Sixty patients with melanoma were selected from those treated in Bishan Hospital between August 2014 and August 2016. The patients were randomly divided into lidocaine group and control group. Lidocaine group was locally treated with an intradermic injection of 2% lidocaine solution in dose of 1.5 mg/kg and the control group received the same quantity of saline solution. In the lidocaine group, the rates of skin temperature, drug reaction, healing and infection were higher than the corresponding rates in the C group. The local application of lidocaine can promote wound healing to a certain extent, reduce pain, and promote postoperative skin reconstruction. Introduction Malignant melanoma has always been an alarming health problem, with high morbidity and mortality (1). The etiology of melanoma is not completely identified, comprising different risk factors, such as ultraviolet radiation exposure, genetics, chemical exposure and HPV infection (2)(3)(4). The therapeutic strategies are individualized according to the stage of the disease and the prognosis. The 5-year survival rate, also depends on the stage and the prognosis (5). If the malignant cells have reached the lymph nodes, surgical resection is the main treatment of solid tumors. However, in the perioperative period due to various factors, such as surgery and anesthesia, tumor cells may enter blood circulation, lymphatic channel, bone marrow, and even spread to all the tissues and organs. This fact can lead to formation of micro-metastases, increase the risk of postoperative tumor recurrence and metastasis, and also affects postoperative survival rate (6). Thus, the challenge in the treatment strategies, is the combination of chemotherapy, radiation therapy, immunotherapy and targeted therapy, in order to combat the effects of this aggressive disease. There is a necessity of new treatment options (7)(8)(9)(10). In recent years, it has been found that most anesthetics can influence the function of the immune system and target the residual disease or make cells able to form micro-metastasis (11). Regional anesthesia was associated in some studies with reduced risk of cancer recurrence and this can be associated with the anti-inflammatory effects of local anesthetics as lidocaine that can influence the proliferation, migration or invasion of cancer cells (11). Cluster of differentiation 31 (CD31) cells plays an important role in inflammation, oxidative stress generation, cell differentiation, angiogenesis and fibroblasts migration and might be closely related with the skin wound healing process. Sumpio et al determined that CD31 cells play an important role in the formation of tumor blood vessels and its expression can accurately reflect the number of tumor blood vessels (12). Lidocaine directly regulates the molecular and cellular biology of the tumor (11). It is widely believed, that it can not only reduce the gene expression of voltage gated sodium channels (VGSC), but it also inhibits the migration and invasion of tumor cells in vivo. 
Moreover, it can inhibit tumor growth and proliferation by demethylation of deoxyribonucleic acid (DNA) or via the mitogen-activated protein kinase (MAPK) pathway (11). On the other hand, a large number of studies have shown that lidocaine can indirectly affect tumor prognosis by regulating the function of the immune system (11). This study investigated the effect of lidocaine, via CD31 cell modulation, on wound healing, skin temperature and the incidence of infection in vitro and in vivo.

Patients and methods
Patients. Sixty patients with malignant melanoma treated in the Bishan Hospital from June 2015 to January 2017 were included in the study. The mean age of the patients was 50±7.2 years. All the patients included in the study were able to understand and use the visual analogue scale (pain score), did not have a history of lidocaine allergy, and did not suffer from diabetes or immune system diseases. Also, they presented normal heart, liver and kidney function. The study was approved by the Ethics Committee of Bishan Hospital (Bishan, China), and all the patients signed an informed consent to participate in the study.

Inclusion criteria. Patients qualified for inclusion based on the following criteria: a primary, superficial tumor; a single lesion with a surface diameter (R) of R≤10 cm and level 1 (malignant melanoma in situ, tumor thickness ≤1.5 mm) (13); no lymph node or distant metastasis; and complete resection of the tumor tissue, i.e., the result of the first frozen-specimen examination after the operation was malignant melanoma.

Preoperative procedures. The diameter of the tumors on the patient's body surface was measured and recorded, and the resection area was marked. The effect was evaluated by comparing the sign and symptom index before and after treatment: Curative effect index = [(total integral before treatment - total integral after treatment)/total integral before treatment] x 100. The grade of healing was evaluated as follows: class a healing: good healing and no adverse reactions; class b healing: poor healing with an inflammatory reaction at the healing site, such as redness and swelling, hard knots, hematoma and effusion, but not festering; class c healing: abscess of the incision, which requires incision and drainage.

Surgery. Before the surgery, color Doppler ultrasonography and CT examination of the superficial lymph nodes were performed to diagnose metastasis of the tumor and assist in determining the tumor stage. The location and size of the tumor were determined according to the diagnosis. Local anesthesia was used for the operation and the supine position was selected for patients. The range of resection was determined according to the preoperative skin contrast examination and MRI. The melanoma was resected with at least 2 cm lateral and deep surgical margins when possible, or including the deep fascia. Then tension-free hernioplasty was applied to repair the wound, to prevent secondary deformity of the wound, and to maintain a good form in keeping with esthetic principles.

Postoperative wound intervention. The 60 patients included in the study were randomly divided into 2 groups, the lidocaine group and the control group. The lidocaine group received an intradermal injection of 2% lidocaine solution (approval no. H11020558; Beijing Yongkang Pharmaceutical Co., Ltd., Beijing, China) at a dose of 1.5 mg/kg and the control group received the same quantity of 0.9% saline solution. The lidocaine was diluted in 0.9% saline solution.
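For readers who wish to reproduce the scoring described under Preoperative procedures, a minimal sketch of the curative-effect index calculation is given below. Python is used purely for illustration; the function name and the example integrals are hypothetical and are not taken from the study.

def curative_effect_index(total_integral_before, total_integral_after):
    # Curative effect index = [(total integral before treatment
    #   - total integral after treatment) / total integral before treatment] x 100
    if total_integral_before <= 0:
        raise ValueError("total integral before treatment must be positive")
    return (total_integral_before - total_integral_after) / total_integral_before * 100.0

# Hypothetical example: a sign-and-symptom integral falling from 18 to 6 after treatment
print(round(curative_effect_index(18, 6), 1))  # 66.7 (percent improvement)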
The pain score values, before and after the treatment, were recorded. After the treatment debridement was performed in both groups. Determination of CD4 T-cell percentage in peripheral blood. From all the patients included in the study venous blood was collected at 3 time-points: T0, before the surgery; T1, 4 h after the surgery; and T2, 24 h after the surgery. Lymphocyte CD4 T-cell percentage was determined using BD FACSCount analyzer (BD Biosciences, Franklin Lakes, NJ, USA). Briefly, the reagent tubes were brought to ambient temperature and vortexed upright for 10 sec before using. For analysis, 50 µl of whole blood was added to the CD4 reagent tube containing CD3/CD4 PE monoclonal antibodies (BD Biosciences). The tube was incubated in the dark for 30 min at room temperature and 50 µl of fixative (5% formaldehyde in PBS) was added and vortexed before reading on Becton-Dickinson FACS machine according to manufacturer's instructions using cell count software. Immunomagnetic separation to collect CD31. The blood collected at 72 h after surgery from all the patients was used for immunomagnetic separation of CD31 marked immune cells. The sorted cells were added to complete cell culture medium for endothelial cells (including EBM, 2 basal medium enriched with recombinant human epidermal growth factor, insulin-like growth factor, recombinant human fiber cell growth factors, endothelial cell growth factor, fetal bovine serum, ascorbic acid and heparin). The CD31 + cells suspended into DMEM/F-12 medium containing 10% FBS were inoculated into a 6-well plate (2x10 3 /cm 2 ) containing collagen of rat tail. The plates were incubated at 37˚C, in 5% CO 2 and 95% humidity atmosphere. The culture medium was replaced once every 2 days. The cell suspensions were then added to 24-well culture plate (0.75 ml in each well, corresponding to 3x10 5 cells). In the lidocaine group we used 1.5 mg/kg 2% lidocaine (0.75 ml solution included 2% lidocaine, 0.9% normal saline and CD31 culture solution). VAS pain score. Score from 0-10, score 0 point: No pain; score <3 points: Slight pain but the patient can tolerate it; score 4-6 points: The patient is in pain and the pain can affect sleep but it still is tolerable; score 7-10 points: The pain of patient is a growing pain and the pain is unbearable. Statistical analysis. For statistical analysis SPSS software (version 20.0; IBM Corp., Armonk, NY, USA) was used. The data were expressed as mean ± standard deviation (mean ± SD). For assessing the differences between the groups, the Mann-Whitney U test was used for numerical data and Chi-square test was used for categorical data. P<0.05 was considered to indicate a statistically significant difference. Results Recurrence of disease. All patients were followed up from 6 months to 2 years. The control group had 2 cases of local recurrence, 2 cases of cervical lymph node metastasis, 1 case of death, and no recurrence or metastasis. On the other hand, the lidocaine group had 1 case of metastasis, 6 months after surgery (Table I). Skin infection. In patients from lidocaine group no infection was observed compared with the control group, one patient developed infection after the surgery. The infection was caused by the inappropriate protection of the wound and it was not associated with the treatment (Table II). Skin temperature. The difference in skin temperature between the surgical side and healthy side in lidocaine group was significantly decreased compared with the control group at every time-point. 
In lidocaine group the difference between body temperature at surgery side and body temperature at healthy side was low and this difference decreased in time. In control group the trend was different, increasing in the first 10 h, then slightly decreased and continued to increase with time (Table III). Table III. The difference in body temperature between surgical side and healthy side at different times after the surgery in the two groups. Adverse drug reaction. Regarding the incidence of adverse reactions to the treatment, no drug allergic reaction was observed in either group. In vitro experiment on cell culture for CD31 cell induction. After induction of CD31 cells in the two groups, their levels were significantly increased in lidocaine group compared to control group. The ratio of CD31 in lidocaine group was 45.54±0.03% and the ratio of CD31 in the control group was 28.37±0.02% (P<0.05) (Fig. 1). Wound healing time. The wound healing time in the experimental group was 4-7 days according to the clinical examination records. The control group healing time averaged from 6 to 10 days. According to the value distribution, the treatment time was not normally distributed, so the rank-sum test was adopted in both groups. The difference between the two groups was statistically significant, and the wound healing time in the experimental group was significantly shorter than that in the control group (Table V). CD4 T-cell percentage in peripheral blood. At T0 before the surgery there were no differences in the CD4 T-cell percentage in the two groups. We observed that in both groups the percentage of CD4 T-cell decreased after surgery. In comparison with the control group, in lidocaine group the CD4 T-cell percentage was significantly increased at T1 and T2 (P<0.05) (Table VI). Discussion According to bibliographic data, the incidence of melanoma in China is lower than in Europe, with different clinical features and prognosis (11). Studies have found that in domestic settings the ratio between men's and women's incidence ratio is 0.87:1 in China, in Brazil 1:1.4 and in Scotland 1:1.6 in 50 patients with boundary (12). In this study, patients were divided into two groups and it was shown that the difference of average survival time in melanoma patients treated with lidocaine was not significant. Lidocaine is a type Ib anti-arrhythmic agent and sodium channel antagonist commonly utilized in the cardiac and pain conditions (14). Via its sodium channel blocking the neural conduction is reduced and impeded, leading to its antiarrhythmic and anesthetic properties. Unlike other sodium channel blocking AEDs, such as phenytoin (also a class Ib anti-arrhythmic), its structure includes an aromatic and amine chain motif allowing the binding to the sodium channel via both the channels pore-lining phenyl binding site, or via the external amine chain site, both of which lead to the reduction of ion transport across the cellular membrane. Lidocaine hydrochloride has the advantages of quick action, safety, strong penetration, and possibility of repeatable use. Lidocaine can more rapidly reduce harmful stimulation of peripheral nerve excitability, alleviate nerve dysfunction, achieve the goal of relieving itching, improve local blood flow into the body's normal regulating function recovery (15,16). Pain is the fifth most important inducer of changes in body temperature, pulse, breathing and blood pressure. 
In this study, in addition to physical factors, pain may be caused by related psychological stress, such as the fear of disease recurrence, the speed of skin recovery, the spouse's attitude and worries about the children. In order to improve postoperative quality of life, painless treatment is advocated clinically. According to the relevant literature, the appropriate method for anesthesia management is direct application to the wound as surface anesthesia (17). Similar to previous studies, this study used lidocaine directly in post-operative skin monitoring and found that it can reduce the pain of debridement. Lidocaine can effectively treat a wide range of wound pain, depending on the scope and drug safety criteria. In clinical practice, medical staff can calculate the dosage according to the size and area of the wound, while accounting for the safe range of the medication. Zhou et al pointed out that the CD31 expression level in tumor cells could be used as an indicator to monitor disease progression and tumor recurrence risk (18). Recently, some researchers added lidocaine to MCF-7 breast cancer cells and to normal breast epithelial MCF-10 cells as an intervention. They observed cell viability by immunofluorescence staining, DNA fragmentation and western blot (WB) analysis, and the results showed that lidocaine can inhibit cancer cell activity (19). Interestingly, in the process of cell induction, we found that lidocaine interfered with CD31 cells by stimulating both proliferation and apoptosis. Therefore, we hypothesized that the application of lidocaine on the wound could have some effect on the proliferation of tumor cells. Lymphocytes have a significant influence on the postoperative recovery of patients with melanoma, and a good prognosis is correlated with the content of the intratumoral inflammatory infiltrate and the levels of circulating immune cells (20,21). Previous studies have shown that surgical stress can inhibit lymphocyte proliferation and accelerate apoptosis, leading to lower lymphocyte numbers in the blood circulation (22,23). Studies have shown that lidocaine, through its anti-inflammatory effects and regulation of the HPA axis, affects immune-induced inflammatory changes, further affecting lymphocyte proliferation and apoptosis and thus reducing the postoperative immunosuppressive state (11). At the T1 and T2 time-points, the proportions of lymphocytes in both groups of patients were lower than at the T0 time-point (P<0.05). These results are consistent with previous research conclusions that lidocaine reduces the healing time. Some researchers found that lidocaine, through its anti-inflammatory effects and regulation of the HPA axis, can improve the perioperative activity of lymphocytes, which is confirmed by our study, in which we showed an increase in the number of CD4+ lymphocytes at T2. However, the molecular mechanism of the interaction between lidocaine and lymphocytes is not yet fully understood, and the relevant mechanism needs to be investigated. Skin temperature, skin tone, tension and capillary refill time are the main indications of blood circulation in the body, and skin temperature is the only objective indicator among them. In conclusion, the local application of lidocaine can promote wound healing to a certain extent, reduce pain, and promote postoperative skin reconstruction.
2019-05-12T13:27:43.015Z
2019-04-23T00:00:00.000
{ "year": 2019, "sha1": "37a219065a595c919d28b2d4819638cbad4efb48", "oa_license": "CCBYNCND", "oa_url": "https://www.spandidos-publications.com/10.3892/etm.2019.7519/download", "oa_status": "GOLD", "pdf_src": "Anansi", "pdf_hash": "37a219065a595c919d28b2d4819638cbad4efb48", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
198189589
pes2o/s2orc
v3-fos-license
Schwannoma of the Small Intestine Schwannomas of the gastrointestinal tract are rare. Herein, we report a case of schwannoma originating from the small intestine. A 78-year-old woman underwent medical follow-up after surgery for bladder cancer, and a mass in the upper part of the pelvis was revealed by abdominal CT. With the diagnosis of a submucosal tumor of the small intestine, she underwent partial intestinal resection. The submucosal tumor was pathologically composed of S100-positive spindle cells and diagnosed as schwannoma. We report this case of rare schwannoma of the small intestine and review the literature. Introduction Schwannomas arise from Schwann cells of the peripheral nerve sheath that frequently develop in areas of the central nervous system, such as the spinal cord and the brain, the visceral peritoneum, the head and neck, and the surface of limbs. However, when schwannomas develop in the gastrointestinal tract, they are thought to originate from the Auerbach plexus or the Meissner plexus [1]. The frequency of schwannoma occurrence is 44.8% in the head and neck, 19.1% in the upper limbs, 13.5% in the lower limbs, and less than 10% in the gastrointestinal tract [2]. It is difficult to distinguish from mesenchymal tumors, it often requires diagnostic resection, and immunostaining is needed for accurate diagnosis. Here, we report a case of rare schwannoma of the small intestine. Case Report A 78-year-old woman underwent medical follow-up after surgery for bladder cancer, and a mass in the upper part of the pelvis was revealed by abdominal CT. She only had a history of hyperlipemia and had no history or family history of neurofibrosis and malignant diseases. She had no digestive symptoms, and her appetite had been normal. She had a soft abdomen and no pain, but the physical examination was positive for a 3-cm induration in the midline below the umbilicus. Laboratory findings included hemoglobin 11.8 g/dL, white blood cell count 6.2 × 103/μL, C-reactive protein 0.1 mg/dL, platelet count 252 × 103/μL, creatinine 0.67 mg/dL, alanine transaminase 21 U/L, and aspartate transaminase 23 U/L. They showed no abnormalities, and tumor markers (CEA: 0.1 ng/mL, CA19-9: 23 U/mL) were within normal limits. Enhanced abdominal CT demonstrated a 25 × 30 × 35 mm mass in the upper part of the pelvis, and the tumor was suspected to originate from the small intestine ( Fig. 1). No enlarged lymph nodes or distant metastases were demonstrated. Colonoscopy showed only polyps. Therefore, with the diagnosis of a submucosal tumor of the small intestine, possibly gastrointestinal stromal tumor (GIST), she underwent laparotomy, at which partial intestinal resection was performed with a mechanical side-to-side anastomosis with a 1-cm margin from the tumor. Intraoperatively, the tumor was palpated in the wall of the small intestine. Pathologically, a hard submucosal tumor of 4.5 × 3.4 × 2.4 cm was detected macroscopically (Fig. 2). Histologically, relatively uniform spindle-shaped cells were formed, in a palisading pattern, from the lamina propria to the subserous lamina of the small intestine. Immunohistochemical examination demonstrated that the tumor cells were positive for S100, and negative for αSMA, desmin, CD34, and c-Kit (Fig. 3). The MIB1 index was low, and no findings indicating malignancy were observed. Hence, the tumor was diagnosed as schwannoma, and the patient was discharged 10 days after operation. However, 12 days after discharge, she died of asphyxia. 
Discussion/Conclusion Schwannomas are tumors derived from Schwann cells, which develop preferentially in the head and neck, trunk and limb, and rarely develop in the gastrointestinal tract. It has been reported that of 246 cases of schwannomas and neurofibromas, 42 cases (17.1%) occurred in the gastrointestinal tract, among which 37 (88.1%) occurred in the stomach, 3 (7.1%) in the small intestine, and 2 (4.8%) in the colon [3]. It occurs most commonly in females between 30 and 60 years of age [4]. The main symptoms include abdominal pain, mass palpation, and bleeding, and less frequently, intestinal obstruction. Neurogenic tumors are submucous masses that are rich in blood vessels, and it is thought that bleeding associated with neurogenic tumors is due to the necrosis that accompanies tumor growth. It has also been reported that neurogenic tumors develop under the serosal membrane on the contralateral mesentery and exhibit exophytic growth. Therefore, intestinal obstruction by schwannoma of the intestine is uncommon. However, these symptoms appear as a result of tumor growth and are not disease specific [5,6]. Therefore, as in the current case, such tumors are sometimes discovered incidentally by diagnostic imaging. Histopathologically, schwannomas are well-defined tumors that show spindle-shaped cells on HE staining. Such tumors are classified into Antoni A type, in which spindle-shaped cells form palisading-like patterns, and Antoni B type in which the stroma is edematous and the tumor is hypocellular. Type B is reported as a secondary change that results from the growth of type A, and there are no differences in prognosis between type A and type B. In the current case, type A was mainly observed. As a differential diagnosis, mesenchymal tumors such as leiomyoma, leiomyosarcoma, and GIST can be considered; however, it is difficult to distinguish these tumors by HE staining alone, and immunohistological staining is usually required. Schwannomas are typically positive for S-100 and vimentin, and negative for desmin, keratin, c-kit, CD34, and αSMA [7][8][9][10]. If without symptoms, follow-up observation is also possible, but it is difficult to make a diagnosis before operation. Surgical treatment is often selected to treat symptoms or manage malignant diseases such as GIST [1]. Although the current case was asymptomatic, operation was performed because of the possibility of malignant diseases. Immunohistochemically, the tumor cells were positive for S-100 and negative for αSMA, desmin, CD34, and c-Kit, which was compatible with the diagnosis of schwannoma. The long-term follow-ups of patients with schwannomas of the gastrointestinal tract have not shown any propensity to recurrence following complete excision [4]. In the current case, the patient died of other factors soon after discharge, and therefore, tracking of the prognosis was impossible. In summary, we experienced a case of incidental schwannoma of the small intestine. Although schwannomas rarely develop in the small intestine, it is important to keep them in mind as a differential diagnosis of neoplasms of the small intestine. Statement of Ethics This study was conducted in accordance with the Declaration of Helsinki.
2019-07-26T07:23:45.483Z
2019-06-28T00:00:00.000
{ "year": 2019, "sha1": "501e88850fce70e922d82b6ac3b8b72ba5489cb9", "oa_license": "CCBYNC", "oa_url": "https://www.karger.com/Article/Pdf/501065", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "501e88850fce70e922d82b6ac3b8b72ba5489cb9", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
18704837
pes2o/s2orc
v3-fos-license
Biotransformation of Khellin to Khellol by Aspergillus niger and the Evaluation of Their Biological Activities

Biotransformation of khellin using Aspergillus niger ATCC 10549 resulted in the production of khellol. The biological activities of the transformed product and khellin were established by antioxidant and acetylcholine esterase inhibitory assays. Khellol exhibited a higher degree of antioxidant and acetylcholine esterase inhibitory activities compared to khellin. This is the first report on the biotransformation of khellin by microorganisms and the first evaluation of the neuroprotective activity of either khellin or khellol.

1. INTRODUCTION
Microbial transformations have been widely exploited in the preparation of many useful chemical products [1-3]. Khellin is an active compound extracted from Ammi visnaga Lam (Fam. Apiaceae), which is a natural source of several furochromones. Khellin has been reported to have some biological effects such as relaxation of smooth muscle [4] and prevention of stone formation associated with hyperoxaluria [5], but no evidence concerning antioxidant or neuroprotective effects (acetylcholine esterase inhibitory activity) has been reported. This report describes the microbial transformation of khellin to khellol, and the antioxidant and neuroprotective evaluations of the metabolite.

Plant Material
Khellin was isolated and purified from unused parts of Ammi visnaga grown in Egypt. Its identity and chemical structure were confirmed by comparison with those cited in the literature [6].
General Experimental Procedures Melting points were determined on a Fisher-Johns Scientific Co. melting point apparatus, USA.The 1 H-, 13 C-, APT, DEPT NMR spectra were analyzed on JEOL JNM ECA at 400 MHz for 1 H-and 100 MHz for 13 C-NMR spectra.TLC was performed on aluminum sheets precoated with 0.2-mm silica gel 60 F254 (Merck).Plates were developed in a solvent mixture of n-hexane-ethyl acetate (3:7, v/v), and the developed chromatograms were visualized under 254-and 365-nm UV light and the spots were made visible by spraying with vanillin/H 2 SO 4 reagent before warming in an oven preheated to 110 °C for 5 min to develop yellow color for khellin and khellol. Microorganisms The microorganisms used in this work were provided by the culture collection of Mansoura University, Faculty of Pharmacy and were maintained on Potato-dextrose agar.The following microorganisms were screened for their ability to transform khellin: Fermentation Screening Procedure Using a two-stage fermentation protocol [7], screening was carried out by incubating the cultures while shaking at 200 rpm on New Brunswick Scientific G25 Gyratory shakers at 25 °C in a medium consisted of 2% glucose, 0.5% peptone, 0.5% yeast extract, 0.5% NaCl and 0.5% K 2 HPO 4 , adjusted to pH 7.0 before autoclaving for 15 min at 121 °C.Stage I culture was started by suspending spores and mycelia from filamentous fungal culture slants in 2 mL of sterile medium and transferring the suspension to 25 mL medium contained in 150-mL flasks.After incubation for 72 hours, stage II cultures were initiated by transferring 2 mL of stage I cultures to 150-mL flasks containing 25 mL medium.After 24 hours, khellin dissolved in dimethyl formamide (0.2 mg/mL medium) was added to the flasks.Culture controls consisted of fermentation blanks in which the organisms were grown under identical conditions but without substrate addition.Substrate controls were composed of a sterile medium to which the substrate was added and incubated without microorganisms.Samples (0.5 mL) of stage II cultures were extracted with EtOAc (2 mL × 3) every 24 hr for 15 days.The combined extracts were dried over anhydrous Na 2 SO 4 and evaporated in vacuum at 40 °C.The dried extracts were reconstituted in 0.5 mL MeOH, applied to silica gel plates.The results of TLC analysis of EtOAc extracts of the culture broths of all microbes showed that Aspergillus niger ATCC 10549 could metabolize khellin to more polar metabolites after fourteen days.The fermentation screening procedures were repeated using Aspergillus niger ATCC 10549 to prove the reproducibility of the metabolite formation. Scaled-up Reaction by Aspergillus Niger ATCC 10549 Khellin (500 mg) was evenly distributed among 25 (150-mL flasks), each containing 50 mL medium of 24-hr A. niger stage II cultures.The cultures were incubated for fourteen days (200 rpm, 25 °C) and extracted with EtOAc (1L × 3).The combined extracts were dried and evaporated to yield 430 mg yellow viscous residue.The residue was mixed with 400 mg of silica gel and placed on top of a silica gel column (l00 g, 1.5 × 72 cm).The column was isocratically eluted with 40% ethyl acetate in hexane.Fractions with identical R f (TLC) were pooled. Biological Evaluation 1. 
Acetylcholine Esterase Inhibitory Assay AChE activity was measured using a microplate reader [8][9].Acetylcholine esterase enzyme hydrolyzes the substrate acetylthiocholine, resulting in the product thiocholine, which reacts with Ellman's reagent (5,5-dithiobis [2nitrobenzoic acid; (DTNB) to produce 2-nitrobenzoate-5mercaptothiocholine and 5-thio-2-nitrobenzoate, which can be detected at 405 nm.One hundred microliters of Tris buffer (50 mM, pH= 8.0, 0.1% BSA, to stabilize the enzyme), 10 μl of 0.25 U/mL AChE enzyme, 10 μl of 10 mM DTNB, and 5 μl of compound per well at their maximum solubility [final concentrations of 0.73 mM (192.3 g/mL) for khellin and 0.78 mM (192.3 g/mL) for khellol] were added to 96-well plates, which were then incubated at 30 °C for 5 min; then, 5 μl of 75 mM acetylthiocholine was added to each well and the plate was incubated for an additional 10 min at 30 °C, after which the absorbance was measured at 405 nm.Galanthamine, 0.1 mM (38.46 g/mL), was used as a positive control. The percentage of inhibition was calculated using the following formula: where A Blank is the absorbance of the blank in which the sample is replaced by buffer, A blank 100% is the absorbance of the blank in which the sample and enzyme are replaced by buffer, A Sample is the absorbance of the sample, and A Sam- pleBlank is the absorbance of the sample wells in which the enzyme is replaced by buffer. ORAC Assay The oxygen radical anti-oxidant capacity (ORAC) assay measures the free radical scavenging activity of the sample and its ability to prevent the oxidative degeneration of the fluorescence of fluorescein after being mixed with the peroxyl radical generator 2,2`-azobis (2-amidino-propan) dihydrochloride (AAPH) at 37 °C.The ORAC method was used as previously described, with 96-well plates [10].Briefly, 200 µl of 94.4-nM fluorescein was added to each 20-µl sample, Trolox, and phosphate buffer to reach 75 mM, pH 7.4 (as blank), and the mixture was incubated at 37 °C for 10 min.Seventy-five microliters of 31.7 mM AAPH was added to each well, and fluorescence degradation was measured for 90 min at 30-sec intervals.Excitation at 485 nm and emission at 525 nm were measured using a FlexStation 3 Microplate Reader (Molecular Devices, LLC, Sunnyvale, CA, USA), and data were managed by SoftMax Pro® 5.4.1 software.The standard curve was linear between 0 and 50 mM Trolox. RESULTS AND DISCUSSION Microbial transformation of khellin by A. niger produced one major spot on TLC with an R f value of 0.2, which upon its isolation afforded a 150-mg (30%) colorless needle of the metabolite.This compound was identified by 1 H-and 13 C-NMR as well as co-chromatography with an authentic sample to be identified as khellol (Fig. 1).The data was further confirmed by consulting the literature [11].Ellman's method was used to evaluate the neuroprotective activity through the inhibition of ACE by khellin and its metabolite at their higher concentration, 0.73 mM (192.3 g/mL) for khellin and 0.78 mM (192.3 g/mL) for khellol.Khellin showed weak inhibitory activity, 26.3%, at a higher concentration, compared with the positive control, galanthamine (0.1 mM, 38.46 g/mL), which showed 93.4% ACEI, while higher inhibition was observed for khellol, which was 46.5% at the highest concentration (Fig. 
2). The results of the ORAC assay (Table 1) showed that khellol has higher antioxidant activity than khellin, so it was concluded that the biotransformation of khellin greatly improved the neuroprotective and antioxidant activities through the formation of khellol as a metabolite. We also isolated khellol from the unused parts of A. visnaga, showing that the plant metabolism mimics the microbial metabolism.
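The assay section above defines the blank and background absorbances used to calculate percent inhibition, but the formula itself did not survive extraction. The sketch below assumes the conventional blank-corrected calculation implied by those definitions; the function name, the assumed formula and the example absorbances are illustrative, not quoted from the paper.

def percent_inhibition(a_sample, a_sample_blank, a_blank, a_blank_100):
    # a_blank: buffer instead of sample; a_blank_100: buffer instead of sample and enzyme;
    # a_sample_blank: sample well with the enzyme replaced by buffer (background).
    corrected_sample = a_sample - a_sample_blank   # residual enzyme activity with the test compound
    corrected_blank = a_blank - a_blank_100        # uninhibited enzyme activity
    return (1.0 - corrected_sample / corrected_blank) * 100.0

# Hypothetical absorbances read at 405 nm
print(round(percent_inhibition(0.42, 0.05, 0.78, 0.06), 1))  # about 48.6% inhibition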
2018-05-08T18:37:36.571Z
0001-01-01T00:00:00.000
{ "year": 2013, "sha1": "3c613a8002203f1e34b62b75d19a54f08471485b", "oa_license": "CCBYNC", "oa_url": "http://benthamopen.com/contents/pdf/TOBCJ/TOBCJ-4-1.pdf", "oa_status": "HYBRID", "pdf_src": "Crawler", "pdf_hash": "3c613a8002203f1e34b62b75d19a54f08471485b", "s2fieldsofstudy": [ "Biology", "Chemistry", "Medicine" ], "extfieldsofstudy": [ "Chemistry" ] }
16527287
pes2o/s2orc
v3-fos-license
Crosstalk between Tryptophan Metabolism and Cardiovascular Disease, Mechanisms, and Therapeutic Implications The cardiovascular diseases (CVD) associated with the highest rates of morbidity are coronary heart disease and stroke, and the primary etiological factor leading to these conditions is atherosclerosis. This long-lasting inflammatory disease, characterized by how it affects the artery wall, results from maladaptive immune responses linked to the vessel wall. Tryptophan (Trp) is oxidized in a constitutive manner by tryptophan 2,3-dioxygenase in liver cells, and for alternative cell types, it is catalyzed in the presence of a differently inducible indoleamine 2,3-dioxygenase (IDO1) in the context of a specific pathophysiological environment. Resultantly, this leads to a rise in the production of kynurenine (Kyn) metabolites. Inflammation in the preliminary stages of atherosclerosis has a significant impact on IDO1, and IDO1 and the IDO1-associated pathway constitute critical mediating agents associated with the immunoinflammatory responses that characterize advanced atherosclerosis. The purpose of this review is to survey the recent literature addressing the kynurenine pathway of tryptophan degradation in CVD, and the author will direct attention to the function performed by IDO1-mediated tryptophan metabolism. Introduction Tryptophan (Trp), an essential amino acid, constitutes a central component in human and animal protein synthesis, and it serves as the sole source of substrates that facilitate the generation of a range of crucial molecules. Trp precedes and indicates the synthesis of proteins, nicotinamide adenine dinucleotide (NAD), nicotinic acid, and serotonin (namely, the neurotransmitter) [1,2]. For mammalian species, the kynurenine (Kyn) pathway is Trp's central catabolic route, featured in 95% of peripheral Trp metabolism in mammals; furthermore, it results in NAD's biosynthesis, as NAD functions as a crucial cofactor [3]. The highest rates of global morbidity are associated with cardiovascular disease (CVD), and atherosclerosis is the primary etiological factor leading to various manifestations of CVD, including coronary heart disease and stroke [4]. One of the critical factors in CVD pathogenesis is the immune response, and a clinical solution remains to be identified [5,6]. Atherosclerosis occurs due to the manner in which low-density lipoprotein (LDL) accumulates and is retained in the arterial wall, and this leads to maladaptive responses from T cells and macrophages [7]. Scholars in recent years have directed significant energy towards the examination of the Kyn pathway and the role it plays in CVD pathogenesis, and because several hypotheses have suggested that various factors, including oxidative stress, immune activation, and inflammation, are central to the pathogenesis of atherosclerosis and CVD, a critical area of future investigation is to examine to potential part played by the Kyn pathway in CVD regarding these factors. Tryptophan Metabolism and the Kynurenine Pathway Trp hydroxylase facilitates the biotransformation of approximately 5% Trp via metabolism to 5-hydroxy Trp, and this generates serotonin by decarboxylase (an amino acid). Lastly, through N-acetyltransferase, serotonin is metabolized to melatonin. Via the Kyn pathway, the degradation of the other 95% of Trp is converted to kynurenine, and the regulation of this primarily occurs with a pair of ratelimiting enzymes, tryptophan 2,3-dioxygenase (TDO) and indoleamine 2,3-dioxygenase (IDO1). 
Each of these enzymes incorporates one noncovalently bound iron-protoporphyrin IX to every monomer, and TDO and IDO1 are members of the oxidoreductase family. Specifically, the enzymes are associated with the family of oxidoreductases that act on single donors with O 2 as the oxidant and the inclusion of two oxygen atoms into the substrate (oxygenases) [8,9]. The expression of IDO1 occurs at basal levels as a consequence of antigen-presenting cells, including macrophages and dendritic cells, and this procedure is driven to a significant extent by IFN-, the proinflammatory cytokine, and type I interferons, tumor necrosis factor, and lipopolysaccharide (LPS) (the latter three to a less significant degree) [10]. Considerable scholarly attention has been directed towards the immunoregulatory function played by Trp metabolism in the immune system, and most studies have centered on the role of IDO1; this rate-limiting enzyme governs the ratelimiting step of Trp catabolism. Kynureninase, after it has been synthesized by IDO1, uses Kyn to generate anthranilic acid (AA) [11]. Additional steps in the Kyn pathway relate to the degradation of kynurenine to the sequential production of 3-hydroxybutyrate kynurenine and 3-hydroxybutyrate anthranilic acid (3-HAA) or xanthurenic acid in the presence of kynurenine-3-monooxygenase (KMO) and kynureninase or kynurenine aminotransferase, respectively. 3-HAA is further metabolized to quinolinic acid (QA), the excitotoxin, which is a potent convulsant and excitant [12]. Furthermore, studies have demonstrated that kynurenine aminotransferase metabolizes Kyn to generate kynurenic acid (KYNA) [13]. Due to its N-methyl-D-aspartate (NMDA) receptor antagonist characteristics, KYNA is a neuroprotective compound [12]. The manner in which KMO is expressed and acts is improved by IFN-in the context of human macrophages and microglia cells [14], and an increase in KMO expression is linked to significant levels of TNF-and IL-6 in the brains of rats after a systemic inflammatory challenge [15]. Figure 1 provides a schematic illustration of the ways in which the critical enzymes and substrates linked to the Trp metabolic pathway are associated with one another [9], and it also demonstrates the primary immune-related active substances, including kynurenine, quinolinic acid, 5-hydroxytryptamine (5-HT), and melatonin. Preliminary research in this area mainly attributed the Kyn pathway with a central function in the generation of nicotinic acid or vitamin B3 [16]. Nevertheless, after the observation that modifications of Trp metabolism are present in numerous central nervous system conditions, attention moved towards the produced enzymes and metabolites, subsequently denoted as kynurenines. One of the critical findings was that QA operates as a potent convulsant and excitant [12] and, as such, resulted in convulsive responses when inserted into mouse brain ventricles. Furthermore, researchers found that QA functions as a selective NMDA receptor agonist [17]. AS, Trp, Kyn, AA, 3-hydroxybutyrate kynurenine, and xanthurenic acid readily cross the blood-brain barrier [18,19]. The impacts that systemic Trp has on the brain Kyn pathway is partly facilitated by its peripheral conversion to Kyn and 3-OHkyn. An additional driver ensures entry of these metabolites into the brain. 
Kynurenic acid, 3-HAA, and QA, primarily as a consequence of the polar nature and the seeming absence of effective transition procedures, are not the same as a range of different kynurenine pathway metabolites because they cannot effectively cross the blood-brain barrier [18]. Therefore, their formation occurs in a local manner inside the brain. Kynurenine Pathway and Immune Responses Research has identified that a key function of the Kyn pathway relates to the pathological regulation of the innate and adaptive immune system [3]. In a prospective multicenter study involving a 986-person sample group, comprised entirely of individuals in the young adult age range, investigators noted that the activity of IDO1 is significantly associated with carotid artery intima-media thickness (IMT) in females. Specifically, IDO1 activity displayed a significant association with a range of atherosclerosis risk factors for the female population, including age, LDL cholesterol (LDL-C), and BMI. Moreover, IFN-was identified as the primary IDO1 inducer in vitro and in vivo, and the presence of IFNfacilitated an increase in intracellular IDO1 transcription [20,21]. Another study identified alternative inflammatory factors as less prominent inducers of IDO1, including IFN-, IFN-, LPS, and cytotoxic T lymphocyte-associated antigen-4 [22]. Contemporary research findings have contributed to a body of knowledge in which a minimum of three mechanisms that facilitate the initiation of immunological suppression are understood. It is notable that all of the identified immunosuppressive impacts are aligned with IDO1 activation and its downstream effects on specific groups of T cells. Initially, active IDO1 facilitates the depletion of Trp in local tissue microenvironments and, in turn, it drives the promotion of metabolite generation associated with the Kyn pathway. After IDO1 induction, which results in the inhibition of the propagation of reactive T lymphocytes, Trp levels are depleted, which increases the degree to which T lymphocytes are susceptible to cell death [23]. In vitro Trp depletion leads to cell cycle impedance of activated T cells, and this similarly increases the likelihood of cell death [24]. Second, the subsequent rise in kynurenine metabolites, including Kyn, QA, and 3-HAA, impedes propagation and, following this, facilitates the initiation of selective cell death regarding T helper 1 (TH1) lymphocytes, which respond to antigen-presenting cells [25]. Research findings have demonstrated that kynurenine results in negative impacts regarding several phenomena, including how immune responses are regulated, the inhibition of T cell and NK cell propagation, and the regulation of immunogenic dendritic cells [26]. In the context of inflammation conditions, Trp degrades quickly to QA, and QA and 3-hydroxybutyrate kynurenine have the potential to induce the selective cell death in vitro of TH1 and not TH2 cells. Therefore, by suppressing and removing T lymphocytes, Trp metabolism has the potential to impact immunity [27]. Third, it is important to recognize that an increase in the frequency of regulatory T cells positive for forkhead box P3 (FOXP3+) via TGF induction occurs when two conditions, namely, the presence of kynurenine metabolites and Trp depletion, are met, and the simultaneous presence of these conditions also heightens the effect on naïve T cells [27]. This effect significantly advances immune tolerance and a negatively formulated feedback loop, thereby driving immune response regulation [28]. 
Kynurenine Pathway and Cardiovascular Disease The overexpression of IDO1 accompanied by increased Trp catabolism has been shown to stem from chronic systemic low-grade inflammation (CSLGI), which is a predictive factor for the results of CVD. The enhanced degradation of Trp was linked to inflammation in [29] by an observation of the increased plasma Kyn to Trp ratio (Kyn/Trp) (KTR). Here, a sample group of mature participants from Finland demonstrated that the aforementioned ratio is positively associated with BMI, LDL, triglycerides, and waist circumference while it is negatively associated with high-density lipoprotein (HDL) [30]. Studies were conducted on an expansive cohort taking a broad sample of all demographic groups, and it was noted that IDO1 activity (via the KTR) was positively associated with the preliminary stages of atherosclerosis and increased carotid artery IMT for males and females; this finding indicates that IDO1 constitutes a viable indicator of atherosclerosis [31]. Increased IDO1 expression was identified in the macrophage-loaded core of atherosclerotic plaques in human participants [32], and another study demonstrated that low Trp plasma concentration and a high KTR are characteristic of individuals suffering from coronary heart disease [33]. Moreover, a high KTR is a sensitive indicator of severe coronary events for individuals displaying no history of coronary artery disease [34]. Therefore, KTR can be used to forecast critical coronary events and is also useful in determining all-cause mortality for individuals suffering from coronary artery disease [35]. A relationship was observed between KTR and IMT for individuals suffering from hemodialysis while being classified as high risk for CVD [36], and it was reported that increased Trp degradation is associated with neopterin plasma concentrations [35]. This finding constitutes a biomarker of cell-mediated immune activation and is connected to atherosclerotic CVD [37]. Epidemiological research indicates that the Kyn pathway's activity, as manifested in the plasma Kyn/Trp, is associated with the strokeinduced inflammatory response, the degree of which strokes is severe, and chronic clinical results [38]. Kyn, which has been shown to precede and indicate KYNA, was identified as considerably reducing neuronal damage and infarct volume, and this was determined in a study involving the preischemic intraperitoneal administration in various rat models of brain ischemia-hypoxia [39]. Furthermore, 3-hydroxykynurenine, as is the case with the KTR, has been linked to the appearance of CVD in individuals suffering from chronic renal disease, and this was verified in an independent manner [40]. In preliminary investigations, 3-HAA was frequently denoted as a Trp metabolite resulting in antioxidant and antiinflammatory impacts. Research has shown that 3-HAA in mitochondrial mechanisms impedes oxygen uptake by mitochondrial respiration with NAD-dependent substrates, the uncoupling of the respiratory chain, and oxidative phosphorylation [41]. In addition, various studies have examined the ways in which 3-HAA results in cell death, by associating it with the apoptosis induction in monocyte/macrophage cell lines [42], identifying a link between 3-HAA and apoptosis in activated T cells [43], and demonstrating that 3-HAA facilitates the inhibition of nuclear factor-B activation [44]. 
Experimental findings indicate that 3-HAA has a significant function regarding atheroprotection in that it facilitates the regulation of lipoprotein metabolism. LDLr −/− and IDO −/− double knockout mice displayed a considerable increase in serum lipids, especially triglycerides [45]. Furthermore, the administration of 3-HAA to LDL receptor knockout (LDLr −/− ) mice facilitated a significant decrease in overall plasma cholesterol and triglyceride levels, and the former effect was attributed to the lower chylomicron/VLDL ratio. In addition, 3-HAA led to a significant increase in HDL-C [46].

Therapeutic Implications and Concluding Remarks
CSLGI linked to conventional CVD risk factors leads to an increase in Trp degradation. Therefore, one of the critical objectives in developing appropriate therapies for the symptoms of CVD patients is to normalize Trp metabolism. Practitioners should be aware that a high KTR may indicate a natural immune reaction, meant to combat inflammation and, furthermore, that the Kyn pathway can modulate vascular inflammation and atherosclerosis in a direct or indirect manner. Considering that IDO1 inhibition facilitates a reversal of septic shock-associated hypotension and, moreover, diminishes the likelihood of fatality, Kyn constitutes a fruitful area of investigation regarding the creation of therapies for hypertension. More research is needed to gain comprehensive insight into the function of the Kyn pathway in the modulation of cardiovascular risk factors, atherosclerosis, and vascular inflammation. Furthermore, the nature of the Kyn pathway's related and initiated molecular mechanisms of action requires more in-depth research. Specifically, future studies should focus on an investigation of the degree to which these parameters constitute the potential foundation of accurate and effective biologically informed therapies that can be implemented to heighten the likelihood of patient recovery from CVD.
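Since several of the epidemiological findings above are expressed through the kynurenine-to-tryptophan ratio (KTR) as a read-out of IDO1 activity, a minimal sketch of that calculation is given below. The unit convention (nmol kynurenine per µmol tryptophan) and the example concentrations are assumptions for illustration and are not taken from the studies cited in the review.

def ktr(kyn_umol_per_l, trp_umol_per_l):
    # Kynurenine-to-tryptophan ratio, reported here as nmol Kyn per umol Trp,
    # a common convention for expressing IDO1 activity in plasma.
    return (kyn_umol_per_l * 1000.0) / trp_umol_per_l  # convert the numerator from umol to nmol

# Hypothetical plasma concentrations
print(round(ktr(kyn_umol_per_l=2.1, trp_umol_per_l=70.0), 1))  # 30.0 nmol/umol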
2018-04-03T01:15:16.898Z
2017-03-09T00:00:00.000
{ "year": 2017, "sha1": "3419eeb427ea6068d074d61caf7118c5b97b2d8c", "oa_license": "CCBY", "oa_url": "http://downloads.hindawi.com/journals/omcl/2017/1602074.pdf", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "2211debf18120d651d59c2ab185bc735556a30d5", "s2fieldsofstudy": [], "extfieldsofstudy": [ "Medicine", "Biology" ] }
120434533
pes2o/s2orc
v3-fos-license
Predictors associated with HIV/AIDS patients dropout from antiretroviral therapy at Mettu Karl Hospital, southwest Ethiopia Objective The aim of this study was to determine the major risk factors of antiretroviral therapy dropout. The retrospective cohort research design was applied. 1512 HIV patients were included from Mettu Karl Hospital in Illubabor Zone, southwest part of Ethiopia from September 2005 to January 2018. Kaplan–Meier comparison and log-logistic regression accelerated failure time model were used. Results From the log-logistic regression result, the risk of dropout for patients with primary education status was 10.58% greater as compared to illiterate (p < 0.0110). The probability of dropout for patients with marital status separated was about 16.82% higher than those patients with marital status divorced (p < 0.0070). Being merchant, farmer and daily labour had a greater risk of dropout as compared to a housewife. Most of the HIV/AIDS patients on ART were dropout in a short period due to patients separated marital status, primary education, CD4, being merchants, farmer and daily labour. Investigation on the cause of antiretroviral therapy dropout from a number of AIDS clinics in the country is highly appreciated. Electronic supplementary material The online version of this article (10.1186/s13104-019-4267-3) contains supplementary material, which is available to authorized users. Introduction HIV is the most responsible causes of mortality worldwide and the primary predictor of death in sub-Saharan Africa region. The prevalence of new infections in the area accounted for 66.6% of the world. Above 68% of adults and 90% of children infected with the disease were found in this area, and more than 76% of HIV/AIDSrelated deaths were occurred in Africa [1]. In sub-Saharan Africa more than 2.2 million people were died per year due to HIV/AIDS and related causes [2,3]. In Ethiopia, 780, 000 HIV/AIDS patients were on antiretroviral therapy [4] and around one million people are reportedly living with HIV. Of all people who have ever been reported as beginning antiretroviral treatment, 249,174 are adhering to their treatment regimen and there were 55,200 AIDS-related deaths in 2013 [5]. Antiretroviral therapy dropout is a serious challenge to the success of HIV/AIDS treatment. According to the world health organization report, from all patients enrolled in HIV, the percentage of success was only 23% [6]. Antiretroviral therapy dropout negatively affects the improvement of an immunological advantage of antiretroviral therapy and increases HIV/AIDS-related mortality [7]. Dropout of patients receiving antiretroviral therapy will be the reason for drug toxicity, treatment failure due to poor adherence, and drug resistance [8][9][10] this directly leads to death [11][12][13][14][15]. 40% of all patients on antiretroviral therapy were dropout in sub-Saharan Africa [16,17]. Of all dropout patients in the region of sub-Saharan Africa, 46% of them were died [16]. Antiretroviral therapy can reduce HIV replication and it develops the immune ability [18]. There are limited data accesses about the results of the ART in Ethiopia. In Oromia region, there were 194,370 HIV/AIDS patients and of the 115,334 were on antiretroviral therapy. Of them, only 59.3% of HIV/AIDS patients were on ART which was far from adequate [19]. Another investigation also explained that the rate of antiretroviral therapy failure in private health facilities in Ethiopia was 20.4% [20]. 
In Jimma, one out of five adults dropped out of antiretroviral therapy, which is a serious setback for a country that aims to minimize the effect of HIV/AIDS [21]. HIV/AIDS patients with poor antiretroviral therapy follow-up outcomes are at twice the risk of death compared with patients with good follow-up adherence [22]. Patients with poor follow-up status were at four times the risk of death compared with well-adhered patients in Addis Ababa [23]. The risk of death of poorly adhered patients is five times greater than that of better-adhered patients [24]. The study in Ethiopia also showed that around 50% of the antiretroviral therapy dropout patients were dead [25]. HIV/AIDS patients who drop out of antiretroviral therapy will likely die in a short period of time [26]. Ethiopia is among the countries with the highest HIV/AIDS prevalence globally. ART treatment has a great role in prolonging the life of HIV patients, but there is a high percentage of dropout from antiretroviral therapy, which directly facilitates death [27][28][29]. A study conducted in the Illubabor Zone recommended that investigation of antiretroviral therapy dropout in the area is timely [30]. Therefore, the aim of this study was to determine predictors of antiretroviral therapy dropout of HIV/AIDS patients at Mettu Karl Hospital in Illubabor, Ethiopia.
Study area
This study was conducted at Mettu Karl referral Hospital, which is found in Ilubabor Zone, Oromia region, in the southwest part of Ethiopia, 600 km from the capital city. Mettu is known for its waterfalls, such as Sor fall, and the surrounding evergreen forest.
Study design
The study applied a retrospective cohort design. All patients on antiretroviral therapy from September 2005 up to January 2018 were considered in the study. Secondary data from the hospital registry were used to retrieve records of HIV/AIDS patients on antiretroviral therapy follow-up. There were 3517 patients in the given time interval, of which a total of 1512 patients were included in the study depending on the exclusion criteria (see Additional file 1).
Variables
The dependent variable is survival time to dropout from ART, starting from September 2005 up to January 2018. The predictor variables were sex, occupation, WHO clinical stage, marital status, baseline regimen type, age, religion, educational level, CD4 level, and body weight.
Exclusion criteria
Patients with an incomplete variable of interest, transfer-out, or death outcomes were excluded from inferential analysis.
Survival data analysis
Factors associated with time to dropout from ART were analyzed using the Kaplan–Meier comparison and the log-logistic regression AFT model. Variables with p value < 0.05 were considered statistically significant.
Kaplan–Meier estimation
The Kaplan–Meier method is a nonparametric method used to estimate the survival experience. The survival experience of two or more groups of a between-subjects factor can be compared for equality. It is a nonparametric estimator of the survivor function S(t), given by
$$\hat{S}(t) = \prod_{t_j \le t} \left(1 - \frac{d_j}{n_j}\right),$$
where $d_j$ is the number of individuals who experience the event at time $t_j$ and $n_j$ is the number of individuals at risk at $t_j$.
Log-logistic accelerated failure time model
The log-logistic distribution provides the most commonly used AFT model. The log-logistic regression can exhibit a non-monotonic hazard function which increases at early times and decreases at later times.
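A minimal sketch of the Kaplan–Meier product-limit estimator defined above is given below. The follow-up times and dropout indicators are invented purely for illustration and are not the Mettu Karl Hospital data.

```python
import numpy as np

def kaplan_meier(times, events):
    """Product-limit estimate S(t) = prod_{t_j <= t} (1 - d_j / n_j).

    times  : follow-up times (e.g., months on ART)
    events : 1 if dropout observed at that time, 0 if censored
    Returns the distinct event times and the survival estimate after each.
    """
    times = np.asarray(times, dtype=float)
    events = np.asarray(events, dtype=int)
    order = np.argsort(times)
    times, events = times[order], events[order]

    n_at_risk = len(times)
    event_times, surv, s = [], [], 1.0
    for t in np.unique(times):
        mask = times == t
        d_j = events[mask].sum()           # dropouts observed at time t
        if d_j > 0:
            s *= 1.0 - d_j / n_at_risk     # product-limit update
            event_times.append(t)
            surv.append(s)
        n_at_risk -= mask.sum()            # events and censorings both leave the risk set
    return event_times, surv

# Hypothetical illustration (not the study data):
t, s = kaplan_meier([2, 5, 5, 8, 12, 12, 15], [1, 1, 0, 1, 0, 1, 0])
print(list(zip(t, s)))
```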
The log-logistic distribution is similar in shape to the log-normal distribution, but its cumulative distribution function has a simple closed form, which becomes important computationally when fitting data with censoring. The log-logistic survival and hazard functions for a log-linear model with no covariates ($\log T = \mu + \sigma\varepsilon$) are
$$S(t) = \frac{1}{1 + e^{\theta} t^{\gamma}}, \qquad h(t) = \frac{e^{\theta}\,\gamma\, t^{\gamma-1}}{1 + e^{\theta} t^{\gamma}},$$
where $\theta = -\mu/\sigma$ and $\gamma = 1/\sigma$ are unknown parameters.
Results
There were 1512 patients in the cohort study, out of which 243 experienced dropout.
Kaplan–Meier survival estimates
The Kaplan–Meier graph showed that the survival ability of patients with married marital status is lower than that of never-married patients (see Additional file 3). The Kaplan–Meier log-rank test in Table 2 shows that the survival experience of patients differed significantly by occupation and original regimen type with respect to time to ART dropout at the 5% significance level.
Model selection
The study used the AIC criterion to compare different models. For each model, the value is computed as AIC = −2 log(likelihood) + 2(p + k). Based on the AIC/BIC criteria, the parametric log-logistic model was preferred for modelling, since the smallest value is preferable (see Additional file 4). From the log-logistic regression model, when the CD4 level increases by one unit, the risk of dropout increases by 0.05% (AHR = 1.0005). Likewise, a unit change of weight accelerates time to dropout by 0.31% (AHR = 1.0031). The risk of dropout of patients with married marital status was 9.8% greater as compared with divorced patients. Patients with separated marital status were at 16.82% greater risk of ART dropout as compared to married patients. The probability of ART dropout with primary education level was 10.58% greater than for illiterate patients. The risk of dropout of patients doing daily labour was 87.44% greater than that of housewives. Similarly, the risk of dropout for farmers was 82.73% greater as compared to housewives. The risk of dropout from ART for government workers was increased by 73.72% as compared to housewives (p < 0.001). Being a merchant also had a negative impact on dropout as compared to being a housewife. Patients who took the D4t-3TC-EFV medication type had an 84.23% greater risk of dropout as compared to patients who took D4t-3TC-NVP (Table 3).
Discussion
In this retrospective cohort survival study, there were 243 dropouts from 1512 patients, yielding an antiretroviral therapy dropout prevalence of 17 per 100 patients. In Gambia, only a 17.2% dropout was observed [31]. Another study, in Nigeria, stated that 74.9% had dropped out of ART, which is greater than in this investigation [32]. A study in sub-Saharan Africa stated that this percentage varies from 5.7 to 28.9% [33]. A study conducted in the region also stated that the percentage of patient dropout was estimated to be up to 31% [34]. The average age of all patients was 33, which is in the most productive age group; a study in Zambia similarly reported a median age of 34 [35]. Other studies across the country also reported averages between 31 and 33 [27,36,37], which is almost consistent with this study. Even though many papers have reported age as a significant factor for antiretroviral therapy dropout, in this study age did not have a significant impact on antiretroviral therapy dropout, which is inconsistent with findings from other studies [38]. Unlike in other studies, weight and WHO clinical stage were not responsible causes of antiretroviral therapy dropout [39][40][41][42][43][44].
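To make the log-logistic AFT quantities and the AIC comparison above concrete, a minimal sketch is given below. The parameter values and the log-likelihood figure are invented for illustration only; they are not the fitted values from this study.

```python
import numpy as np

def loglogistic_survival(t, theta, gamma):
    """S(t) = 1 / (1 + exp(theta) * t**gamma) for the log-logistic AFT model."""
    return 1.0 / (1.0 + np.exp(theta) * np.power(t, gamma))

def loglogistic_hazard(t, theta, gamma):
    """h(t) = exp(theta) * gamma * t**(gamma-1) / (1 + exp(theta) * t**gamma);
    non-monotonic (rises then falls) whenever gamma > 1."""
    num = np.exp(theta) * gamma * np.power(t, gamma - 1.0)
    return num / (1.0 + np.exp(theta) * np.power(t, gamma))

def aic(log_likelihood, p, k):
    """AIC = -2 log(likelihood) + 2 (p + k), as used for the model comparison above."""
    return -2.0 * log_likelihood + 2.0 * (p + k)

# Hypothetical parameter values, purely for illustration:
t = np.array([1.0, 6.0, 12.0, 24.0, 48.0])
print(loglogistic_survival(t, theta=-4.0, gamma=1.5))
print(loglogistic_hazard(t, theta=-4.0, gamma=1.5))
print(aic(log_likelihood=-812.4, p=9, k=2))  # invented numbers
```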
Patients with higher CD4 level have a greater risk of dropout [AHR = 1.0005 (1.0003-1.0007)], which is directly related with the study in the UK [45] and Hospital of Bergamo cohorts [46], where dropout was related with a higher CD4 count level. Another study in French found that patients with higher CD4 count have increased the risk of antiretroviral therapy dropout [35,47]. This study stated that sex was not a responsible factor for loss from treatment, but another study in Ethiopia stated that being male was one of the predictors for antiretroviral therapy dropout [48]. Likewise, no association was found between sex and loss from treatment [49][50][51], but not other studies [52][53][54]. The difference may arise because of sample size, study design and follow up time difference. Some previous studies suggest that marital status can predict dropout among ART initiators [55][56][57]. In this data, the patient's initially receiving D4t-3TC-EFV regimens had decreased risk of dropout as compared with patients who took D4t-3TC-NVP medication type. But the regimen type AZT was not a significant predictor as compared to D4T based which is consistent with another study [57]. This study will serve as resource material for researchers, managers, policymakers. Additionally, the study will be used as a baseline for further researchers. Conclusion In conclusion, HIV/AIDS patients on antiretroviral therapy were dropout in a short period due to patients marital status married and separated, primary education level, high level of CD4 count, being merchants, farmer and daily labour. Investigation on the cause of antiretroviral therapy dropout from a number of HIV/AIDS clinics in the country is highly appreciated. Limitations There were a lot of patients with incomplete records which were excluded from this investigation; this may affect the conclusion of the study. Authors' contributions This research paper entire activity was done by MT. The author read and approved the final manuscript. The author declares no competing interests. Availability of data and materials If needed the raw data in excel format for this article is available. Consent for publication Not applicable. Ethics approval and consent to participate This study used secondary data from medical case records and patients were not contacted. The data from the case records were handled with strong responsibility and confidentiality. The study was started after ethical clearance was obtained from Mettu University research committee and permission was taken from Mettu Karl Hospital medical director to collect data from records. Funding There was no fund. Publisher's Note Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
2019-04-19T13:28:00.680Z
2019-04-18T00:00:00.000
{ "year": 2019, "sha1": "06a4e9f212cfd1d62eafadf73935fa95fd80869a", "oa_license": "CCBY", "oa_url": "https://bmcresnotes.biomedcentral.com/track/pdf/10.1186/s13104-019-4267-3", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "81855d8af22dd15c3ce383012f335c06a3eeb0ce", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
270524278
pes2o/s2orc
v3-fos-license
Single nucleotide variants in lung cancer Germline genetic variants, including single-nucleotide variants (SNVs) and copy number variants (CNVs), account for interpatient heterogeneity. In the past several decades, genome-wide association studies (GWAS) have identified multiple lung cancer-associated SNVs in Caucasian and Chinese populations. These variants either reside within coding regions and change the structure and function of cancer-related proteins or reside within non-coding regions and alter the expression level of cancer-related proteins. The variants can be used not only for cancer risk assessment and prevention but also for the development of new therapies. In this review, we discuss the lung cancer-associated SNVs identified to date, their contributions to lung tumorigenesis and prognosis, and their potential use in predicting prognosis and implementing therapeutic strategies. Introduction Lung cancer is the most frequent cause of cancer-related death worldwide, with 5-year survival rates varying from 4-17%. 1 Small cell lung carcinoma (SCLC) and non-small cell lung carcinoma (NSCLC) are the two histopathologic types of lung cancer.NSCLC consists of adenocarcinoma (ADC), squamous cell carcinoma (SCC), adenosquamous carcinoma (ASC), and large cell carcinoma (LCC).The alteration in oncogenic driver genes is one of the important causes of lung cancer.Tumorsuppressor gene TP53 is the most frequently mutated gene (approximately 40%) in all types of lung cancer. 2Lung cancer subtypes also harbor specific mutations and variants in other genes, such as EGFR and KRAS in ADC, 3 CDKN2A and RB1 in SCC, 4 and RB1 , PTEN , and MYC in SCLC. 2 Pathogenic variants in the germline of an individual, which are heritable, are called germline variants.A growing amount of evidence has shown the important role of germline variations in lung cancer initiation and progression.Genetic variations include single nucleotide variants (SNVs), insertions, deletions, structural variants, and repeat variations. 5f the minor allele frequency (MAF) of an SNV is more than 1%, a variant is called a polymorphism.Single nucleotide polymorphisms (SNPs) are the most common genetic variation type among all germline genetic variations, accounting for 90% of all polymorphisms in the genome. SNVs in different regions SNVs can reside in coding regions and non-coding regions of the genome.Approximately 10% of disease-associated SNVs are located in coding regions and 90% are located in non-coding regions. 6SNVs in different regions modulate diseases through different mechanisms. SNVs in coding regions Genetic variants in a coding region change the sequence of amino acids and influence protein function.Germline variants in lung cancer driver genes such as EGFR , KRAS , and P53 contribute greatly to tumorigenesis and progression.The epidermal growth factor receptor (EGFR) tyrosine kinase domain is encoded by exons 18-24.Over 90% of the known EGFR mutations in lung cancer are in exons 19-21.EGFR -K757R is the most common EGFR germline mutation in Chinese lung cancer patients.The mutation of K757R is associated with the response of lung cancer to chemotherapy.K757R and exon 19del + K757R show similar sensitivity to icotinib and osimertinib, whereas exon 19del + K757R is more sensitive to afatinib and gefitinib than K757R. 
7The less commonly observed EGFR germline variants in coding exons have been associated with susceptibility or treatment response of lung cancer.For example, EGFR V834L and V843I are associated with susceptibility to lung adenocarcinoma (LUAD), and R776H, V843I, L858R, and P848L are associated with squamous cell lung cancer. 8Synonymous mutations also affect lung cancer in some cases.The common EGFR Q787Q polymorphism showed significant protective effects on the overall survival of patients with EGFR -mutant stage IV LUAD treated with EGFR tyrosine kinase inhibitors (TKIs). 9AS proteins are a family of small GTPases that play critical roles in multiple cellular signaling pathways, such as the RAS-mitogenactivated protein kinase (MAPK) pathway.Approximately 83% of KRAS somatic mutations are found at G12, followed by G13 (14%) and Q61 (2%).Germline KRAS mutations have been observed at numerous locations, including K5N, V14I, Q22R, Q22E, P34R, P34L, T58I, G60R, E153V, and F156L, and are associated with lung cancer risk.10 , 11 Germline variants in codons 12, 13, or 61 in KRAS are rarely found. The P53 tumor suppressor gene plays a central role in many cellular processes, such as DNA repair and apoptosis.The P53 V157D mutation was identified in a family with hereditary lung cancer syndrome.Further mechanistic study showed that the P53 V157D variant promotes lung cancer cell proliferation. 12The P53 P72A germline variant in the P53 coding region is also associated with lung cancer. 13n addition to hotpot oncogene mutation, germline variants in other candidate susceptibility genes have also been widely studied.Eleven lung cancer-associated germline variants are located in genes encoding components of the growth hormone/insulin-like growth factor (GH-IGF) pathway, including rs7214723 in CAMKK1 (E375G), rs6964587 in AKAP9 (M463I), and rs6183 in GHR (P495T), and genes in the DNA damage-response pathway, including rs11571833 in BRCA2 (K3326X) and rs28360135 in XRCC4 (I137T). 14Rare deleterious variants in DNA repair pathway genes, such as rs77187983 in EHBP1 (D590V), rs11571833 in BRCA2 (K326X), and rs752672077 in MPZL2 (I24M), are also associated with lung cancer risk. 15Germline mutations in PARK2 correlate with impaired mitophagy and increase the death of lung cancer cells.Mechanically, A46T Parkin results in inability to translocate to mitochondria and recruit downstream mitophagic regulators, such as optineurin (OPTN) and transcription factor EB (TFEB).Besides, N254S and R275W Parkin display slower mitochondrial translocation than WT Parkin. 16hole-exome sequencing revealed that SCLC frequently harbors germline pathogenic variants in RAD51D (Q62X), CHEK1 (Q346X, R379X), BRCA2 (R1699W, Y2215X), and MUTYH (G396D, V493F, Y179C), which are significantly associated with recurrence-free survival after platinum-based chemotherapy. 17 SNVs in non-coding regions A significant number of lung cancer-associated SNVs have been identified in gene promoters, introns, and intergenic regions.Promoter SNVs frequently alter the binding affinity of transcription factors, subsequently influencing the transcriptional regulation of key genes involved in lung tumorigenesis.Intron or intergenic SNVs either modulate the activity of cis-regulatory elements (CREs), such as enhancers and silencers, usually by affecting transcription factor binding or changing the function of non-coding RNAs (ncRNAs). 
SNVs in cis-regulatory elements A non-coding SNV can exert its functional effects through various mechanisms: regulating the transcription of neighboring genes, regulating the transcription of distant genes, or regulating genes located on other chromosomes.This suggests the complexity of eukaryotic genomic assembly.rs3769201 and rs722864, which are located in the introns of ZAK , are associated with decreased ZAK mRNA expression and reduced lung cancer risk. 18rs3117582, located in intron 1 of BAT3 , is involved in the regulation of p53 acetylation in response to DNA damage; and rs3131379, located in intron 10 of MSH5 , is involved in DNA mismatch repair and regulates lung cancer susceptibility. 19Whether these two SNP loci regulate BAT3 or MSH5 is not clear.Growing evidence has shown that a non-coding SNP may bypass nearby genes to regulate a gene that is located far away.For example, rs402710, which is located in the last intron of CLPTM1L , physically interacts with the TERT promoter by looping out the intervening sequences, regulating TERT gene expression and lung tumorigenesis 20 ( Fig. 1 ). Although intrachromosomal regulation is a frequently reported mechanism used by non-coding SNPs to regulate disease, some lung cancer-associated SNP loci exhibit trans-effects.For example, rs1663689, which is located in the intergenic region in chromosome 10p1.4,regulates lung cancer susceptibility and outcome through regulation of adhesion G protein-coupled receptor G6 ( ADGRG6 ), which is located chromosome 6, through interchromosomal interaction 21 ( Fig. 2 ). A non-coding SNP usually exerts its function by changing the binding affinity of transcription factors.For example, rs2853677 is located within the Snail1 binding site in a TERT enhancer.The enhancer increases TERT transcription when juxtaposed to the TERT promoter.rs2853677-T results in the binding of Snail1 to the enhancer and disrupting enhancer-promoter colocalization, which subsequently silences TERT transcription ( Fig. 3 ). 22SNP rs17079281-C, located in the DCBLD1 promoter, creates a YY1-binding site, resulting in decreased DCBLD1 expression and subsequent decreased cell proliferation. 23rs9399451 and rs9390123 reside within an enhancer region and influence the binding of POU2F1, which subsequently affects the promoter activity of PHACTR2-AS1 and PEX3 in lung cancer cell lines. 24rs4142441 is located in a MYC binding site in the OSER1-AS1 promoter region.The G allele of rs4142441 results a higher binding affinity of MYC.MYC binding suppresses the transcription of OSER1-AS1 , and promotes tumor progression. 25A lung-specific p53-responsive enhancer of TNFRSF19 harbors three highly linked common SNPs (rs17336602, rs4770489, and rs34354770) and six p53 binding sequences either close to or located between the variations.The enhancer effectively protects normal lung cell lines against pulmonary carcinogen nicotine-derived nitrosamine ketone (NNK)-induced DNA damage and malignant transformation by upregulating TNFRSF19 through chromatin looping.These variations significantly weaken the enhancer activity by affecting the p53 response, especially when cells are exposed to NNK. 26 SNVs in ncRNAs Most of the human genome is transcribed RNAs that do not encode proteins.These ncRNAs, including microRNAs (miRNAs) and long ncRNAs (lncRNAs), play crucial roles in regulating the initiation and progression of various cancers. 
27Lung cancer-associated SNPs in ncR-NAs have been identified.rs11614913 in hsa-mir-196a2 regulates its binding to the LSP1 3 ′ UTR, which changes LSP1 expression and survival in individuals with NSCLC. 28rs10505477 in the lncRNA CASC8 is highly related to ADC risk in males and highly relevant to severe hematologic toxicity in NSCLC and gastrointestinal toxicity in SCLC after platinum-based chemotherapy. 29rs140618127, located in the lncRNA LOC146880 , regulates binding between miR-539-5p and LOC146880 , which modulates phosphorylation of enolase 1 (ENO1) and subsequent phosphorylation of phosphoinositide 3-kinase (PI3K) and Akt, and is associated with NSCLC susceptibility in the Chinese population. 30The risk T allele of rs12740674, located in the enhancer of miR-1262, reduces the expression level of miR-1262 in lung tissue through chromosomal looping and increases the expression levels of UNC-51-like kinase 1 (ULK1) and RAB3D, member RAS oncogene family, promoting lung cancer cell proliferation ( Table 1 ). 31 SNVs and lung cancer susceptibility Over the past decade, several genome-wide association studies (GWAS) focused on cancer susceptibility have been performed.To date, 51 lung cancer-associated SNP loci have been identified, 32 and a substantial proportion of these loci are specific to different subgroups in terms of histological subtype, smoking status, and ancestry. 33 SNVs in different lung cancer subgroups Genomic heterogeneity is associated with different histopathological types of lung cancer.A study by Dai et al 34 identified 19 SNP loci that were significantly associated with NSCLC risk in a Chinese population.Among these variants, rs17038564 ( P = 1.87 × 10 -8 ), rs35201538 ( P = 1.99 × 10 -8 ), and rs77468143 ( P = 7.48 × 10 -12 ) were significant in the LUAD subgroup, whereas rs4573350 was specific for SCC.SNPs can be verified through different cohorts.Amos et al 35 confirmed the association of rs77468143 with LUADs in a European population.The authors also identified additional seven SNPs (rs13080835, rs7705526, rs4236709, rs885518, rs11591710, rs1056562, and rs41309931) that were associated with LUAD and three SNPs (rs116822326, rs7953330, and rs17879961) that were associated with SCC. 35rs3134615, located in the 3 ′ UTR of MYCL1 , is associated with an increased risk of SCLC. 36he identification of subtype-specific associations of genetic variants indicates that the genetic architecture of lung cancer varies markedly among LUAD, SCC, and SCLC ( Table 2 ). SNVs and smoking status Smoking is the main cause of lung cancer, especially LUAD and SCLC, and nicotine is the most addictive component in tobacco.An analysis by Gabriel et al 37 demonstrated that genetic variants influence the smoking behavior of individuals, which in turn influences their carcinogenic exposure and, consequently, their somatic mutation burden.The 15q25 susceptibility region, which contains six coding genes, including three cholinergic nicotine receptor genes ( CHRNA3 , CHRNA5 , and CHRNB4 ) that exhibit independent effects on smoking behavior, contains multiple SNVs that are strongly associated with lung cancer. 38 -42The most robust lung cancer-associated SNP in 15q25 is rs16969968, which results in an amino acid change from aspartate to asparagine at position 398 of the nicotinic receptor 5 subunit protein sequence.rs16969968 predicts delayed smoking cessation and earlier age of lung cancer diagnosis. 
43 -45rs9439519 and rs4809957 are associated with cigarette smoking.rs4809957 interacts with smoking dose to contribute to lung cancer risk. 46rs6441286, rs17723637, and rs4751674 stratify lung cancer risk by smoking behavior.rs6441286 and rs17723637 variants increase the risk for lung cancer in eversmokers, whereas the rs4751674 variant has a protective effect in eversmokers compared with never-smokers. 47rs910083, located in the intron of DNMT3B , is associated with an increased risk of nicotine dependence.In International Lung Cancer Consortium data, the C allele at rs910083 was found to increase the risk of squamous cell lung carcinoma. 48rs34211819, located in an intron region of tensin-3 ( TNS3 ), and rs1143149, located in an intron region of septin 7 ( SEPT7 ), are significantly associated with the survival of NSCLC patients who are long-term former smokers.Both SNPs have significant interaction effects with years of smoking cessation.As the years of smoking cessation increase in long-term former smokers, the protective effect of rs34211819 and the detrimental effect of rs1143149 on survival are both enhanced. 49ile the primary cause of lung cancer is smoking, approximately 25% of lung cancers worldwide occur in never-smokers. 502][53] A study using WES and RNA-sequencing data for never-smoker LUADs sequenced by The Cancer Genome Atlas (TCGA) and Clinical Proteomic Tumor Analysis Consortium (CPTAC) found that pathogenic germline variants in cancer predisposition genes such as BRCA1 , BRCA2 , FANCG , FANCM , HMBS , MSH6 , NF1 , POLD1 , TMEM127 , and WRN are exclusively associated with lung cancer in never-smokers. 54A GWAS of lung cancer in never-smoking females in Asia identified three lung cancer susceptibility loci, at 10q25.2 (rs7086803), 6q22.2 (rs9387478), and 6p21.32 (rs2395185), with no evidence of an association of 15q25 with lung cancer. 55The TERT and CLPTM1L genes are located in 5p15.TERT is an established telomere maintenance locus.rs10936599 variants, which are located in the TERT coding region and associated with telomere length, were robustly associated with increased lung cancer risk among never-smoking women in Asia. 56CLPTM1L was identified by screening for cisplatin (CDDP) resistance-related genes and was found to induce apoptosis in CDDP-sensitive cells.rs402710 is located in intron 4 of the CLPTM1L gene and is associated with lung cancer susceptibility. 41 , 57strogen receptor ( ER ) gene SNPs such as rs7753153 and rs985192, located in ESR1 , and rs3020450, located in ESR2 , are associated with LUAD risk in never-smoking women. 58Moreover, rs12233719, a sex hormone regulation-related SNP in UGT2B7 , is associated with NSCLC risk among never-smoking Chinese women. 59These studies suggest the important role of sex hormones in regulating lung tumorigenesis in never-smoking populations.rs11080466 and rs11663246, located in the intron of PIEZO2 , show a significant association with NSCLC susceptibility of never smokers in Korean populations. 60rs4648127, located in the NFKB1 gene, was associated with lung cancer in the screening arm of the Prostate, Lung, Colon, and Ovarian (PLCO) Cancer Screening Trial. 61he genetic susceptibility differences in histopathological types and in accordance with smoking status may implicate distinct biological mechanisms of lung cancer and therapeutic strategies ( Table 3 ). 
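GWAS-style susceptibility findings of the kind summarized above are typically reported as minor allele frequencies and allelic odds ratios. The sketch below shows, under assumed and purely illustrative genotype and allele counts, how these two quantities could be computed; none of the numbers reproduce the cited studies' data.

```python
import math

def minor_allele_frequency(n_AA, n_Aa, n_aa):
    """MAF from genotype counts; variants with MAF > 1% are conventionally called SNPs."""
    n_minor = 2 * n_aa + n_Aa
    return n_minor / (2 * (n_AA + n_Aa + n_aa))

def allelic_odds_ratio(case_minor, case_major, ctrl_minor, ctrl_major):
    """Allelic odds ratio and Woolf 95% CI from a 2x2 table of allele counts."""
    or_ = (case_minor * ctrl_major) / (case_major * ctrl_minor)
    se = math.sqrt(1 / case_minor + 1 / case_major + 1 / ctrl_minor + 1 / ctrl_major)
    lo = math.exp(math.log(or_) - 1.96 * se)
    hi = math.exp(math.log(or_) + 1.96 * se)
    return or_, (lo, hi)

# Hypothetical counts for an imaginary variant (not from any cited cohort):
maf = minor_allele_frequency(n_AA=820, n_Aa=160, n_aa=20)
or_, ci = allelic_odds_ratio(case_minor=310, case_major=1690, ctrl_minor=240, ctrl_major=1760)
print(f"MAF = {maf:.3f}, OR = {or_:.2f}, 95% CI = ({ci[0]:.2f}, {ci[1]:.2f})")
```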
Interplay between SNVs and somatic mutations in regulating lung cancer susceptibility Most of the mutations detected clinically are somatic mutations.Germline mutations can co-occur or be mutually exclusive with somatic cancer gene alterations. 62For example, the susceptibility variant rs36600 is associated with somatic mutations within ARID1A ; the susceptibility variants rs2395185 and rs3817963 are associated with somatic alterations in the cell cycle pathway; and rs3817963 is associated with somatic alterations in the MAPK signaling pathway. 63rs2395185 is associated with elevated APOBEC mutagenesis. 64Some lung cancer risk-related SNPs were shown to influence genetic damage in coke oven workers exposed to polycyclic aromatic hydrocarbons (PAHs).Some SNP loci (rs1333040, rs1663689, and rs3813572) are associated with decreased micronuclei frequency, which is a biomarker of chromosomal damage, genome instability, and cancer risk that associates acquired mutations with genetic susceptibility. 65Recently, Peng et al 66 identified 111 pathogenic or likely pathogenic (P/LP) germline mutations in 35 cancer genes in 106 of 1794 Chinese patients (5.91%).Chinese patients with germline mutations show different prevalence rates of somatic KRAS mutation, MET exon 14 skipping, and TP53 mutations compared with those without germline mutations. SNVs associated with lung cancer outcome and drug sensitivity In recent years, significant efforts have been dedicated to investigate the biological mechanisms underlying the association of SNVs with lung cancer outcomes and the clinical implications.The main goal is to translate these discoveries into clinical application.rs3743073-G in the CHRNA3 gene is significantly associated with short survival among patients with advanced stage NSCLC. 67rs942190-G and rs2401863-A located in TDP1 are associated with relatively poor survival among SCLC patients. 68Four SNPs (rs2107561, rs6882451, rs1826692, and rs6595026) modulate overall survival of lung cancer, and rs2107561, an intron variant of PTPRG , exhibits the strongest association. 69rs5030740 in RPA1 and rs1776148 and rs1047840 in EXO1 are associated with disease-free survival and overall survival in lung cancer patients receiving platinum-based chemotherapy.Patients with the C allele of rs5030740 are regarded as protective allele of the prolonged progression-free survival.Patients with the A/A or A/G genotype of rs1776148 and the A/A genotype of rs1047840 have longer overall survival than G/G genotype of rs1776148 and A/G or G/G genotype of rs1047840. 70n addition to predicting prognosis, SNPs can be used in determining therapeutic strategies.The therapeutic efficacy suffers from large patient variability.Genetic variants often alter the sensitivity to the treatments in clinical practice.For example, rs712829 (216G/T) and rs4644 (191C/A) in EGFR are predictive of sensitivity to gefitinib.Mechanically, rs712829 is located in the binding site for the transcription factor Sp1.The T allele promotes Sp1 binding, enhances EGFR transcription, and increases sensitivity to gefitinib.rs4644 is located in the transcriptional start site of the EGFR promoter.The A allele increases promoter activity and protein expression and therefore increases sensitivity to gefitinib. 71rs2231142 (421C/A) in ABCG2 has been correlated with drug transport.The A allele reduces TKI transport and increases the accumulation of gefitinib, which results in adverse effects. 
71Patients with the H19 rs2839698 A allele have a smaller chance of response to platinumbased chemotherapy. 72rs1052566 (A273V) in BRMS1v2 is associated with aggressive tumors in LUAD.The A allele of rs1052566 increases c-fos, thereby upregulating CEACAM6 , which drives metastasis.T5224, a c-fos pharmacologic inhibitor, suppresses metastases in mice bearing A/A tumors. 73rs1663689 A enhances ADGRG6 expression, which elevates the downstream cyclic adenosine monophosphate (cAMP)-protein kinase A (PKA) signaling.rs1663689 A/A tumors are more sensitive to the PKA inhibitor H89 than the rs1663689 C/C tumors. 21The A allele of rs16906252 in MGMT is associated with increased MGMT methylation and lower MGMT expression.MGMT can reduce the tumor response to temozolomide.Thus, lung cancer patients with rs16906252-A may benefit from temozolomide treatment. 74Although the clinical treatment strategies based on germline variants have not been implemented at present, these studies demonstrate the utility for SNVs in predicting drug sensitivity of tumors, highlighting their important role in precision medicine. Outlook SNVs regulate cellular behavior and subsequent disease phenotypes; therefore, SNVs can be used to select the appropriate therapeutic strategies.To achieve this goal, understanding the causal function of SNVs is needed.SNVs reside in different regions and function through different mechanisms.SNVs in coding regions change the sequence of amino acids, and subsequently change the structure and biological function of relative proteins.However, over 90% of disease-associated SNVs are located in non-coding regions of the genome, often at considerable genomic distances from annotated genes.These non-coding SNVs either change the sequences of noncoding RNAs or change the binding affinity of transcription factors and thereby posttranscriptionally regulate or transcriptionally regulate their downstream target genes ( Fig. 4 ).Although some well-characterized non-coding SNVs regulate their neighboring genes, assignment based on linear proximity is error prone, as many cis-regulatory elements map large distances away from their targets, bypassing the nearest gene, which makes identifying their downstream genes problematic.Therefore, the functions of most lung cancer-associated non-coding SNVs remain unknown.Developing efficient strategies to decipher the regulatory pathways for non-coding SNVs is needed.Given that long-range regulation requires direct physical interactions in eukaryotes, genomic screening for non-coding SNVinteracting genes will serve as a strategy to identify target genes. Fig. 1 . Fig. 1.Schematic representation of the mechanism by non-coding SNV rs402710, which is located in the last intron of CLPTM1L , and regulates TERT via physical interactions with the TERT promoter.SNV: Single nucleotide variant. Fig. 3 . Fig. 3. Schematic representation of rs2853677 changing the binding affinity of the Snail1 transcription factor. Table 1 Mechanisms of non-coding SNVs. Table 2 SNVs and lung cancer susceptibility in different subgroups. Table 3 SNVs and lung cancer susceptibility with different smoking status.
2024-06-17T15:33:20.418Z
2024-06-01T00:00:00.000
{ "year": 2024, "sha1": "106fbd329818c04f70b8c774fe856a0b9e43fa51", "oa_license": "CCBYNCND", "oa_url": "https://doi.org/10.1016/j.pccm.2024.04.004", "oa_status": "HYBRID", "pdf_src": "PubMedCentral", "pdf_hash": "109bc45e8a7d0f97c0c1df5a5bfe1f2f94214cd7", "s2fieldsofstudy": [ "Medicine", "Biology" ], "extfieldsofstudy": [ "Medicine" ] }
4990419
pes2o/s2orc
v3-fos-license
MPPT Control Methods in Wind Energy Conversion Systems
Wind energy conversion systems have been attracting wide attention as a renewable energy source due to depleting fossil fuel reserves and environmental concerns as a direct consequence of using fossil fuel and nuclear energy sources. Wind energy, even though abundant, varies continually as wind speed changes throughout the day. The amount of power output from a wind energy conversion system (WECS) depends upon the accuracy with which the peak power points are tracked by the maximum power point tracking (MPPT) controller of the WECS control system irrespective of the type of generator used. This study provides a review of past and present MPPT controllers used for extracting maximum power from the WECS using permanent magnet synchronous generators (PMSG), squirrel cage induction generators (SCIG) and doubly fed induction generators (DFIG). These controllers can be classified into three main control methods, namely tip speed ratio (TSR) control, power signal feedback (PSF) control and hill-climb search (HCS) control. The chapter starts with a brief background of wind energy conversion systems. Then, the main MPPT control methods are presented, after which MPPT controllers used for extracting maximum possible power in WECS are presented.
Introduction
Wind energy conversion systems have been attracting wide attention as a renewable energy source due to depleting fossil fuel reserves and environmental concerns as a direct consequence of using fossil fuel and nuclear energy sources. Wind energy, even though abundant, varies continually as wind speed changes throughout the day. The amount of power output from a wind energy conversion system (WECS) depends upon the accuracy with which the peak power points are tracked by the maximum power point tracking (MPPT) controller of the WECS control system irrespective of the type of generator used. This study provides a review of past and present MPPT controllers used for extracting maximum power from the WECS using permanent magnet synchronous generators (PMSG), squirrel cage induction generators (SCIG) and doubly fed induction generators (DFIG). These controllers can be classified into three main control methods, namely tip speed ratio (TSR) control, power signal feedback (PSF) control and hill-climb search (HCS) control. The chapter starts with a brief background of wind energy conversion systems. Then, the main MPPT control methods are presented, after which MPPT controllers used for extracting maximum possible power in WECS are presented.
Wind energy background
Power produced by a wind turbine is given by [1]
$$P_m = \frac{1}{2}\,\rho\,\pi R^2\, v_w^3\, C_p(\lambda,\beta), \qquad (1)$$
where R is the turbine radius, $v_w$ is the wind speed, $\rho$ is the air density, $C_p$ is the power coefficient, $\lambda$ is the tip speed ratio and $\beta$ is the pitch angle. In this work $\beta$ is set to zero. The tip speed ratio is given by
$$\lambda = \frac{\omega_r R}{v_w}, \qquad (2)$$
where $\omega_r$ is the turbine angular speed. The dynamic equation of the wind turbine is given as
$$J\,\frac{d\omega_r}{dt} = T_m - T_L - F\,\omega_r, \qquad (3)$$
where J is the system inertia, F is the viscous friction coefficient, $T_m$ is the torque developed by the turbine, and $T_L$ is the torque due to the load, which in this case is the generator torque. The target optimum power from a wind turbine can be written as
$$P_{m\_opt} = \frac{1}{2}\,\rho\,\pi R^2\, C_{p\_max}\left(\frac{R}{\lambda_{opt}}\right)^{3}\omega_{r\_opt}^{3} = k_{opt}\,\omega_{r\_opt}^{3}. \qquad (4)$$
Maximum power point tracking control
Wind generation systems have been attracting wide attention as a renewable energy source due to depleting fossil fuel reserves and environmental concerns as a direct consequence of using fossil fuel and nuclear energy sources.
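Before the individual control methods are discussed, the turbine relations (1)–(4) above can be made concrete with a short numerical sketch. The parameter values below (air density, radius, optimum tip speed ratio, maximum power coefficient) are assumed purely for illustration and are not taken from the chapter.

```python
import numpy as np

# Assumed, purely illustrative turbine parameters:
RHO = 1.225          # air density [kg/m^3]
R = 2.5              # blade radius [m]
LAMBDA_OPT = 7.0     # assumed optimum tip speed ratio
CP_MAX = 0.45        # assumed maximum power coefficient

def tip_speed_ratio(omega_r, v_w):
    """Relation (2): lambda = omega_r * R / v_w."""
    return omega_r * R / v_w

def turbine_power(v_w, cp):
    """Relation (1): P_m = 0.5 * rho * pi * R^2 * v_w^3 * C_p."""
    return 0.5 * RHO * np.pi * R**2 * v_w**3 * cp

def optimum_power(omega_r):
    """Relation (4): P_opt = k_opt * omega_r^3, with k_opt built from rho, R, C_p_max, lambda_opt."""
    k_opt = 0.5 * RHO * np.pi * R**2 * CP_MAX * (R / LAMBDA_OPT) ** 3
    return k_opt * omega_r**3

for v_w in (6.0, 9.0, 12.0):
    omega_opt = LAMBDA_OPT * v_w / R   # rotor speed that keeps lambda at lambda_opt
    print(v_w,
          tip_speed_ratio(omega_opt, v_w),   # stays at LAMBDA_OPT
          turbine_power(v_w, CP_MAX),        # available power at C_p_max
          optimum_power(omega_opt))          # same value, expressed via (4)
```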
Wind energy, even though abundant, varies continually as wind speed changes throughout the day. The amount of power output from a WECS depends upon the accuracy with which the peak power points are tracked by the MPPT controller of the WECS control system, irrespective of the type of generator used. The maximum power extraction algorithms researched so far can be classified into three main control methods, namely tip speed ratio (TSR) control, power signal feedback (PSF) control and hill-climb search (HCS) control [2]. The TSR control method regulates the rotational speed of the generator in order to maintain the TSR at an optimum value at which the power extracted is maximum. This method requires both the wind speed and the turbine speed to be measured or estimated, in addition to requiring knowledge of the optimum TSR of the turbine, in order for the system to be able to extract the maximum possible power. Fig. 2 shows the block diagram of a WECS with TSR control. In PSF control, it is required to have knowledge of the wind turbine's maximum power curve and to track this curve through the control mechanisms. The maximum power curves need to be obtained via simulations or off-line experiments on individual wind turbines. In this method, the reference power is generated either using a recorded maximum power curve or using the mechanical power equation of the wind turbine, where the wind speed or the rotor speed is used as the input. Fig. 3 shows the block diagram of a WECS with a PSF controller for maximum power extraction. The HCS control algorithm continuously searches for the peak power of the wind turbine. It can overcome some of the common problems normally associated with the other two methods. The tracking algorithm, depending upon the location of the operating point and the relation between the changes in power and speed, computes the desired optimum signal in order to drive the system to the point of maximum power. Fig. 4 shows the principle of HCS control and Fig. 5 shows a WECS with an HCS controller for tracking maximum power points.
MPPT control methods for PMSG based WECS
The Permanent Magnet Synchronous Generator is favoured more and more in new designs because of its higher efficiency, high power density, the availability of high-energy permanent magnet material at a reasonable price, and the possibility of a smaller turbine diameter in direct drive applications. Presently, considerable research effort is directed towards designing a WECS that is reliable, compact, efficient, and has low wear and tear, low noise, and low maintenance cost; such a WECS is realisable in the form of a direct-drive PMSG wind energy conversion system. There are three commonly used configurations for WECS with these machines for converting variable voltage and variable frequency power to fixed frequency and fixed voltage power. The power electronics converter configurations most commonly used for PMSG WECS are shown in Fig. 6. Depending upon the power electronics converter configuration used with a particular PMSG WECS, a suitable MPPT controller is developed for its control. All three MPPT control methods are found to be in use for the control of PMSG WECS.
Tip speed ratio control
A wind speed estimation based TSR control is proposed in [3] in order to track the peak power points.
The wind speed is estimated using neural networks and, further, using the estimated wind speed and knowledge of the optimal TSR, the optimal rotor speed command is computed. The generated optimal speed command is applied to the speed control loop of the WECS control system. The PI controller controls the actual rotor speed to the desired value by varying the switching ratio of the PWM inverter. The control target of the inverter is the output power delivered to the load. This WECS uses the power converter configuration shown in Fig. 6 (a). The block diagram of the ANN-based MPPT controller module is shown in Fig. 7. The inputs to the ANN are the rotor speed ω_r and the mechanical power P_m, where P_m is obtained from the measured quantities using the relation given in [3].
Power signal feedback
In [4], the turbine power equation is used for obtaining the reference power for PSF based MPPT control of PMSG WECS. Fig. 8 shows the block diagram for the PSF control signal generation. Using equation (8), the reference power is obtained as a cubic function of the measured rotor speed, $P_{ref} = k_{opt}\,\omega_r^{3}$. The PSF control block generates the reference power command P_ref using (8), which is then applied to the grid side converter control system for maximum power extraction.
Hill climb search control
An advanced hill-climb search control with an intelligent memory has also been proposed. Its training mode will use the searched data to gradually train the intelligent memory to record the training experience. The algorithm will reuse the recorded data in application mode for fast execution. This "search-remember-reuse" process will repeat itself until an accurate memory of system characteristics is established. Therefore, after the algorithm is adequately trained, its power extraction performance is optimized. Since the intelligent memory is trained on-line during system operation, such a process is also referred to as an on-line training process. The structure of the advanced hill-climb search control algorithm is shown in Fig. 9. Every execution cycle starts with sampling of V_dc and P_0 and calculation of their differentials. The mode switch rule directs the control into one of three execution modes, namely initial mode, training mode, and application mode. The inverter current demand I_dm is calculated in that mode and fed to the inverter to regulate the system power output. I_dm is defined as the requested peak value of the sinusoidal inverter output current. The maximum power error driven mechanism (MPED) provides the system with a preliminary optimized operating point when the intelligent memory is empty. The reference signal for the MPED is P_max, which can only be reached when the wind is sufficiently high. The intelligent memory records the system maximum power points and the corresponding control variables at different operating conditions. The direct current demand control (DCDC) utilizes the optimized relationship between V_dc and I_dm recorded by the intelligent memory and generates the command I_dm based on the present value of V_dc. For details about the method please refer to [2]. In another HCS method, applied to a wind generator system, the problem of maximizing the WG output using the converter duty cycle as a control variable is effectively solved using the steepest ascent method given by
$$D_k = D_{k-1} + C_1\,\frac{\Delta P_{k-1}}{\Delta D_{k-1}}, \qquad (9)$$
where D_k and D_{k-1} are the duty cycle values at the k and k−1 sampling instants and C_1 is the step change. The method is based on the fact that at the maximum power point dP/dω_r = 0 and therefore dP/dD = 0, where D is the dc/dc converter duty cycle.
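A minimal sketch of a steepest-ascent duty-cycle search in the spirit of the control law (9) is shown below. The power-measurement callback, the toy single-peaked power curve, the gain C_1, and the duty-cycle limits are all assumptions made for illustration; in a real WECS the power would be a sampled electrical measurement and the gain would have to be scaled to the plant's power level.

```python
def hill_climb_duty_cycle(measure_power, d_init=0.5, c1=1e-4, d_min=0.05, d_max=0.95, steps=200):
    """Steepest-ascent HCS on the dc/dc converter duty cycle:
    D_k = D_{k-1} + C1 * (delta-P / delta-D), using the last observed slope."""
    d_prev, p_prev = d_init, measure_power(d_init)
    d = min(d_max, d_prev + 0.01)          # initial perturbation to obtain a first slope estimate
    for _ in range(steps):
        p = measure_power(d)
        slope = (p - p_prev) / (d - d_prev) if d != d_prev else 0.0
        d_prev, p_prev = d, p
        d = min(d_max, max(d_min, d + c1 * slope))   # climb towards increasing power
    return d, p_prev

# Toy stand-in for the plant: a single-peaked power-vs-duty-cycle curve (assumption):
demo_curve = lambda d: 1000.0 - 4000.0 * (d - 0.63) ** 2
print(hill_climb_duty_cycle(demo_curve))   # settles close to the duty cycle giving peak power
```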
Such duty-cycle-based searching is possible because power as a function of the duty cycle has a single extremum point coinciding with the maximum power point of the WG, and the dc/dc converter duty-cycle adjustment according to the control law of (9) ensures convergence to the maximum power point under any wind-speed condition. A HCS control method based on the limit cycle is proposed in [6]. The MPPT control is performed via an integrator ramping up or down the current command signal of the grid side converter using the error in the dc link voltage regulated by a boost chopper. The reference current increases until the maximum power is obtained; however, if it is increased further, the dc link voltage cannot be kept constant because the power equilibrium cannot be maintained. Therefore, the dc link voltage begins to decrease, and if it goes below a certain limit, the integrator gain is changed to a negative value, decreasing the value of the reference current. The MPPT control exhibits nonlinear oscillations about the maximum power point, called the limit cycle. In this method, the generated output power is automatically maximized by utilizing the inherent limit cycle phenomenon of the system itself without requiring any information from the generator side, e.g. rotation speed, torque or instantaneous power. In [7,8] a disturbance injection based HCS is proposed. The control algorithm injects a sinusoidal perturbation signal into the chopper. Then, the system output power is sampled at π/2 and 3π/2 of each cycle, the difference of which decides the next perturbation. The method does not require wind speed or rotor speed sensing. In the HCS method proposed in [9], by controlling the output power as well as adjusting the electrical torque, the speed of the generator is indirectly controlled and the optimum speed for driving the power to the maximum point is obtained. The maximum power error driven mechanism operates like a traditional hill-climbing method and drives the output power gradually to its maximum value by regulating the direction of the current command according to the power variation trend. The maximum power differential speed control produces an additional step of current command based on the instantaneous difference of generator speeds, so that it can prevent the wind turbine from stalling at suddenly dropping wind speed and achieve the objective of maximum power extraction. It adds a faster control index into the control value, which is proposed to be an exponential function of the differential generator speed, and therefore causes a sharp increase or decrease in the generator current command when the wind speed increases or decreases suddenly. The controller generates the current command for controlling the grid side converter. The method does not require wind speed measurement. In [10] a variable tracking step is used to track the maximum power point. The constant step size used in conventional controllers is replaced with a scaled measure of the slope of power with respect to the perturbed generator speed, ΔP/Δω. The variable step uses a larger step size
The control algorithm allows the generator to track the maximum power points of the wind turbine system under fluctuating wind conditions. The algorithm proposed initiates the TSR control with an approximate optimal TSR value. When the measured wind velocity is found to be stable, the algorithm switches to HCS to search for the true optimal point. When the true peak is reached, a memory table of the optimum generator speed versus the corresponding wind velocity is updated and then, the TSR is corrected. When the wind speed varies, the rotor speed reference is applied from the memory if a recorded data at current wind speed is present in the memory and if not, it is calculated using TSR. The MPPT control signal is given to the boost chopper for tracking the maximum power points. The method requires both wind speed and rotor speed measurement. In [13] MPPT control of PMSG WECS is implemented via a dc-dc boost converter. The proposed MPPT strategy is based on directly adjusting the dc-dc converter duty cycle according to the result of the comparison of successive WTG output power measurements. The WECS MPPT algorithm operates by constantly perturbing the rectified output voltage V dc of the WECS via the dc-dc boost converter duty cycle and comparing the actual output power with the previous perturbation sample. If the power is increasing, the perturbation will continue in the same direction in the following cycle so that the rotor speed will be increased, otherwise the perturbation direction will be inverted. When the optimal rotational speed of the rotor for a specific wind speed is reached, the HCS algorithm will have tracked the maximum power point and then will settle at or around this point. In [14] a buck-boost converter circuit is used to achieve the maximum power control of wind turbine driven PMSG WECS. The PMSG is suitably controlled according to the generator speed and thus the power from a wind turbine settles down on the maximum power point using the proposed MPPT control method. The method does not require the knowledge of wind turbine's maximum power curve or the information on wind velocity. It uses the dc link power as its input and the output is the chopper duty cycle. The HCS MPPT control method in [15] uses power as the input and torque as the controller output. The optimum torque output of the controller is applied to the torque control loop of a DTC controlled PMSG. The controller does not require wind speed sensing. The HCS MPPT control method presented in [16] combines the benefits of two of the commonly used MPPT methods: (i) the tracking method based on the optimum power versus speed characteristic and (ii) the HCS. The algorithm measures generator rotor speed and computes optimum torque T opt , the torque which maximizes power. The actual torque T t is also calculated. For a small error between the optimal and measured torque ΔT, the system performs a perturb and observe (P&O) process, based on the calculation of actual power, overlooking the use of the optimum T -ω characteristic. However, if the ΔT exceeds a certain limit the duty cycle is commanded according to the optimum characteristic. In other words, the system tracks the maximum power point through a P&O process under normal circumstances; however, it uses the predefined T -ω characteristic in case the P&O algorithm is thrown off due to heavy disturbances such as sudden wind speed changes or improper initialization. Neural networks based MPPT controller is presented in [17]. 
The proposed method uses a Jordan recurrent multilayer ANN with one hidden layer. The weights of the networks are continuously modified by back propagation during the operation of the WECS with online training. The control system continuously searches for ways to reach the peak power point. The optimum rotor speed, which is the output of the controller, is used as the reference speed for the vector controlled machine side converter control system.
MPPT control methods for SCIG based WECS
The use of induction generators (IG) is advantageous since they are relatively inexpensive, robust, and require low maintenance. The nature of the IG is unlike that of the PMSG; it needs bidirectional power flow in the generator-side converter since it requires external reactive power support from the grid. Modern IG WECS are equipped with a PWM back-to-back frequency converter, which also allows advanced control algorithms to be implemented. However, other converter configurations are possible and can be found in the literature. A SCIG WECS with a back-to-back converter configuration is shown in Fig. 10. The MPPT control in such a system is realized using the machine side control system. All the existing MPPT control algorithms can be implemented for the control of IG WECS.
Tip speed ratio control
In [13,18,19], TSR control methods of MPPT control for SCIG WECS are presented. In the TSR control method presented in [18] the wind speed is measured for obtaining the optimum rotor speed using the value of the optimum tip speed ratio. The optimum TSR is obtained from the turbine's C_p–λ curve. The rotor speed required for implementing speed feedback control is estimated using a speed observer. The speed control is exercised using a fuzzy neural network controller. Wind-speed estimation based MPPT control is proposed in [3,19]. In [3], an ANN wind speed estimation based TSR control method was used for implementing MPPT control of SCIG WECS. Here, the optimum speed command was generated by the MPPT controller for the speed control loop of the machine side converter control system, enabling the WECS to extract optimum energy. The method has been presented in section (4.1). The wind speed estimation method in [19] is based on the theory of support-vector regression (SVR). The inputs to the wind-speed estimator are the wind-turbine power and rotational speed. A specified model, which relates the inputs to the output, is obtained by offline training. Then, the wind speed is determined online from the instantaneous inputs. The estimated wind speed is used for MPPT control of the SCIG WECS.
Power signal feedback
In [20], a fuzzy logic controller is used to track the maximum power point. The method uses wind speed as the input in order to generate the reference power signal. The maximum power output P_max of the WECS at different wind velocities v_w is computed, and the data obtained are used to relate P_max to v_w through a polynomial curve fit, denoted (12). The reference power P_ref at the rectifier output is computed using the maximum power given by (12). The actual power output of the rectifier P_o is compared to the reference power P_ref and any mismatch is used by the fuzzy logic controller to change the modulation index M for the grid side converter control.
Hill climb search control
HCS control of SCIG WECS is presented in [21,22]. In [21], a fuzzy logic based HCS controller for MPPT control is proposed. The block diagram of the fuzzy controller is shown in Fig. 11.
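Before turning to the details of the fuzzy HCS controllers of [21,22], the wind-speed-based PSF scheme of [20] just described can be sketched as follows. The commissioning data points, the polynomial order, and the plain proportional correction (standing in for the fuzzy logic controller) are illustrative assumptions, not the actual design of [20].

```python
import numpy as np

# Assumed commissioning data relating wind speed to maximum WECS output power:
v_samples = np.array([4.0, 6.0, 8.0, 10.0, 12.0])
p_max_samples = np.array([90.0, 310.0, 730.0, 1430.0, 2470.0])

# Polynomial curve fit P_max = f(v_w), in the spirit of relation (12):
coeffs = np.polyfit(v_samples, p_max_samples, deg=3)

def reference_power(v_w):
    """Reference power at the rectifier output for the measured wind speed."""
    return float(np.polyval(coeffs, v_w))

def adjust_modulation_index(m, p_ref, p_o, gain=1e-4, m_min=0.1, m_max=1.0):
    """Stand-in for the fuzzy correction: nudge the modulation index M in
    proportion to the mismatch between reference and actual rectifier power."""
    m_new = m + gain * (p_ref - p_o)
    return min(m_max, max(m_min, m_new))

# One illustrative control step (invented operating point):
v_w, p_o, m = 9.0, 950.0, 0.7
p_ref = reference_power(v_w)
print(p_ref, adjust_modulation_index(m, p_ref, p_o))
```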
In the proposed method, the controller, using Po as its input, generates at its output the optimum rotor speed. Further, the controller uses the rotor speed as an additional input in order to reduce sensitivity to speed variation. The increment or decrement in output power due to an increment or decrement in speed is estimated. If the change in power ΔPo is positive, with the last change in speed Δωr also positive, the search is continued in the same direction; otherwise, the direction of the search is reversed. In order to avoid getting trapped in local minima, the output Δωr is added with some amount of the last change, LΔωr, in order to give some momentum and continue the search. The scale factors KPO and KWR are generated as a function of generator speed so that the control becomes somewhat insensitive to speed variation. For details please refer to [21]. In [22], a fuzzy logic control is applied to generate the generator reference speed, which tracks the maximum power point at varying wind speeds. The principle of the FLC is to perturb the generator reference speed and to estimate the corresponding change of output power Po. If the output power increases with the last increment, the searching process continues in the same direction. On the other hand, if the speed increment reduces the output power, the direction of the searching is reversed. The block diagram of the proposed controller is shown in Fig. 12. The fuzzy logic controller is efficient in tracking the maximum power point, especially in the case of frequently changing wind conditions. The controller tracks the maximum power point and extracts the maximum output power under varying wind speeds. The change in output power and the last change in the reference speed are used as the control input signals, and the output of the controller is the new change in reference speed which, after being added to the previous speed command, forms the present reference speed. For more details, please refer to [22]. MPPT control methods for DFIG based WECS The PMSG WECS and SCIG WECS have the disadvantage of a power converter rated at 1 p.u. of total system power, making them more expensive. Inverter output filters and EMI filters are rated for 1 p.u. output power, making the filter design difficult and costly. Moreover, converter efficiency plays an important role in total system efficiency over the entire operating range. A WECS with DFIG uses a back-to-back converter configuration, as shown in Fig. 13. The power rating of such a converter is lower than the machine's total rating, as the converter does not have to transfer the complete power developed by the DFIG. Such a WECS has reduced inverter cost, as the inverter rating is typically 25% of total system power, while the speed range of the variable speed WECS is 33% around the synchronous speed. It also has reduced cost of the inverter filters and EMI filters, because the filters are rated for 0.25 p.u. total system power, and inverter harmonics present a smaller fraction of total system harmonics. In this system power factor control can be implemented at lower cost, because the DFIG system basically operates similarly to a synchronous generator. The converter has to provide only the excitation energy. The higher cost of the wound rotor induction machine over the SCIG is compensated by the reduction in the sizing of the power converters and the increase in energy output. The DFIG is superior to the caged induction machine due to its ability to produce above rated power. The MPPT control in such a system is realized using the machine side control system. Fig. 13. DFIG WECS. Tip speed ratio control TSR control is possible with wind speed measurement or estimation.
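Whichever way the wind speed is obtained, the TSR control law itself is simple: the speed reference is the wind speed scaled by the optimum tip speed ratio and divided by the blade radius, and a speed loop drives the generator to that reference. The Python sketch below is illustrative only; the parameter values, gains, and class name are assumptions rather than values from the cited works.

```python
class TsrMpptController:
    """Generic tip-speed-ratio MPPT loop; all parameters are illustrative."""

    def __init__(self, lambda_opt=8.1, blade_radius=2.5, kp=5.0, ki=0.8):
        self.lambda_opt = lambda_opt        # optimum TSR from the Cp-lambda curve
        self.blade_radius = blade_radius    # blade radius in metres
        self.kp, self.ki = kp, ki           # PI gains of the speed control loop
        self._integral = 0.0

    def speed_reference(self, wind_speed):
        """Rotor speed reference (rad/s) that holds the turbine at lambda_opt."""
        return self.lambda_opt * wind_speed / self.blade_radius

    def torque_command(self, wind_speed, rotor_speed, dt):
        """PI speed loop producing a torque (or q-axis current) command."""
        error = self.speed_reference(wind_speed) - rotor_speed
        self._integral += error * dt
        return self.kp * error + self.ki * self._integral
```

A wind speed estimator, such as the ANN or SVR schemes discussed earlier, would simply replace the measured wind_speed argument.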
In [23], a wind speed estimation based MPPT controller is proposed for controlling a brushless doubly fed induction generator WECS. The block diagram of the TSR controller is shown in Fig. 14. The optimum rotor speed ωopt, which is the output of the controller, is used as the reference signal for the speed control loop of the machine side converter control system. The characteristic curves required by the estimator, one involving the generator input power PT and one relating Ic to the efficiency η, were implemented using RBF neural networks. Then, the generator input power PT is calculated from the maximum efficiency ηmax and the measured output power Po. The next step involves wind speed estimation, which is achieved using the Newton-Raphson or bisection method. The estimated wind speed information is used to generate the optimum generator speed command for optimum power extraction from the WECS. For details of the proposed method please refer to [23]. The method is not new; similar work was earlier implemented for controlling a Brushless Doubly Fed Generator by Bhowmik et al. [24]. In this method the Brushless Doubly Fed Generator was operated in synchronous mode and the input to the controller was only the output power of the WECS. Power signal feedback control PSF control along with feedback linearization is used in [25] for tracking the maximum power point. The input-output feedback linearization is done using active-reactive powers, d-q rotor voltages, and active-reactive powers as the state, input and output vectors respectively. The references to the feedback linearization controller are the command active and reactive powers. The reference active power is obtained by subtracting the inertia power from the mechanical power, which is obtained by multiplying speed with torque. A disturbance torque observer is designed in order to obtain the torque. A fuzzy logic based PSF controller is presented in [26]. Here, a data driven design methodology capable of generating a Takagi-Sugeno-Kang (TSK) fuzzy model for maximum power extraction is proposed. The controller has two inputs and one output. The rotor speed and generator output power are the inputs, while the output is the estimated maximum power that can be generated. The TSK fuzzy system, by acquiring and processing the inputs at each sampling instant, calculates the maximum power that may be generated by the wind generator, as shown in Fig. 15. Fig. 15. TSK fuzzy MPPT controller. The approach is explained by considering the turbine power curves, as shown in Fig. 16. If the wind turbine initially operates at point A, the control system, using rotor speed and turbine power information, is able to derive the corresponding optimum operating point B, giving the desired rotor speed reference ωB. The generator speed will therefore be controlled in order to reach the speed ωB, allowing the extraction of the maximum power PB from the turbine. Hill climb search control HCS control methods for DFIG WECS are presented in [27][28][29]. In [27], a simple HCS method is proposed wherein the output power information required by the MPPT control algorithm is obtained using the dc link current and generator speed information. These two signals are the inputs to the MPPT controller, whose output is the command speed signal required for maximum power extraction. The optimum speed command is applied to the speed control loop of the grid side converter control system. In this method, a signal proportional to Pm is computed and compared with the previous value. When the result is positive, the process is repeated for a lower speed.
The outcome of this next calculation then decides whether the generator speed is again to be increased or decreased by a decrease or increase of the dc link current, through setting the reference value of the current loop of the grid side converter control system. Once started, the controller continues to perturb itself by running through the loop, tracking to a new maximum once the operating point changes slightly. The output power increases until a maximum value is attained, thus extracting the maximum possible power. The HCS control method presented in [28] operates the generator in speed control mode with the speed reference dynamically modified in accordance with the magnitude and direction of change of active power. The optimum power search algorithm proposed here uses the fact that dPo/dω = 0 at the peak power point. The algorithm dynamically modifies the speed command in accordance with the magnitude and direction of change of active power in order to reach the peak power point. In [29], the proposed MPPT method combines the ideas of sliding mode (SM) control and extremum seeking control (ESC). In this method only the active power of the generator is required as the input. The method does not require wind velocity measurement, wind turbine parameters or rotor speed. The block diagram of the control system is shown in Fig. 17. In the figure ρ is the acceleration of Popt. When the sign of the derivative of ε changes, a sliding mode motion occurs and ω* is steered towards the optimum value while Po tracks Popt. The speed reference for the vector control system is the optimal value resulting from the MPPT based on sliding mode ESC. Case study An MPPT controller for variable speed WECS proposed in [30] is presented in this work as a case study. The method proposed in [30] does not require the knowledge of wind speed, air density or turbine parameters. The MPPT controller generates at its output the optimum speed command for the speed control loop of the rotor flux oriented vector controlled machine side converter control system, using only the instantaneous active power as its input. The optimum speed commands, which enable the WECS to track peak power points, are generated in accordance with the variation of the active power output due to the change in the command speed generated by the controller. The proposed concept was analyzed in a direct drive variable speed PMSG WECS with a back-to-back IGBT frequency converter. Vector control of the grid side converter was realized in the grid voltage vector reference frame. The complete WECS control system is shown in Fig. 18. The MPPT controller computes the optimum speed for the maximum power point using information on the magnitude and direction of the change in power output due to the change in command speed. The flow chart in Fig. 19 shows how the proposed MPPT controller is executed. The operation of the controller is explained below. The active power Po(k) is measured, and if the difference between its values at the present and previous sampling instants, ΔPo(k), is within specified lower and upper power limits PL and PM respectively, then no action is taken; however, if the difference is outside this range, then the necessary control action is taken. The control action taken depends upon the magnitude and direction of the change in the active power due to the change in command speed. If the power at the present sampling instant is found to have increased, i.e. ΔPo(k) > PM, the command speed is incremented in the direction of the previous speed change by an amount proportional to the product of ΔPo(k) and a factor C; if the power is found to have decreased, the direction of the change in command speed is reversed. The values of C are decided by the speed of the wind.
During the maximum power point tracking control process the product mentioned above decreases slowly and finally equals zero at the peak power point. In order to have good tracking capability at both high and low wind speeds, the value of C should vary with the wind speed; however, as the wind speed is not measured, the command rotor speed is used to set its value. As the change in power with the variation in speed is lower at low speed, the value of C used at low speed is larger, and its value decreases as the speed increases. In this work, its values are determined by running several simulations with different values and choosing the ones which show the best results. The values of C used in implementing the control algorithm are computed by linear interpolation between 1.1 at 0 rad/s, 0.9 at 10 rad/s, 0.6 at 20 rad/s, 0.32 at 30 rad/s, 0.26 at 40 rad/s, 0.25 at 50 rad/s and 0.24 at 55 rad/s. During the simulation, the d axis command current of the machine side converter control system is set to zero; whereas, for the grid side converter control system, the q axis command current is set to zero. Simulation was carried out for two speed profiles applied to the WECS incorporating the proposed MPPT controller. Initially, a rectangular speed profile with a maximum of 9 m/s and a minimum of 7 m/s was applied to the PMSG WECS in order to see the performance of the proposed controller. The wind speed, rotor speed, power coefficient and active power output for this case are shown in Fig. 20. Good tracking capability was observed. Then, a real wind speed profile was applied to the PMSG wind generator system. Fig. 21 shows, for this case, the wind speed, rotor speed, power coefficient and active power. The maximum value of CP of the turbine considered was 0.48, and it was found that in the worst case the value of CP was 0.33, which shows good performance of the proposed controller. It can therefore be concluded from the results of the simulation that the proposed control algorithm has a good capability of tracking peak power points. The method also has good application potential in other types of WECS. Conclusions Wind energy conversion systems have been receiving the widest attention among the various renewable energy systems. Extraction of the maximum possible power from the available wind power has been an important research area, within which wind speed sensorless MPPT control has been a very active topic. In this chapter, a concise review of the MPPT control methods proposed in the literature for controlling WECS with various generators has been presented. There is a continuing effort to make converter and control schemes more efficient and cost effective in the hope of developing an economically viable solution to increasing environmental issues. Wind power generation has grown at a remarkable rate in the past decade and will continue to do so as power electronic technology continues to advance.
Co-Infection of COVID-19 and Pneumocystosis Following Rituximab Infusion—A Case Report Immunocompromised patients with respiratory viral infections are at increased risk of fungal superinfections, including Pneumocystosis. Within the scope of the COVID-19 pandemic, Pneumocystis jirovecii co-infections are being increasingly reported. Differential diagnosis often creates a dilemma, due to multiple overlapping clinical and radiographic features. Awareness of fungal co-infections in the context of the COVID-19 pandemic is crucial to initiate prophylactic measures, especially in high-risk individuals. We report the second case of Pneumocystis jirovecii pneumonia and COVID-19 co-infection in a renal transplant recipient in Poland. Introduction Immunocompromised patients frequently develop infection caused by the pathogen Pneumocystis jirovecii, leading to the development of pneumonia (PJP), which can be life-threatening [1]. Patients who have cancer, acquired immune deficiency syndrome (AIDS), or who are transplant recipients are particularly vulnerable to PJP. Symptoms of pneumonia include fever, shortness of breath or cough, and in severe cases respiratory failure [2]. However, it should be noted that currently the majority of PJP patients are not infected with human immunodeficiency virus (HIV) [3]. COVID-19, caused by the severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2), is a disease that specifically attacks the respiratory system. It spreads through droplet particles excreted by the patient during coughing or sneezing [4]. In the era of the COVID-19 pandemic, the diagnosis of pneumonia caused by PJP creates a challenge due to the fact that both diseases show common clinical symptoms and a similar computed tomography (CT) scan image [5]. We report a case of a renal transplant recipient who developed PJP with concomitant COVID-19 infection. Despite characteristic CT findings for COVID-19, PCR test results for COVID-19 were repeatedly negative/inconclusive. Case Report A 48-year-old male who underwent kidney transplantation (30 November 2013) due to primary focal segmental glomerulosclerosis (FSGS) was admitted in February 2021 to the Department of Nephrology and Transplantation Medicine of the University Clinical Hospital in Wrocław, Poland, because of elevated CRP levels (250.4 mg/L) found in the Transplant Outpatient Clinic. He presented in moderately severe condition, with fever (38-39 °C), cough, dyspnoea and myalgia. The symptoms had started several days before. He denied anosmia and ageusia. Four months prior, an allograft biopsy had been performed (21 September 2020) due to the deterioration of the filtration function of the transplanted kidney, with an increase in serum creatinine concentration (1.58 mg/dL; reference range < 1.0 mg/dL), proteinuria (1.0 g/L; reference range < 0.3 g/L) and haematuria. The biopsy revealed a lymphocytic infiltration located predominantly in the fibrotic interstitial area (total inflammation ti 8-10% of the cortical part of the biopsy, IF/TA grade I). In addition, the morphological image of the glomeruli corresponded to focal segmental glomerulosclerosis (FSGS), not otherwise specified. Afterwards, it was decided to increase the dosages of prednisone (to 20 mg for one month), mycophenolate mofetil to 1 g twice daily (the previous dose was 0.5 g twice daily), and tacrolimus to reach levels between 7 and 8 ng/mL.
Prior in 2007, the patient developed end-stage renal failure in the course of FSGS and was subsequently qualified for renal transplantation surgery. Due to pathological and clinical signs of FSGS recurrence in the graft, antiCD-20 therapy was planned. During the next hospitalization, an abdominal ultrasound was performed, which showed an increased echogenicity of the parenchymal layer and a hypoechoic rim along the dorsum of the transplanted kidney, indicating oedema. In the beginning of December 2020, the patient received an infusion of Rituximab (Mabtera) 500 mg, which was well tolerated and effective. After the infusion, a decrease in the number of total lymphocytes (from 3.46 × 10 3 /µL to 0.47 × 10 3 /µL), and an improvement in blood pressure control was observed. Laboratory tests carried out after the infusion presented T lymphocytes-688 cells/µL, and a severe B lymphocytes depletion-0 cells/µL. In comparison, previous measurements in October 2020 showed B lymphocytes at 569 cells/µL and T lymphocyte levels at 2377 cells/µL. On re-admission to the Clinic in February, the heart rate of the patient was 96/min, blood pressure 140/90 mmHg and O 2 saturation 70-75% without administered oxygen (increased to 98% after oxygen administration). Laboratory studies showed: d-dimer 0.8 µg/mL, CRP 250 mg/L, NT-proBNP 2141.3 pg/mL and LDH 759 U/L. An oropharyngeal swab for SARS-CoV-2 polymerase chain reaction (PCR) was negative. Repeated testing on the following day also gave a negative result. In the chest X-ray, hilar thickening was visible-suggesting inflammatory changes that were not visible on a comparison image from November 2020. A CT scan of the chest showed bilateral areas of opaque ground-glass opacifications with cobblestone pattern and small consolidations located subpleurally in the lower lung lobes-consistent with changes in the course of viral inflammation. Respiratory support using AirVo (80%, 60 L) was initiated. Two days later SpO2 sunk to 80%, which resulted in the decision to intubate the patient. Quantitative PCR test for Pneumocystis jirovecii gave a positive result, while PCR for SARS-CoV-2 was inconclusive. Meanwhile, several infectious investigations for viruses, bacteria and fungi were performed. A multitest for respiratory pathogens including Influenza virus (A, B), RSV, Parainfluenza virus, Rhinovirus, Enterovirus, Legionella pneumophila, Haemophilus influenza, Streptococcus pneumoniae and Moraxella catarrhalis as well as Chlamydophila pneumoniae and Mycoplasma pneumoniae was negative. Tuberculosis infection was excluded by a QuantiF-ERON test. Fungal antigen tests for Aspergillus sp., Candida sp. and Legionella also revealed negative results. Despite strict adherence to hospital mitigation strategies, our patient transmitted COVID-19 to several fellow patients in the nephrological department. The patient was transferred to the intensive care unit, where mechanical ventilation and pharmacological treatment including analgosedation (propofol, fentanyl), and immunosuppressive treatment (tacrolimus, methylprednisolone) continued. Additionally, Piperacillin-tazobactam and Co-trimoxazole were introduced. On neurological examination pupils were even, narrow, and reactive. An ultrasound examination of the lungs revealed numerous B-lines and subpleural consolidation, predominantly on the right side. A few days later renal replacement therapy was started, due to worsening metabolic acidosis and rising creatinine (2.71 mg/dL) and urea levels (131 mg/dL). 
On 20 February 2021, cardiovascular performance improved, and thus catecholamine administration was stopped. Pharmacological treatment was continued with the addition of empirical broad-spectrum antibiotics. These included meropenem, levofloxacin, vancomycin and colistin. Antibiotic therapy was later modified according to the results of microbiological cultures. Two days later, the patient developed extreme respiratory insufficiency and hypoxemia, despite ventilation with 100% oxygen. Ultrasound examination showed no signs of lung emphysema, but multiple subpleural and interstitial consolidations and features of pulmonary oedema in the lower parts of the lungs. Despite recruitment manoeuvres and ventilation in the prone position, the state of the patient deteriorated, with severe respiratory acidosis (pCO2 92 mmHg and pH 6.963), an increase in d-dimer to 1.2 µg/mL and procalcitonin to 1.3 ng/mL in laboratory studies, and refractory hypotension (BP 70/40 mmHg) in spite of vasopressor infusion and renal replacement therapy. On 2 March, he died from multiple organ dysfunction syndrome and respiratory failure. No autopsy was performed. Discussion Coronavirus disease (COVID-19), caused by the severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2), has led to a global pandemic that has resulted in approximately 224,511,226 confirmed cases, including 4,627,540 deaths, posing a significant burden to the health care system (World Health Organisation report as of 13 September 2021). Approximately 5% of ambulatory patients and 20% of hospitalized patients require intensive care treatment, of which 40% result in a fatal outcome [6,7]. Fungal co-infections are known to significantly increase mortality [8]. Immunocompromised patients, such as patients after solid-organ transplantation, who develop COVID-19 have a higher probability of developing fungal superinfections in comparison to immunocompetent individuals [9]. To the best of our knowledge, we report the third case of COVID-19 and PJP co-infection in a renal-transplant recipient in Europe. One case was in a 47-year-old man in Poland [16], who was taking cyclosporine for immunosuppressive treatment and developed acute kidney injury caused by an interaction between cyclosporine and clarithromycin. The degree of immunosuppression is unclear, as exact lymphocyte counts are not disclosed. The other case of co-infection with COVID-19 and PJP in a kidney transplant recipient was in a 65-year-old male in Italy [17], whose baseline immunosuppressive regimen consisted of tacrolimus, mycophenolate mofetil and methylprednisolone. This patient showed persistently low lymphocyte counts, with CD4+ cells at 35 cells/mL and an inversed CD4+/CD8+ ratio, while our patient had CD3+ cells at 654 cells/mL and CD19+ values at 0 cells/mL as a consequence of the Rituximab infusion, which was given due to the risk of transplant rejection. In addition, our patient was also taking prednisone orally over the course of several months. Both patient cases resulted in a fatal outcome, which may be attributable to the extent of immunosuppression. Both the underlying disease and the treatment in immunocompromised patients with respiratory viral infection result in impaired immune responses that predispose patients to opportunistic mycoses [7,9,18]. Treatment with immunosuppressive agents, including calcineurin inhibitors (CNI), anti-rejection therapy and prolonged treatment with systemic steroids, are significant risk factors for PJP development [19].
Interestingly, one case reports co-infection in a 83-year old female patient without known underlying immunodeficiency, who presented with CD4+ count at 291 cells/µL [20]. In severe COVID-19 cases, absolute numbers of T lymphocytes, CD4+ T cells and CD8+ T cells can be remarkably low [21]. According to Menon [20], an infection with SARS-CoV-2 may cause CD4 lymphocyte depletion and suppressed functional immunity, which may predispose to Pneumocystis activation and proliferation. An observational study by Alanio et al. reported that P. jirovecii PCR was positive in 10 out of 108 (9.3%) patients with severe SARS-CoV-2 infection [22]. Of note, this group presented a higher frequency of long-term corticosteroid prescriptions. However, more than half of these patients had low serum Beta-D-glucan levels and did not receive treatment for PJP and finally presented with similar mortality as patients in the treated group, suggesting possible PJP colonization rather than co-infection [23]. On the contrary, a study by Blaize et al. declared no linkage between COVID-19 induced lymphocytopenia and PJP infection, by finding that only two among 145 patients (1.4%) with severe COVID-19 had a positive polymerase chain reaction (PCR) for Pneumocystis jirovecii [24]. In light of the current data, it is not possible to accurately determine the true incidence, risk factors and prognosis of COVID-19 patients with PJP coinfections. Immunosuppressive therapies and untreated HIV infection with low CD4+ cell count are presumably predisposing factors for PJP coinfection [23]. Multifocal ground-glass opacities with interlobular septal thickening are the key radiographic finding in COVID-19 and PJP, complicating differential diagnosis [11,31]. However, COVID-19-related ground glass changes frequently present with multi-lobar distribution, with a predilection for the lung peripheries [32]. In contrast, increasingly recognized characteristic radiologic findings of PJP include bilateral parenchymal opacities, most prominent in the upper lobes, with sparing of the lung bases [33,34]. Additionally, pulmonary cysts may occur in one third of patients with PJP [33][34][35]. The laboratory diagnosis of Pneumocystis pneumonia can be made by real-time quantitative (RTqPCR) assays from respiratory specimens, mainly of bronchoalveolar lavage fluids (BAL) and throat swabs [36]. SARS-CoV-2 RNA detection is made by RT-PCR from throat swabs, tracheal aspirates or bronchoalveolar lavage samples [37]. The sensitivity of RT-PCR depends on the type of specimen [38] and on the timing, as sensitivity drops from 100% to 40% after day 5 of symptom onset [39]. A limitation of our study is that we cannot define this case as proven COVID-19, as we were unable to detect SARS-CoV-2 by PCR in respiratory tract specimen. It is inconclusive why in our patient the SARS-CoV-2 PCR testing results remained negative, however epidemiological data as well as clinical and radiological symptoms strongly supported the diagnosis of COVID-19. The ideal treatment strategy for PJP and COVID-19 co-infection in transplant recipients remains unsettled. The British Transplantation Society (BTS) and UK Kidney Association (UKKA) released guidance for the management of transplanted patients with COVID-19 ( Figure 1). General principles suggest discontinuation of antiproliferative agents (such as azathioprine and mycophenolate mofetil), minimization of calcineurin inhibitors in early disease stages and reduction/discontinuation in progressive stages [40]. 
According to Baker et al., all renal transplant recipients with confirmation of Pneumocystis jirovecii in respiratory secretions should be treated for 14-21 days with co-trimoxazole orally or intravenously (15-20 mg/kg in three or four divided doses). Second-line treatment is pentamidine. Adjunctive glucocorticoid therapy may be considered in advanced disease [41]. Management of co-infection with PJP and COVID-19 can be complicated by the controversy of high-dose corticosteroid therapy, which is recommended in severe PJP [42], but not for early COVID-19 treatment [43]. Choudhari et al. suggest co-trimoxazole as adjuvant therapy in critically ill COVID-19 patients, due to its anti-inflammatory and immunomodulatory action [44]. Co-trimoxazole has proven efficacy in PJP prophylaxis in immunocompromised patients [45,46], hence it could be of value in critically ill COVID-19 patients. Conclusions In the scope of the COVID-19 pandemic, physicians have directed their focus to diagnosing COVID-19 in patients with respiratory symptoms, consequently creating a risk of neglecting differential diagnoses, including Pneumocystis jirovecii pneumonia. Moreover, numerous overlapping clinical and radiographic features of COVID-19 and PJP can pose a diagnostic challenge. We report a case of a kidney transplant recipient who developed co-infection of PJP and COVID-19, which resulted in a fatal outcome. With diagnostic and treatment delay directly affecting mortality, awareness of co-infections in the current COVID-19 pandemic is crucial in reducing morbidity and mortality. Because of a lack of evidence-based research, there are no guidelines for the treatment of COVID-19 in transplant patients, and scientific evidence about co-infection of COVID-19 and PJP in transplant recipients is scarce. Along with previous case reports of COVID-19 and concurrent PJP infection, we recommend Beta-D-glucan testing and underline the importance of systematic investigations for Pneumocystis jirovecii in deep respiratory specimens, especially in immunocompromised patients. Conflicts of Interest: The authors declare no conflict of interest.
AnVILWorkflow: A runnable workflow package for Cloud-implemented bioinformatics analysis pipelines Advancements in sequencing technologies and the development of new data collection methods produce large volumes of biological data. The Genomic Data Science Analysis, Visualization, and Informatics Lab-space (AnVIL) provides a cloud-based platform for democratizing access to large-scale genomics data and analysis tools. However, utilizing the full capabilities of AnVIL can be challenging for researchers without extensive bioinformatics expertise, especially for executing complex workflows. Here we present the AnVILWorkflow R package, which enables the convenient execution of bioinformatics workflows hosted on AnVIL directly from an R environment. AnVILWorkflow simplifies the setup of the cloud computing environment, input data formatting, workflow submission, and retrieval of results through intuitive functions. We demonstrate the utility of AnVILWorkflow for three use cases: bulk RNA-seq analysis with Salmon, metagenomics analysis with bioBakery, and digital pathology image processing with PathML. The key features of AnVILWorkflow include user-friendly browsing of available data and workflows, seamless integration of R and non-R tools within a reproducible analysis pipeline, and accessibility to scalable computing resources without direct management overhead. While some limitations exist around workflow customization, AnVILWorkflow lowers the barrier to taking advantage of AnVIL's resources, especially for exploratory analyses or bulk processing with established workflows. This empowers a broader community of researchers to leverage the latest genomics tools and datasets using familiar R syntax. This package is distributed through the Bioconductor project (https://bioconductor.org/packages/AnVILWorkflow), and the source code is available through GitHub (https://github.com/shbrief/AnVILWorkflow).
Introduction The NHGRI's Genomic Data Science Analysis, Visualization, and Informatics Lab-space (AnVIL) consortium was launched in 2018, aiming to democratize genomics data [1]. AnVIL enables easy sharing of genomics data by organizing databases, bioinformatics pipelines for large-scale data processing, and interactive downstream analysis in one Cloud-based platform. AnVIL [2], also the name of the platform from the AnVIL project, implements the FAIR data-sharing philosophy and provides a graphical user interface (GUI, supported by Terra [3]), making it more accessible for researchers without programming backgrounds. However, a GUI tends to be less efficient and slower than a command line interface (CLI), especially for bulk analyses, still requires learning a new platform, and does not support version control and text-based workflows, often included as best practices for reproducible computational research [4]. Bioconductor's AnVIL package is an AnVIL API wrapper that provides R-friendly, programming-based functionalities to leverage flexible and scalable cloud-based resources implemented in the AnVIL platform. With the AnVIL package, users can easily access workflows, data, and Cloud-based computing resources managed by AnVIL. However, the AnVIL package is not customized for workflow execution tasks. Instead, AnVIL covers all the resources related to the AnVIL platform, such as interaction with the repository for Docker-based genomic analysis tools and workflows (Dockstore [5]), leveraging cloud resources (Leonardo [6]), and data search and digestion (Gen3 [7]). Many AnVIL functions also expose API commands directly, requiring a deep understanding of the underlying AnVIL workspace structures and data models to use for workflow execution. Also, it is a general package without individual support for any workspace and provides no metadata curation. Because the majority of Bioconductor users focus on data analysis, a convenient R-friendly way of accessing and utilizing AnVIL resources is needed. Here, we present the AnVILWorkflow package to meet this need. The AnVILWorkflow package is a convenient, fit-for-purpose wrapper around the AnVIL package with the following features optimized for workflow execution: support for workflow-specific documentation; the ability to set up a Cloud environment with a single function call; error messages that are easy to interpret and actionable; and essential metadata curation for more efficient data browsing. Users can apply AnVILWorkflow to any workspace they can access, including 347 public workspaces (snapshot on 8.28.23) available to anyone with an AnVIL account. We present three use cases where we ran non-R-based bioinformatics analysis tools using conventional R syntax: Salmon [8], bioBakery [9], and PathML [10]. Salmon is a widely used RNA sequencing analysis tool for quantifying the expression of transcripts and is based on the command-line interface. Its downstream analysis involves many R/Bioconductor packages, such as DESeq2, edgeR, and limma. bioBakery is a widely used whole metagenomic shotgun (WMS) sequencing data analysis environment, mainly relying on Python. PathML is a general-purpose research toolkit for computational pathology, including many functionalities in digital pathology data analysis, such as stain normalization, nucleus segmentation, and tissue detection. PathML takes raw image files and returns the processed image data in an hdf5 format for further downstream analysis, including machine learning methods.
Overview AnVIL provides comprehensive resources for biomedical data analysis, including data (e.g., genomics), workflows for bulk analysis, and interactive analysis apps (i.e., Galaxy, Jupyter Notebooks, and RStudio) under the workspace. Among them, workflows are often a limiting factor in bioinformatics analysis due to the computing demands and bioinformatics expertise required. Thus, the AnVILWorkflow package makes the workflow-related resources from AnVIL more accessible and easier to use, especially for R users (Fig. 1). While AnVIL manages workflow orchestration and workspace metadata and provides default setups simplifying decision-making for users, users still need to manage the storage of their data and the cloud cost. Genomics data, especially in their raw and intermediate forms, are very large, so data storage can be costly as the sample size increases. Storage costs arise, and can be managed, in two ways: the storage itself and data transfer. For example, using regional storage instead of multi-region storage, cleaning up intermediate results, and storing infrequently accessed data in low-cost storage (e.g., nearline or coldline storage from Google Cloud) can reduce per-sample costs. Analyzing data stored in one region using Virtual Machine (VM) compute resources in a different region incurs data transfer charges, so centralizing all storage and computing in a single region can be more cost-efficient by not only reducing the storage cost but also avoiding data transfer charges. Currently, the AnVIL workspaces use us-central1 as the default region, and any artifacts generated from workflow execution, unless specified otherwise, are saved in the same-region bucket linked to the workspace. If users use the default region configured by AnVIL, bringing their data stored in the default region, us-central1, will save the data transfer charge. Additionally, open and controlled access genomic datasets hosted in AnVIL are stored in the us multi-region, so there are no storage and transfer charges for users using the default workspace configuration. Downloading data to the user's workstation or laptop is subject to charges, currently $0.08 to $0.12 per GB, depending on the amount of data [11] and the geography of the transfer; transfer from the US to another continent is more expensive than transfer within the US. While browsing existing resources through AnVILWorkflow is free, running workflows incurs computing costs. AnVILWorkflow is designed to use existing workflows, which usually predefine computing resources optimized for the types of analyses, simplifying computing-related cost management. You can further reduce the run cost by using call caching and preemptible instances. For example, if your workflow runs in fewer than 24 hours (a preemptible VM lasts 24 hours at most), you can save up to 80% by using preemptible VMs. The costs for a group of users can be efficiently managed through the AnVIL billing project. One billing account can be shared with others by simply adding email addresses under the billing project. The billing project offers details on each workspace, including the workspace owner and spend reports, so we can easily identify 'who' uses 'how much' for 'what'. In addition to the workspace-level expense reports, users can further enhance cost monitoring by configuring spend reporting [12]. This allows users to closely monitor the expenditure associated with each workflow execution.
Major functions Browse AnVIL resources. The AnVILBrowse function allows users to browse AnVIL resources using keywords. This function runs instantaneously because the AnVILWorkflow package includes a snapshot of metadata on all the publicly accessible AnVIL workspaces and their workflows and data. It performs basic metadata harmonization, allowing more efficient browsing and filtering, such as selecting workspaces based on the study size or participants' ages. Users can also browse non-public workspaces they have access to using the getMetaTables function; however, this process can take a while depending on the number of workspaces a user has access to. Run AnVIL workflows. The AnVILWorkflow package provides all the functionalities required to run workflows available in AnVIL from the local R session, from the environment setup to the output download. One prerequisite is to create an AnVIL account from the AnVIL web portal. The AnVIL account provides two required inputs to run workflows remotely: 1) the email address associated with the user's account and 2) the billing project name to cover the computing cost. AnVIL-hosted workflows can be run using four main functions: setCloudEnv, cloneWorkspace, runWorkflow, and getOutput. The setCloudEnv function accepts the AnVIL account email and billing project name and sets up your local R environment to access AnVIL and Cloud-computing resources. The cloneWorkspace function creates the user's copy of a 'template' workspace, and the runWorkflow function executes the workflow. The getOutput function can check the outputs from successfully executed workflows and download user-specified files to a local computer. User input can be provided through the updateInput function, which accepts two different forms of tables depending on the workflow: AnVIL's data model or URLs pointing to data files stored in Google Cloud buckets. The input data formats are already specified in the workflow scripts (Workflow Description Language, WDL [13]). Other accessory functions are available to monitor submission progress (monitorWorkflow), stop a submitted workflow (stopWorkflow), and get Dashboard content (getDashboard). Use cases The use cases demonstrated below include demo input data in the template workspaces, so the R scripts below can run the listed use cases from the local computer. Ready-to-run examples that can be used to test the process on the user's own AnVIL account are available in the AnVILWorkflow package vignette. GATK best practice pipelines [14] are not demonstrated here, but they are also available as AnVIL workspaces. The main features of the demo workspaces and their workflow-specific input data preparation process are described below. Bulk RNA sequencing data analysis The Salmon workflow uses AnVIL's data model and requires four essential inputs: fastq1, fastq2, fasta, and the transcriptome index name. This workflow can be easily applied to the consortium data hosted in AnVIL, which follow AnVIL's data model. With the default runtime environment configured for this workflow (1 CPU, 2GB memory, and 10GB SSD disk), processing 16 demo samples (32 fastq files, ~1GB per file) took about 30 minutes and cost $0.12.
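For this use case the whole round trip can be written as a short R script using the functions introduced above. The sketch below is illustrative: the function names are the package's exported functions, but the workspace name and argument names are placeholders, and the exact signatures are documented in the package vignette.

```r
library(AnVILWorkflow)

## Point the session at an AnVIL account and billing project
setCloudEnv(accountEmail = "user@example.com",          # placeholder
            billingProjectName = "my-billing-project")  # placeholder

## Clone the public Salmon template workspace into the user's account
## (the workspace name below is a placeholder)
salmonWorkspace <- "Bioconductor-Workflow-Salmon"
cloneWorkspace(workspaceName = salmonWorkspace)

## Launch the workflow on the demo inputs bundled with the template
runWorkflow(workspaceName = salmonWorkspace)

## Check submission status and, when finished, download selected outputs
monitorWorkflow(workspaceName = salmonWorkspace)
getOutput(workspaceName = salmonWorkspace, dest_dir = "salmon_results")
```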
Whole metagenomic shotgun data analysis bioBakery is a metagenome analysis environment composed of Python-based tools, reference databases, and command-line-based workflows. It processes raw shotgun sequencing data into microbial community feature profiles, summary reports, and figures [9]. bioBakery's whole metagenome shotgun (wmgx) and visualization (wmgx_vis) workflows are implemented as an AnVIL workspace. The current version of AnVILWorkflow supports bioBakery version 3 [15]. While users can customize this workflow to a great degree, only six inputs are sufficient to run a standard, optimized version of this workflow. These inputs include:
- Name of the Trimmomatic adaptor type (for demo data, NexteraPE)
- Your project name
- Extension of the input files (for demo data, .fastq.gz)
- A table of your sequencing file (fastq) names stored in the Google Cloud Storage bucket
- Input file identifiers for paired-end sequencing (for demo data, _R1 and _R2)
The seven required databases are already linked to this workflow, and nine additional optional inputs are available for further customization. Optional inputs are for workflow customization, such as bypassing functional profiling (the default is false) and the maximum memory usage for different tasks (the default is 32GB for functional profiling by HUMAnN, 8GB for quality control by Kneaddata, and 24GB for taxonomic profiling by MetaPhlAn). This workflow uses call caching and preemptible instances by default for cost efficiency. Processing six paired-end demo samples (mean file size ~380MB) with the optimized default setting, without using preemptible instances, took about 5 hours and cost around $6.50. With preemptible instances, it can take longer but cost less. Compared to existing options such as Nephele [16], AnVILWorkflow allows a programmatic approach and more flexible customization options. Histopathology image processing PathML We implemented the hematoxylin-eosin (HE) stain normalization process of PathML as an AnVIL workspace. This workflow accepts an SVS file as input and returns the original and normalized images as PNG files. There are two required inputs: the Google Cloud Storage URI where the input SVS image file is stored, and the sample name. Processing one publicly available image (CMU-1_Small_Region.svs, 1.8MB) [17] with the default runtime (4 CPU, 16GB memory) took about 8 minutes and cost $0.01. This simple but robust analysis setup can support clinical use cases, such as pathologists who process a large number of images in a short time, by offering guidance and cross-validation options.
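The PathML workspace follows the same pattern, with only the two inputs described above to supply. The sketch below is again illustrative; the workspace name, input keys, and the exact table format expected by updateInput are assumptions, and the package vignette documents the authoritative interface.

```r
library(AnVILWorkflow)

## Locate and clone the PathML template workspace (names are placeholders)
AnVILBrowse("PathML")
pathmlWorkspace <- "PathML-StainNormalization"
cloneWorkspace(workspaceName = pathmlWorkspace)

## Two required inputs: the GCS URI of the SVS image and a sample name
inputs <- data.frame(
  name  = c("image_uri", "sample_name"),                        # assumed keys
  value = c("gs://my-bucket/CMU-1_Small_Region.svs", "CMU-1")   # placeholder URI
)
updateInput(workspaceName = pathmlWorkspace, inputs = inputs)

runWorkflow(workspaceName = pathmlWorkspace)
getOutput(workspaceName = pathmlWorkspace, dest_dir = "pathml_out")
```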
Conclusions The AnVILWorkflow package enables users to conduct complex and computationally intense analyses with minimal bioinformatics expertise, through well-established workflows within AnVIL and versatile cloud resources, directly from standard laptops using familiar R syntax. The major advantages AnVILWorkflow provides over existing approaches include 1) a minimal entry barrier, negating the need for software installations, preparation of properly versioned reference data, or construction and oversight of workflows, 2) leveraging flexible cloud computing resources without the need to learn or handle them directly, 3) user-friendly functions that provide enhanced information, and 4) greatly improved reproducibility and interoperability by seamlessly linking multiple analysis steps, conducted in both R and non-R based tools, within a single R vignette. However, there are still some limitations. For instance, certain customizations of the workflows are limited or require a more profound understanding of the workflows. Despite not being inherently more costly than an in-house server, the pay-per-use structure requires careful planning and management. The absence of an integrated versioning system in AnVIL workspaces requires users to manually monitor new versions. In conclusion, AnVILWorkflow proves most advantageous for analyzing samples in bulk with relatively simple workflows (i.e., single-stage workflow procedures) or for exploratory data analysis by non-technical users, particularly when employing well-established analysis workflows.
Platelet bioenergetics correlate with muscle energetics and are altered in older adults BACKGROUND. Physical function decreases with age, and though bioenergetic alterations contribute to this decline, the mechanisms by which mitochondrial function changes with age remain unclear. This is partially because human mitochondrial studies require invasive procedures, such as muscle biopsies, to obtain live tissue with functional mitochondria. However, recent studies demonstrate that blood cells are potentially informative in identifying systemic bioenergetic changes. Here, we hypothesize that human platelet bioenergetics reflect bioenergetics measured in muscle biopsies. Introduction Maintaining physical function is vital to sustaining independence of older adults, and declining strength and increased fatigability are often characteristics of aging that precede cognitive dysfunction and other physical disabilities (1-3). Consistent with this, assessments of physical function, such as gait speed, are strongly predictive of morbidity and mortality (3). Changes in mitochondrial function have long been associated with age-related functional decline. Data from a number of human populations and animal models demonstrate various alterations in mitochondrial morphology and content across different organs that are thought to underlie molecular mechanisms of aging, including cell senescence, oxidative stress, and chronic inflammation (4-8). However, the effect of
One barrier to routine measurement of mitochondrial function in large human cohorts is the necessity to obtain sufficient quantities of live tissue. Muscle biopsy studies remain the gold standard to study human mitochondria but are highly expensive and invasive. Alternatively, 31 P-MRS or near infrared spectroscopy (NIRS) can be used to noninvasively measure the kinetics of ATP generation or tissue oxygen consumption and perfusion, respectively. However, 31 P-MRS methodology requires expensive specialized equipment, access to a magnetic resonance magnet, and expertise in analysis and interpretation. Although NIRS is relatively less costly than 31 P-MRS, measurements can be affected by variable amounts of adiposity in human subjects as well as differences in skin pigmentation and blood flow (13,14). Recent studies have demonstrated that mitochondrial function in circulating blood cells can reflect tissue mitochondrial energetics (15)(16)(17). Further, mitochondrial function in circulating platelets and peripheral blood mononuclear cells (PBMCs) correlates with some clinical parameters and physical function, respectively (15,(18)(19)(20)(21)(22). However, it is unclear whether platelets directly reflect mitochondrial function measured in the muscle of older human adults and whether there are measurable changes in platelet bioenergetics in young versus older adults. Here, we hypothesized that platelet bioenergetics are altered with age, reflect skeletal muscle mitochondrial function measured by respirometry and 31 P-MRS, and are associated with clinical parameters of physical function in a population of older adults. We show that platelet bioenergetics in older adults correlate significantly with muscle mitochondrial function in the same cohort. Further, platelets from older adults demonstrate altered bioenergetics. The implications of these data are important for uncovering mitochondrial mechanisms of aging and for the use of platelet bioenergetics to serve as a supplement or potential surrogate to human muscle mitochondrial measurement. Results Platelet bioenergetic parameters reflect muscle mitochondrial function measured by muscle respirometry and 31 P-MRS. We first determined whether human platelet bioenergetics reflect mitochondrial function measured in muscle samples from the same individuals. We isolated platelets from individuals in the Health, Aging and Body Composition (Health ABC) cohort, which consists of older adults (88 ± 2 years; n = 32; Table 1). In these intact platelets, we assessed cellular oxygen consumption rate (OCR) by Seahorse extracellular flux (XF) analysis and calculated mitochondrial OCR by correcting for the measured nonmitochondrial OCR as previously described (18). A subset of these subjects also underwent 31 P-MRS to noninvasively measure skeletal muscle ATP kinetics and provided skeletal muscle biopsies for respirometry ( Figure 1). Table 2 shows the muscle bioenergetic data for all subjects with measurements by 31 P-MRS and respiration in skeletal muscle fibers from biopsy. In the subset of subjects with platelet measurements and concomitant muscle measurements, we assessed the association between platelet and muscle bioenergetics to determine whether platelet bioenergetics reflect muscle mitochondrial function. Platelet basal OCR showed an association with muscle ATP synthesis measured by 31 P-MRS (r = 0.420; P = 0.032; n = 26; Table 3 and Figure 2A). 
This correlation was statistically significant when platelet ATP-linked OCR (calculated as basal OCR minus proton leak) was used instead of basal OCR (r = 0.643; P = 0.004; Figure 2B). Platelet proton leak, maximal respiratory capacity, and basal glycolysis did not show a significant association with ATP synthesis measured by 31 P-MRS (Table 3). Muscle biopsies were also performed on a subgroup of the Health ABC cohort and muscle fiber respirometry was assessed. Comparison of platelet OCR to muscle fiber respiration in the same individuals (n = 23) showed that platelet maximal OCR correlated significantly with muscle maximal respiration (r = 0.595; P = 0.003; Table 4 and Figure 2C). In addition, platelet proton leak was significantly associated with muscle state 4 respiration (respiration in the presence of substrates for complex I but no ADP), a parameter of respiration driven by proton leak (r = 0.620; P = 0.002; Figure 2D). Platelet ATP-linked respiration showed a trend to correlation with muscle state 3 respiration (respiration in the presence of substrates and ADP), but this did not reach statistical significance after correction for multiple comparisons (r = 0.568; P = 0.005; Table 4). insight.jci.org https://doi.org/10.1172/jci.insight.128248 C L I N I C A L M E D I C I N E Platelets from older adults show greater proton leak and less ATPlinked respiration than platelets from young individuals. Given that platelet respiration reflected muscle respiration, we next compared basal glycolytic rate and mitochondrial OCR in intact platelets isolated from the cohort of older adults (88 ± 2 years; n = 32) with platelets isolated from a younger cohort (26 ± 5 years; n = 32). Demographics for young and older adult subjects are shown in Table 1. Basal OCR was lower in platelets from older adults compared with younger adults (107.4 ± 5.29 vs. 123.6 ± 6.083 pmolO 2 /min/5 × 10 7 platelets; P = 0.047; Figure 3A). We next inhibited ATP synthesis with oligomycin to measure OCR not linked to ATP production, which is traditionally attributed to proton leak across the inner mitochondrial membrane. Proton leak was significantly higher in older adults compared with the young adults (39.78 ± 2.70 vs. 30.34 ± 2.32 pmolO 2 /min/5 × 10 7 platelets; P = 0.010; Figure 3B). ATP-linked OCR (or efficient respiration), calculated as the difference between basal OCR and proton leak, was significantly lower in older adults compared with young adults (74.9 ± 4.61 vs. 112 ± 7.58 pmol O 2 /min/5 × 10 7 platelets; P = 0.001; Figure 3C). However, there was no significant difference in the maximal capacity of respiration between the 2 groups (177.4 ± 15.01 vs. 214.5 ± 15.57 pmolO 2 /min/5 × 10 7 platelets; P = 0.091; Figure 3D). To determine whether glycolysis was increased in the older adults, we next calculated the basal glycolytic rate by measuring the extracellular acidification rate (ECAR) of intact platelets, which could be inhibited by the glycolytic inhibitor 2-deoxyglucose (2-DG). There was no significant difference in basal glycolytic rate between the older and young adults (5.85 ± 0.77 vs. 6.19 ± 0.39 measured pH/min/5 × 10 7 platelets; P = 0.69; Figure 3E). Platelets from older adults show lower enzymatic activity of the electron transport chain complexes and higher uncoupling protein 2 expression. 
Platelets from older adults show lower enzymatic activity of the electron transport chain complexes and higher uncoupling protein 2 expression. To determine whether changes in mitochondrial proteins potentially underlie increased proton leak and lower ATP-linked respiration in older adults, we measured the protein levels and enzymatic activity of the platelet mitochondrial electron transport chain complexes. Within the electron transport chain, lower levels of complex III protein were observed in the older adults compared with the young adults, while there was no significant change in the protein levels of complexes I, II, IV, and V (Figure 4, A and B). Measurement of the individual enzymatic activity of each electron transport complex showed that the enzymatic activities of complexes II, III, and V were significantly lower in the older adults compared with the young adults (Figure 4C). When associations were tested between complex activities and platelet bioenergetic parameters, no significant correlation was found (Table 5). Uncoupling proteins (UCPs) are present in the inner mitochondrial membrane and allow the entry of protons into the mitochondrial matrix, resulting in both the generation of heat and attenuation of oxidant production (23, 24). To determine whether the increased proton leak observed in older adults was due to upregulation of UCPs, we measured the protein abundance of UCP2. Protein levels of UCP2 were significantly greater in the platelets from the older adults than young adults (Figure 4, D and E). Further, the protein level of UCP2 showed a significant positive correlation with proton leak in the older adults (r = 0.632; P = 0.001; Figure 4F).

Figure 1. Subject enrollment and completion of the study endpoints. Of 216 Health ABC subjects who were eligible for phone screen for muscle biopsy studies, biopsies were ultimately obtained from 44 subjects. Of these biopsies, 33 were ultimately used. The 44 subjects who underwent biopsy were screened for eligibility for 31P-MRS and platelet studies. The flowchart shows reasons for exclusion of subjects such that 31P-MRS was obtained on 26 subjects and platelets isolated from 32 subjects. PCr, phosphocreatine.

Platelet bioenergetic parameters correlate with parameters of physical function and fatigability. Because platelet respiration correlated with parameters of muscle respiration, we next assessed whether platelet bioenergetics in the older adults correlated with parameters of physical function and fatigability. These associations were tested in 28 subjects because 5 individuals did not complete the physical function tests or fill out the fatigability questionnaire. There was a positive correlation between platelet basal glycolytic rate and physical fatigability score (r = 0.451; P = 0.016; Figure 5A). This correlation was stronger after controlling for sex or race (Table 6). Additionally, increased proton leak was significantly associated with faster gait speed (r = 0.58; P = 0.0019; Figure 5B), and this relationship became more significant when controlled for BMI, sex, age, or race (Table 7). In contrast, no significant associations were found between platelet basal or maximal respiration and parameters of physical function (Tables 8 and 9).
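The correlations with physical function reported above, including the versions controlled for covariates such as BMI or sex, can be approximated with a simple residualization approach: regress both variables on the covariate and correlate the residuals. The sketch below is illustrative only (synthetic numbers, and the Bonferroni threshold of 0.05/3 used in the figure legends); the study itself performed these calculations in SPSS.

import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
n = 27
bmi = rng.normal(26, 4, n)                          # covariate
proton_leak = rng.normal(40, 8, n)                  # pmol O2/min/5e7 platelets
gait_speed = 0.6 + 0.004 * proton_leak - 0.005 * bmi + rng.normal(0, 0.05, n)

def partial_pearson(x, y, z):
    # Correlate the parts of x and y that are not explained by covariate z.
    rx = x - np.polyval(np.polyfit(z, x, 1), z)
    ry = y - np.polyval(np.polyfit(z, y, 1), z)
    return stats.pearsonr(rx, ry)

r_raw, p_raw = stats.pearsonr(proton_leak, gait_speed)
r_adj, p_adj = partial_pearson(proton_leak, gait_speed, bmi)

alpha = 0.05 / 3   # nominal alpha for 3 comparisons, as in the figure legends
print(f"unadjusted r = {r_raw:.2f} (P = {p_raw:.3f}), "
      f"BMI-adjusted r = {r_adj:.2f} (P = {p_adj:.3f}), alpha = {alpha:.3f}")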
Discussion

This study demonstrates that (a) platelet bioenergetic parameters correlate significantly with muscle mitochondrial function in the same cohort; (b) compared with young adults, platelets from older adults show an alteration in mitochondrial function characterized by higher proton leak, which is likely due to upregulation of UCP2; and (c) in the older cohort, greater platelet glycolysis and higher proton leak were associated with higher perceived physical fatigability and faster gait speed, respectively. Although this is the first report to our knowledge of bioenergetic measurements in platelets from healthy older versus young adults, the alterations in bioenergetics observed in platelets are consistent with changes observed in other tissues. For example, several studies in human skeletal muscle show decreased activity and content of citrate synthase and electron transport in older adults (10-12, 25). Additionally, studies in permeabilized human platelets showed a weak correlation between decreased complex II activity and age in individuals between the ages of 12 and 60 (26, 27). Notably, despite lowered levels of electron transport complex enzymatic activity, our data showed no difference in maximal respiratory capacity between the platelets from older and younger cohorts. Additionally, no correlation was observed between electron transport chain activity and platelet OCR, consistent with prior publications (28). This is likely due to the fact that with the exception of complex V, mitochondrial electron transport complexes in the platelet are expressed in excess of what is required to maintain respiration (28). Thus, when uncoupled maximal respiration is measured (in which complex V activity is uncoupled from the rest of the electron transport chain), the decrease in respiration observed basally is no longer present. However, one may speculate that although mitochondrial enzymes are in excess of what is necessary for respiration, under conditions of stress that damage these enzymes, older adults may be more susceptible to respiratory dysfunction compared with young adults given that enzymatic activities are already lower in older adults. Further studies that expose young and older platelets to oxidants or cell stressors are required to test this hypothesis. In addition to complex activity levels, the source of substrate and availability govern maximal respiration. For example, we have previously shown that in subjects with pulmonary arterial hypertension, maximal respiratory capacity is increased despite no change in the majority of the electron transport complexes, and this was due to a substrate switch from glucose to fatty acid oxidation (20). A limitation of the current study is that we did not measure substrate utility or availability in the platelets. More in-depth metabolomics studies are required to test this concept. We did not directly measure platelet ATP production in this study; however, it would be expected that the older adults would generate less ATP than the young adults by oxidative phosphorylation because ATP-linked respiration is decreased in platelets from older adults. This decrease in ATP-linked respiration is predominantly due to an increase in proton leak, which correlates with increased protein levels of UCP2. Although it is well established that UCPs decrease the efficiency of ATP production, several studies also suggest that low levels of mitochondrial uncoupling are beneficial to the cell (29, 30).
UCP1 is crucial to thermogenesis in brown adipose tissue (23). However, much less is known about UCP2, though all UCPs have been shown to be upregulated by reactive oxygen species and once expressed dissipate the high membrane potential of the mitochondrial inner membrane, which can decrease oxidant production by the electron transport chain (31-34). In this regard, the upregulation of UCP2 may serve as an adaptive response to mitigate oxidant production, which is known to increase with age. This beneficial effect of UCP2 expression may also be involved in the mechanisms that associate increased proton leak in our study with increased gait speed. We observed no significant increase of glycolysis in the older adults compared to the young adults despite a decrease in ATP-linked OCR. Although absolute glycolytic rate may not be different between the older and young adults, it is still possible that basal glycolysis may increase with age in the same individual. Longitudinal studies examining glycolytic rate are required to investigate this further. Notably, we did observe a significant correlation between glycolytic rate and fatigability score. It is interesting to speculate that this association is indicative of a shift from oxidative phosphorylation to the less efficient process of glycolysis, which may lead to an ATP deficit and contribute to fatigue. However, direct measurement of ATP production is required to definitively determine whether glycolysis is mechanistically linked to or causative of perceived physical fatigability. Molina and colleagues previously reported that maximal respiration of PBMCs (consisting of lymphocytes and monocytes) correlates with gait speed as well as an expanded short physical performance battery in older adults (21, 22). We did not observe any significant correlation between platelet ATP production and physical function. This could be due to differences in functionality of the participants as shown by Santanasto et al. (35). Alternatively, this could be due to a difference in demographics of the participants in the 2 studies. For example, the mean age of our study participants was greater (88 ± 2 years) compared to theirs (68 ± 4 years; refs. 21, 22). However, it is more likely due to biological differences in the cell types used (PBMCs versus platelets). Indeed, Chacko and colleagues have defined differences in the bioenergetic profiles of intact platelets versus other leukocytes (36), and it is further plausible that the aging process differentially affects each circulating cell type. This suggests that perhaps an index composed of bioenergetic measurements in both PBMCs and platelets may offer increased opportunity to more precisely assess bioenergetic health and its relation to physical function in aging. Here we demonstrate that parameters of platelet oxidative phosphorylation correlate with similar measures in skeletal muscle by respirometry and 31P-MRS. Notably, Molina and colleagues have shown in nonhuman primates that platelet and leukocyte respiratory capacity correlates with glucose metabolism measured noninvasively by 18F-fluorodeoxyglucose PET imaging (16). Our study corroborates the utility of circulating cells as a potential proxy for noninvasive imaging methods and extends this concept to the use of 31P-MRS in humans. Molina and colleagues also demonstrated in nonhuman primates that platelet maximal respiratory capacity correlates significantly with both maximal and state 3 respiration of permeabilized skeletal muscle fibers (15).
Figure 3. Platelet basal respiration is decreased and proton leak is increased in older adults. Bioenergetic parameters were measured by XF analysis in platelets isolated from young (n = 32) and older adults (n = 32). (A) Basal OCR was measured in intact platelets in the absence of any treatment. (B) Proton leak was measured in the presence of the ATP synthase inhibitor oligomycin. (C) ATP-linked OCR was calculated for each subject by subtracting proton leak from basal OCR. (D) Maximal OCR was measured in the presence of the protonophore carbonyl cyanide p-(trifluoromethoxy) phenylhydrazone (FCCP) and represents the maximal capacity of respiration. (E) Basal glycolytic rate is the rate of extracellular acidification of platelets that is sensitive to treatment with the glycolytic inhibitor 2-DG. Each dot represents an individual subject, and the lines denote the mean ± SD. Significance was calculated by unpaired Student's t test. P < 0.05 was considered significant.

The results presented here again are consistent with previous results and extend this observation to humans. Although measurement of muscle biopsies and assessment by 31P-MRS remain the gold standard in terms of methodology for the investigation of mitochondrial function, these techniques are complicated by their invasiveness (biopsy) and expense (31P-MRS). Our data, consistent with prior findings by Molina and colleagues (15, 16), are compelling from a methodological standpoint in that they suggest bioenergetic measurement from a simple, less invasive, and less expensive blood draw may serve as a powerful supplement or even surrogate for the measurement of mitochondrial function from muscle biopsies or by 31P-MRS. This would allow for the measurement of bioenergetics repeatedly over a longitudinal study, particularly in aged populations undergoing muscle loss in which repeated muscle biopsies are not an option. In conclusion, we provide evidence that platelet mitochondrial function is altered with age and platelet bioenergetic parameters correlate with markers of physical function, perceived fatigability, as well as muscle mitochondrial function. These data suggest that measurement of platelet bioenergetics may serve as a powerful translational tool to study the mechanistic links between mitochondrial function and physical decline with age. Moreover, the use of platelet bioenergetics as a surrogate for muscle biopsies or 31P-MRS may serve as a powerful clinical tool, enabling the design of large longitudinal studies in which mitochondrial measurements can be made more frequently to understand the role of the mitochondrion in aging and to monitor therapies to improve mitochondrial bioenergetics.

Methods

Materials. All chemicals were obtained from MilliporeSigma unless otherwise noted.

Study population. The study population consisted of 2 groups, young and older adults. The young adults consisted of 32 individuals (ages 18 to 35) recruited at the University of Pittsburgh via protocol 08110422 approved by the University of Pittsburgh Institutional Review Board (IRB). Young adults were recruited by advertisement, and inclusion criteria included being 18 years of age or older with no history of anemia, vascular disease, or any other diagnosed disease. Pregnant or lactating women were excluded.
Older adults were a subset of the national Health ABC prospective cohort (37, 38). Health ABC enrolled 3075 Black (41.7%) and White men and women (51.5%) aged 70 to 79 years between March 1997 and April 1998, who resided in the Memphis, Tennessee, and Pittsburgh, Pennsylvania, areas. Eligibility criteria included no self-reported difficulty walking a quarter mile, climbing 10 steps, or performing activities of daily living; no reported use of a walking aid; and no active cancer treatment. Exclusion criteria included … (37). The older cohort in the current study was a subset of the Health ABC cohort that included 32 participants who were aged 86 to 93 at the time of the study, who were from Pittsburgh, Pennsylvania, and who completed a muscle biopsy as part of the Health ABC study. In addition, the Health ABC participants for this ancillary study had to safely complete a 31P-MRS measurement. The study was approved by the University of Pittsburgh IRB (IRB960212), and written informed consent was obtained from all participants in accordance with the Declaration of Helsinki. See Table 1 for demographics of young and older subjects and Figure 1 for more information on subject enrollment and completion of endpoints.

Figure 5. Parameters of platelet bioenergetics correlate with parameters of physical function and fatigability. Parameters of physical function and fatigability were measured as described in the Methods in the cohort of older adults. Bioenergetic parameters were measured in intact platelets isolated from the same cohort. Platelet bioenergetic parameters were tested for an association with parameters of physical function and fatigability measured. (A) Pearson's correlation between platelet basal glycolytic rate and perceived physical fatigability score in 27 subjects. Pearson's r = 0.451; P = 0.016 (P < 0.016 considered statistically significant after Bonferroni's correction for multiple comparisons; Table 6). (B) Pearson's correlation between platelet proton leak and gait speed measured in 27 subjects. Pearson's r = 0.5801; P = 0.0019 (P < 0.016 considered statistically significant after Bonferroni's correction for multiple comparisons; Table 7).

Table 6. Pearson's and partial correlations between platelet glycolytic rate and physical function parameters.

Determination of ATP kinetics by 31P-MRS. Following an acute bout of knee extensor exercise, in vivo maximal mitochondrial ATP production (ATPmax) was determined using 31P-MRS. To quantify rates of mitochondrial ATP production, PCr recovery after exercise was used. Here, participants were free of unsafe metal or other implants, were free of bilateral joint replacements, and were able to lie in a supine position for 1 hour. The exercise protocol was performed in a magnetic resonance imaging magnet (3T TIM Trio, Siemens Medical Solutions) where participants laid supine with the right knee elevated at approximately 30°. Straps were placed over the legs, and a 2.5-inch surface RF coil tuned to 31P was placed over the quadriceps. Signal was collected by a hemisphere defined by the coil radius (1.25 inches). Participants kicked repeatedly as hard and as fast as they could for 2 bouts (30 and 36 seconds), each followed by a 6-minute rest. Phosphorus spectra were collected using a standard one pulse experiment to determine levels of PCr, ATP, Pi, and pH by integration using Varian VNMR 6.1C software (Varian Medical Systems) throughout exercise and recovery.
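ATPmax is derived from the PCr recovery kinetics described above. A common way to obtain it, sketched below in Python, is to fit a monoexponential recovery curve and multiply the rate constant by resting PCr. The study's own spectra were processed in Varian VNMR, so this is only an illustration of the calculation, with made-up numbers and the frequently used approximation ATPmax ≈ k_PCr × [PCr]_rest.

import numpy as np
from scipy.optimize import curve_fit

# Hypothetical post-exercise PCr time course (mM), sampled every 6 s.
t = np.arange(0.0, 300.0, 6.0)
pcr_rest, pcr_end_true, k_true = 28.0, 18.0, 1.0 / 45.0
rng = np.random.default_rng(3)
pcr = pcr_end_true + (pcr_rest - pcr_end_true) * (1.0 - np.exp(-k_true * t)) \
      + rng.normal(0.0, 0.3, t.size)

def monoexp_recovery(t, pcr_end, delta_pcr, k):
    # PCr(t) = PCr_end + dPCr * (1 - exp(-k * t))
    return pcr_end + delta_pcr * (1.0 - np.exp(-k * t))

popt, _ = curve_fit(monoexp_recovery, t, pcr, p0=(17.0, 9.0, 0.02))
_, _, k_fit = popt
atp_max = k_fit * pcr_rest      # approximation for maximal ATP synthesis rate
print(f"k_PCr = {k_fit:.4f} 1/s, estimated ATPmax = {atp_max:.3f} mM/s")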
Platelet isolation. Platelets were isolated from peripheral blood samples obtained by standard venipuncture without a tourniquet, to avoid platelet activation as previously described (18, 20). Briefly, prostaglandin I2 (1 μg/mL) was added to whole blood (to further prevent artifactual activation) before it was centrifuged (150 g, 10 minutes) to isolate platelet-rich plasma. A subsequent centrifugation step (1500 g, 10 minutes) yielded isolated platelets, which were washed with erythrocyte lysis buffer and resuspended in modified Tyrode's buffer (20 mmol/L HEPES, 128 mmol/L NaCl, 12 mmol/L bicarbonate, 0.4 mmol/L NaH2PO4, 5 mmol/L glucose, 1 mmol/L MgCl2, 2.8 mmol/L KCl, pH 7.4). The purity of the isolated platelet sample was determined by measurement of CD41a expression using flow cytometry as previously described (18, 20).

Platelet bioenergetics measurements. OCR and ECAR were measured in freshly isolated platelets (5 × 10^7 platelets/well) within 2 hours of blood draw by XF analysis (XF24, Agilent Seahorse Technologies) as previously described (18). After measurement of basal OCR, 2.5 μmol/L oligomycin A, 0.7 μmol/L FCCP (to measure maximal OCR), and 15 μmol/L rotenone were consecutively added. Mitochondrial OCR was calculated by subtracting the rotenone-insensitive rate from the basal, proton leak, and maximal OCR. ATP-linked OCR was calculated by subtracting the proton leak from basal OCR. Basal glycolytic rate was calculated by determining the basal ECAR that was sensitive to 2-DG (100 mmol/L). The assay was performed in unbuffered Dulbecco's modified Eagle medium supplemented with 25 mmol/L glucose, 1 mM pyruvate, and 2 mmol/L glutamine. All rates were normalized to platelet number.

Western blots. Mitochondrial protein expression was measured by Western blot as previously described (18, 20). Antibodies for complex II (MS204) and complex V (MS502) were purchased from Mitoscience; complex I (ab14711), complex III (ab14745), and complex IV (ab14744) from Abcam; and citrate synthase (sc30538) and UCP2 (sc-6525) from Santa Cruz Biotechnology. Blots were imaged with a LI-COR Odyssey imaging system and analyzed using LI-COR Odyssey infrared imaging software version 3.0. Blots were reprobed with integrin αIIβ antibody (sc-166599, Santa Cruz Biotechnology) for normalization.

Electron transport chain complex activity. Enzymatic activities of complexes I, II, III, IV, and V and of citrate synthase were measured by spectrophotometric kinetic assays as previously described (18, 20).

Muscle biopsy and permeabilized fiber bundle preparation. Skeletal muscle biopsy procedures occurred within 6 months of the blood draw for platelet measurements and fatigability questionnaire administration. Participants fasted overnight and did not engage in physical exercise for 48 hours before the biopsy. Percutaneous muscle biopsy samples were obtained under local anesthesia (2% buffered lidocaine) at the University of Pittsburgh's Clinical Translational Research Center and immediately prepped for mitochondrial respiration measurements as previously described (39, 40).

Skeletal muscle mitochondrial respiration. After permeabilization, the muscle fiber bundle was placed into the respirometer chamber of an Oxygraph 2K (Oroboros) and the assay run as previously described (39, 40). Once a stable baseline was acquired, 5 mmol/L pyruvate, 2 mmol/L malate, and 10 mmol/L glutamate were added to measure state 4 respiration (oxygen consumption driven by proton leak).
ADP (54 mmol/L) was added to elicit complex I-supported oxidative phosphorylation. Then, 10 mmol/L succinate was added to evaluate complex I- and complex II-supported state 3 respiration. Finally, 2 μmol/L FCCP was added to determine maximal uncoupled respiratory capacity. Cytochrome c was used to check the quality of the muscle fiber preparation and assess the integrity of the outer mitochondrial membrane. Steady state oxygen flux for each respiratory state was determined and normalized to dried bundle weight.

Measurements of physical function and perceived fatigability. The Long Distance Corridor Walk (LDCW), an endurance walking test, was administered to the Health ABC cohort as previously described (41). The participants were asked to walk 10 laps around traffic cones placed 20 meters apart in a dedicated corridor for a total of 400 meters while wearing a portable oxygen consumption device (COSMED K4b2). They were given a 2-minute warm-up period where they were instructed to cover as much ground as possible and then asked to perform the LDCW as quickly as possible at a pace that could be maintained. Time to walk 400 meters was recorded and used to calculate gait speed (total meters walked/total time in seconds). Perceived physical fatigability was also measured in the Health ABC cohort using the Pittsburgh Fatigability Scale (PFS) (42). The PFS was self-administered within 2 hours of the blood draw and within 6 months of the skeletal muscle biopsy. This 10-item questionnaire assesses whole-body tiredness as a function of duration and intensity of activity. Each item is scored from 0, indicating no fatigue, to 5, indicating extreme fatigue. The 10 items are summed, with PFS scores ranging from 0 to 50, with higher scores indicating higher fatigability. Of the 32 total subjects, 5 subjects did not complete the walk or fill out the fatigability questionnaire.

Statistics. Comparisons of older versus young adults were made using unpaired 2-tailed Student's t test, but comparison of ECAR was made using a nonparametric Mann-Whitney test because of nonnormal distribution (evaluated by the Shapiro-Wilk and D'Agostino-Pearson normality tests). Using data from the older group, Pearson's correlations were used to determine the relationship among variables of platelet bioenergetics, muscle energetics, and physical function parameters. A 2-tailed P value was calculated, and Bonferroni's posttest for multiple comparisons was used to determine the appropriate α value for each set of comparisons. P values below 0.05 are highlighted in each table. Statistical significance is determined by the α value calculated for each individual experiment and is denoted in the respective table. P values of less than 0.05 were considered significant. All statistics and analyses were calculated using IBM SPSS (v22), and figures were generated using Prism 7 (GraphPad Software Inc.). Values reported in the text are mean ± SEM.

Study approval. The study population consisted of 2 groups: young and older adults. The young adults were recruited at the University of Pittsburgh via protocol 08110422, approved by the University of Pittsburgh IRB. Older adults were a subset of the national Health ABC prospective cohort (37, 38) and were enrolled as part of an ancillary study approved by the University of Pittsburgh IRB (IRB960212).
Written informed consent was obtained from all participants in accordance with the Declaration of Helsinki.

Author contributions

AB and CGC contributed by conducting the experiments, acquiring and analyzing the data, and writing the manuscript. AJS and GD contributed by conducting the experiments and acquiring and analyzing the data. PMC, NWG, BHG, and ABN contributed by designing experiments and providing the reagents as well as editing the manuscript. SS contributed by designing the research studies, analyzing the data, providing the reagents, and writing the manuscript. SMN advised on and made the statistical calculations.
Patient empowerment as a promising avenue towards health and social care integration: results from an overview of systematic reviews of patient empowerment interventions

Background: The ever increasing complexity of health care, paired with the increasing proportion of chronic patients and other factors, is clearly exacerbating the need for an all-round, well-coordinated health (and social) system. This need for better integration of services has long sparked interest in health system research, and in the last 15 years this area has expanded so that it now includes the increasingly active role of patients, the field of patient empowerment. Although it has been strongly developed for over a decade now, the field is still far from consistent in terms of conceptualisations, categorisations and analysis. The results from the EMPATHiE project, obtained by conducting a thorough review of systematic reviews analysing patient empowerment interventions targeting chronic conditions, aim to provide an overview of the field and to advance our common understanding of the role that patients as active players can have in the future development of an integrated health (and social) system.

Objectives: To identify the effective empowerment interventions targeting chronic patients (chronic respiratory diseases (COPD or asthma); chronic cardiovascular diseases; diabetes mellitus (type 1 and 2); severe mental illness (schizophrenia or chronic depression); complex patients (multi-morbidity); or health or social professionals working with the described chronic patients). We also aimed to describe the main contextual factors that help or hinder their implementation.

Results: The search identified 101 SRs of interest (corresponding to more than 2300 individual studies). A descriptive analysis detected that most of the interventions reported in the studies were addressed to patients at the micro or meso level. Within a general positive tendency (when compared to usual clinical centred care), some specific interventions emerge as the most effective: self-management support interventions across all conditions, and different formats of patient education for diabetic patients. Recent innovative practices (such as virtual interactive platforms and tele-monitoring through smartphones) present a positive tendency, mainly in diabetes and cardiovascular conditions. Finally, systemic changes regarding the model of care (such as the chronic care model) seem to yield positive results.
Similar interventions report different levels of effectiveness, which can be partially explained by multiple factors such as the targeted condition, specific components of the intervention, patient and provider characteristics, contextual factors and the outcome measures used. The study of the effect appears to indicate that, to a significant degree, success and failure factors are related to the targeted behaviour, which in turn is mediated by the type of condition in which it is applied.

Conclusion: Interventions targeting patient empowerment tend to present positive results across several types of outcomes. Self-management support interventions and some types of patient education formats presented the most conclusive evidence of effectiveness. Recent innovative practices (such as IT-based platforms) present a positive tendency but still need further research, particularly regarding the ideal combination between more traditional care and these innovative practices. Practical implications for policy and clinical organization will be discussed during the presentation. Stronger evaluative work on the effectiveness of meso- and macro-level initiatives of patient empowerment is needed. Overall, patient empowerment has opened a promising avenue towards health care (and social) integration.

Keywords: chronic diseases; patient empowerment; self-management

Methods: An overview of systematic reviews (SR) of empowerment interventions for patients with chronic conditions from 2000 to 2013 was conducted for EMPATHiE (EU project on patient empowerment). Selected articles were extracted, collecting intervention characteristics, outcome measures and scientific quality (AMSTAR). The effectiveness of the interventions was measured in terms of patient empowerment related measures, clinical outcomes, quality of life measures and use of health services. The success and failure factors were identified with a mixed methodology: results from meta-analysis and subgroup analysis, and qualitative review of the conclusions of the SRs' authors. The interventions and identified factors are categorized by type of intervention, targeted condition, and level of evidence.
Long-term Attitude Dynamics of Space Debris in Sun-synchronous Orbits: Cassini Cycles and Chaotic Stabilization Comprehensive analysis of space debris rotational dynamics is vital for active debris removal missions that require physical capture or de-tumbling of a target. We study the attitude motion of used rocket bodies acknowledgedly belonging to one of the categories of large space debris objects that pose an immediate danger to space operations in low Earth orbits. Particularly, we focus on Sun-synchronous orbits (SSO) with altitudes in the interval 600-800 km, where the density of space debris is maximal. Our mathematical model takes into account the gravity gradient torque and the torque due to eddy currents induced by the interaction of conductive materials with the geomagnetic field. Using perturbation techniques and numerical methods we examine the deceleration of the initial fast rotation and the subsequent transition to a relative equilibrium with respect to the local vertical. A better understanding of the latter phase is achieved owing to a more accurate model of the eddy currents torque than in most prior research. We show that SSO precession is also an important factor influencing the motion properties. One of its effects is manifested at the deceleration stage as the angular momentum vector oscillates about the direction to the south celestial pole. amassed quite a number of large debris objects, posing a real threat to space activities. At present, the SSO region is characterized by the highest debris density and requires to be cleaned (Anselmo and Pardini (2016)). Different aspects of active debris removal (ADR) missions are brought up in Bonnal et al (2013), Van der Pas (2014). One of the generally accepted ADR scenarios is tugging debris objects to the lower orbits, whereupon they burn in the atmosphere or fall to the Earth (Aslanov and Yudintsev (2013)). Most ADR techniques depend substantially on the character of the debris object's rotational dynamics, hence much effort has been spent lately to determine the rotation parameters through groundbased observations (Koshkin et al (2016); Kucharski et al (2014); Lemmens et al (2013);Šilha et al (2017); Santoni et al (2013); Yanagisawa and Kurosaki (2012)). At the same time, much attention has been paid to studying space debris rotational dynamics theoretically (Gomez and Walker (2015); Lin and Zhao (2015); Ojakangas et al (2012); Praly et al (2012); Albuja et al (2015); Sagnieres and Sharf (2017)). According to observation data (Šilha et al (2017)), there are two major types of large debris objectsdefunct satellites and rocket bodies. Although much of what is discussed in this paper regarding the longterm attitude motion evolution is applicable to both classes of debris objects, there are also distinctions, which require separate treatment. For this reason we shall here confine ourselves to the dynamics of the rocket bodies, whereas the defunct satellites story is told in (Efimov et al (2017b)). Simulation of rotational dynamics for a typical object of the rocket bodies class (Ariane 4 H10 stage) is conducted in (Praly et al (2012)). The model we use in our study comprises the same key factors as in (Praly et al (2012); Gomez and Walker (2015)) -gravity gradient torque and the torque due to eddy currents. As did Lin and Zhao (2015) we also take into account the orbit precession, which is responsible for remarkable dynamical effects unexamined in previous studies. 
Besides that, when calculating the torque due to eddy currents we employ a more accurate formula for eddy currents torque proposed in Golubkov (1972) and Martynenko (1985), which includes terms describing the influence of orbital motion that are considered small for fast rotations and are often neglected. The fact that rotational dynamics at 500 − 1000 km altitudes is substantially influenced by torques due to eddy currents became clear immediately after the first artificial Earth satellites launches (e.g., Smith (1964); Ormsby (1967)). It so happened, however, that when dealing with this phenomenon many researchers were mainly interested in fast rotations, whose orbital period is significantly greater than the rotation period. The complete formula for eddy currents torque allows correct description of all stages of the rotational dynamics evolution, including that when the angular velocity is comparable to the mean motion. Moreover, even for relatively large angular velocities (10-50 times greater than the mean motion) these terms can cause significant changes in the rotational axis direction for prograde spins. As in prior research we neglect other environmental torques, which can be done for the chosen class of objects. It turns out that the attitude motion evolution can be divided into three stages: transition to the rotation about the axis with the greatest moment of inertia (so called "flat" or "principal axis" rotation), exponential deceleration of angular velocity, and the stage of temporary slow chaotic dynamics. During the first relatively brief stage, the motion is primarily determined by internal dissipation. In the second stage, angular velocity decays exponentially due to eddy currents. When the angular velocity becomes comparable to the mean motion, the attitude dynamics begins to seem chaotic. This chaos, however, is temporary in the case of rocket bodies dynamics. It results typically in the stable relative equilibrium of the object with respect to local vertical (more exactly, the final regime corresponds to small oscillations about relative equilibrium). The paper is organized as follows. Section 2 describes the main assumptions of our model and the equations for gravity gradient torque and torque due to eddy currents. Section 3 presents the analytical study of the debris objects attitude motion evolution. We derive the evolution equations and introduce the means of their geometric interpretation in terms of angular momentum direction. At the end of this section we also provide the classification of the long-term evolution scenarios. Section 4 contains the simulation results validating the conclusions drawn from the analytical study and providing an understanding of the system's characteristic behavior in the stage of temporary chaos. Finally, the last section summarizes the results obtained for the characteristic evolution of large debris object rotational dynamics in SSO. Mathematical model of a debris object rotational dynamics in SSO Consider an object in a circular geocentric orbit of radius R O and inclination i. The Earth's oblateness causes the orbit's precession with angular velocity where R E = 6378.245 km is the Earth's mean equatorial radius, µ G = 3.986 · 10 5 km 3 /s 2 is the gravity parameter of the Earth, J 2 = 1.082626 · 10 −3 is the first zonal harmonic coefficient in the expansion of the Earth's gravity field. Our model pertains to SSO, where cos i < 0 and, consequently, n Ω > 0, i.e. the longitude of ascending node increases. 
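As a rough numerical illustration of the orbital setting just described, the sketch below evaluates the standard first-order (J2) node precession rate and the characteristic dipole field strength for a generic SSO in Python. The altitude and inclination are placeholders rather than the values of Table 1, and the expressions are the textbook forms consistent with the constants quoted above, not a reproduction of the paper's own (unextracted) equations.

import numpy as np

mu_G = 3.986e5            # km^3/s^2, Earth's gravity parameter
R_E = 6378.245            # km, mean equatorial radius
J2 = 1.082626e-3          # first zonal harmonic coefficient
mu0 = 1.257e-6            # N/A^2, magnetic constant
mu_E = 7.94e22            # A*m^2, Earth's magnetic dipole moment

h_km, i_deg = 720.0, 98.3          # placeholder SSO altitude and inclination
R_O = R_E + h_km

omega_o = np.sqrt(mu_G / R_O**3)   # mean motion of the circular orbit, rad/s
# First-order J2 drift of the ascending node; positive when cos(i) < 0.
n_Omega = -1.5 * J2 * (R_E / R_O) ** 2 * omega_o * np.cos(np.radians(i_deg))
# Characteristic dipole field magnitude along the orbit, B* = mu0*mu_E/(4*pi*R_O^3).
B_star = mu0 * mu_E / (4.0 * np.pi * (R_O * 1e3) ** 3)

print(f"mean motion     : {omega_o * 1e3:.3f} mrad/s")
print(f"node precession : {np.degrees(n_Omega) * 86400:.3f} deg/day")
print(f"B* along orbit  : {B_star * 1e6:.1f} uT")

For such an altitude/inclination pair the node precession comes out close to one degree per day, i.e. roughly one revolution of the orbital plane per year, which is the defining property of a Sun-synchronous orbit.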
Argument of latitude u varies as a linear function of time: where ω D = 2π/T D , T D is the draconic period of the object's revolution around the Earth (the time between two consecutive passages through the ascending node). Employing the formula for draconic period, given in Vallado (2007), we obtain: where ωo is the mean motion for the circular orbit of radius R O in the central gravity field with parameter µ G . Let us assume that the ellipsoid of inertia of the considered object is close to elongated ellipsoid of rotation. This assumption holds for rocket bodies, which are the primary target of this study. As in prior research (Gomez and Walker (2015); Lin and Zhao (2015); Praly et al (2012)) when modeling the rotational dynamics with respect to object's center of mass, we shall take into account gravity gradient torque M G and torque due to eddy currents M EC . Gravity gradient torque acting on the object in the Earth's gravity field is given by the formula (Beletsky (1966)): where J is the inertia tensor of the object, R O is the vector from the center of the Earth to the object's center of mass O. Torque due to eddy currents can be expressed as (Golubkov (1972); Martynenko (1985)): where S is the magnetic tensor of the object, B is the magnetic field, the derivativeḂ is calculated in a non-rotating reference frame with the origin at point O. Geomagnetic field is modeled as a field of dipole placed into the center of the Earth: where µ 0 ≈ 1.257 · 10 −6 N·A −2 is the magnetic constant, µ E ≈ 7.94 · 10 22 A·m 2 is the Earth's magnetic dipole moment, k E is the dipole direction. In Section 3, where the evolution equations are derived, we assume for simplicity that the dipole is directed along the Earth's rotation axis ("axial" dipole model). In Section 4 we validate this assumption by carrying out simulations with the use of a more precise model ("inclined" dipole, making an angle δµ = 11 • 33 with the Earth's rotation axis). It is shown in the Section 4.2 that within the accuracy of the averaging procedure the dipole model simplification is valid and allows studying the secular effects in the object's motion using the evolution equations obtained for the "axial" dipole model. The initial motion is assumed to be a rotation about the axis with the greatest moment of inertia. The initial angular velocity absolute value is specified in Section 3.6 and assumed to be much greater than the mean motion ωo. Such regime sets in quite fast under the influence of internal dissipation due to the motion of residual fuel in the fuel tanks of the rocket body (Ojakangas et al (2012); Efimov et al (2017a)). The parameters used in simulations are listed in Table 1. We shall use several reference frames with the common origin in the object's center of mass O. OXY Z is a semi-orbital reference frame: axis OY is perpendicular to the orbital plane, axis OZ is parallel to the vector from the Earth's center to the ascending node, axis OX is directed along the object's center of mass velocity as it passes the ascending node (Fig. 1). Oxyz is a reference frame bound to the vector of the object's angular momentum with respect to its center of mass L: axis Oy goes along L, axis Ox lies in the orbital plane (Fig. 2). The attitude of Oxyz with respect to OXY Z is described by angles ρ, σ (let us note, that given the values of these angles, we define the direction of the angular momentum L as well). Ox y z is a body-fixed frame with the axes directed along the object's principal axes of inertia. 
For simplicity we neglect in this section the small asymmetry of the object. Thus, the inertia tensor with respect to Ox y z is diagonal: Ascending node Remark : The assumption of dynamical symmetry is not restrictive. Secular evolution of the attitude motion in the case of triaxial ellipsoid of inertia is described by exactly the same equations with slightly modified parameters (see Section 3.4). Numeric experiments show that during the stage of exponential deceleration vector L remains virtually perpendicular to the object's symmetry axis. It helps simplifying the mathematical model: we further assume that Oy is always directed along L and thus coincides with Oy (this approach allows rigorous justification, which is omitted here). Let ψ be a rotation angle around Oy, which describes the attitude of the body frame Ox y z with respect to Oxyz. When ψ = 0 the two frames coincide with each other (Fig. 2). Let us denote the unit vectors of the introduced reference frames by e ξ , where the lower index ξ refers to the corresponding coordinate axis ξ ∈ {X, ..., x..., x , ...}. The unit vector ey can also be denoted by e L to emphasize that it is directed along L. Let us introduce two transformation matrices: where Γ transforms vectors from Oxyz to the body frame Ox y z , and Γ transforms vectors from semi-orbital reference frame to Oxyz. To write down the equations of motion we choose τ = n Ω t as independent variable. "Conservative evolution" (M EC = 0) The combined influence of the gravity gradient torque and the orbit evolution on the rotational motion of a satellite was studied in (Cochran (1972), Henrard et al (1987)). In this case the magnitude of the angular momentum vector is an approximate integral of motion. Direction of L with respect to the semi-orbital frame is described by the equations: where The dimensionless variable ω in (5) denotes the ratio of the current angular velocity and ω * . For typical SSO ω * ∼ 100 • /s, which is greater than observed angular velocities of rocket bodies immediately after separation. Therefore, without loss of generality, the dimensionless angular velocity ω will be assumed in our study to be less then unity. Equations (4) have stationary solutions, which are referred to as Cassini states (Henrard et al (1987)). It can be shown that for there exist four Cassini states: three stable and one unstable (Fig. 3). Fig. 3: Cassini states and Cassini cycles If we draw trajectories of the unit vector on the surface of a sphere S 2 , the separatrices proceeding out of the unstable equilibrium divide this surface into three regions (Fig. 3). Depending on positions of these regions with respect to the orbital plane, we shall denote them by R U (upper), R M (middle), and R L (lower). The stable Cassini states belonging to these regions are denoted by P U , P M , and P L respectively. The unstable Cassini state is denoted by P S . The values of p in the Cassini states are roots of the equation: For nearly polar retrograde orbits approximate expressions for the roots of the equation (7) can be easily obtained as: Let us use the value h of Hamiltonian H along the corresponding solution and the value of the dimensionless angular velocity ω as parameters in the solution family of (4): We shall refer to periodical solutions of (8) as Cassini cycles. They are represented by closed curves on the sphere S 2 around the stable Cassini states (Fig. 3). Fig. 
4: Cassini cycles and separatrices dividing the regions R U , R M , and R L for different values of dimensionless angular velocity ω. Cases ω = 0 and ω = 1 are degenerate. At ω = 1 Cassini states P U and P S merge, and R U region vanishes. For ω → 0 the width of R M region tends to zero. Let us consider the values that the Hamiltonian h(ω) can take on the solutions of (8). The maximum h M (ω) and the minimum h L (ω) values of the Hamiltonian correspond to the stationary solutions P M and P L , respectively. In the region R U the minimum of the Hamiltonian h U (ω) is reached on the stationary solution P U . Separatrices have the same value of the Hamiltonian h = h S (ω) with the unstable stationary solution P S . It follows that for the trajectories enclosing P M (i.e. trajectories belonging to R M ) hamil- Vector L moves along a Cassini cycle with a period, which is calculated as follows Here p max and p min are maximum and minimum value of p for a given cycle, and designation I k is used for integrals where p 1 , ..., p 4 are roots of the equation R 4 (p) = 0, . The values of integration limits in (9) for cycles in R L and R M are p min = p 1 , p max = p 2 ; the corresponding values for cycles in R U are p min = p 3 , p max = p 4 (rational roots of (11) are arranged in ascending order of magnitude). Analytic expressions for I k are given in Appendix. 3.3 Derivation of evolution equations describing the eddy currents torque impact: averaging along the orbital motion and rotation about the center of mass Let us introduce the dimensionless torque due to eddy currents: where B * = µ 0 µ E / 4πR 3 O is the characteristic magnitude of the magnetic field along the orbit, S * is the characteristic value of the magnetic tensor components (it is supposed that in the body frame S = S * Σ , Σ = diag(1, 1, λ)). Let us denote by B = B/B * the dimensionless vector of the magnetic field, whose components in the semi-orbital frame are given by: To describe the evolution of rotation accounting for M EC effect, we introduce the averaged equations analogous to (4): where To study the secular effects in the attitude motion with the use of the equations (13), we need to obtain an expression for the averaged dimensionless torque M EC ψ,u . For convenience we shall represent the M EC as the sum of two terms, which will be averaged separately: The term can be called a dissipative component, as it causes the slowing down of the object's rotation (Ormsby (1967)). The second term is due to the change of magnetic field as the object moves along its orbit: Remark : It follows from ω D ω * that χ 1 and |M EC,1 | |M EC,2 | at the stage of fast rotation (ω ∼ 1). Nevertheless, our numeric experiments show that if the influence of M EC,2 is neglected, there appears a significant discrepancy between the solutions of non-averaged equations and solutions of (13), which arises long before the moment when the decelerated angular velocity value becomes comparable to ω D . Let us start the averaging procedure with the first term of M EC . Introducing for vector B the matrix we shall transform the expression for M EC,1 as follows: Supposing that the components of magnetic field vector are written in the semi-orbital reference frame, we average the expressions (17) over ψ: Taking into account where E 3 is the identity matrix, we obtain the following expression for M EC,1 ψ : Averaging (18) along the orbital motion yields: Let us proceed to the averaging of the second term of the M EC torque. 
In the expression for M EC,2 we shall also replace the dimensionless vector B by the matrixB : Averaging (20) yields: The following relations are satisfied: B 2 x u = 1 4 sin 2 i 9 2 + cos 2 σ , BxBz u = 1 2 sin i cos σ sin ρ cos i + 1 2 sin i cos ρ sin σ . Using the relations (21), we obtain: The equations (13) are of instrumental value for us. We shall use them to construct evolution equations, describing the rotational motion of the object at long time intervals. It may be difficult to draw definite conclusions about the properties of motion directly from the equations (13). However, it is worthwhile noticing that the last equation in the system (13) allows writing down the following inequalities, characterizing the changes in value of the dimensionless angular velocity during the object's fast rotation (Sarychev and Sazonov (1982)): Inequalities (23) become invalid when the magnitude of angular velocity becomes comparable to ω D . Averaging along Cassini cycles For small ε the behavior of variables σ and p in solutions of the system (13) can be described as a Cassini cycle with slowly changing parameters h, ω. Let us write the equations for h, ω and average them along the solutions of (8): We shall refer to (24) as evolution equations. For convenience let us write the right-hand sides of equations (24) as sums of integrals I k , which were introduced previously by the equation (10): where Upper index in c k denotes the corresponding component of the torque due to eddy currents (14). All coefficients that are not listed here equal zero. Remark : In general case an object may be asymmetrical. Let us denote its principle moments of inertia by A , B , C (A ≥ B ≥ C , A > C) and by S x x , S y y , S z z the diagonal components of its magnetic tensor written in the principal axes of inertia (magnetic tensor itself in these axes does not have to be diagonal). Using the evolution equations (25) to study the secular effects in the attitude dynamics of such object, requires the "effective" parameters A, C, S x x , S z z , which are calculated as follows: These effective parameters are then used to calculate the values of all the auxiliary quantities in (25). Evolution equations and qualitative analysis of large debris objects' dynamics for fast rotations about the center of mass For better understanding of the rotational motion evolution, let us draw phase portraits for the system (24). In order to see how far a solution goes into one of the regions R L -R U , we shall use "relative" variablesh instead of h:h We shall also use the auxiliary valueω of angular velocity of the object nondimentionalized by mean motion ωo. It is related to the previously introduced ω * by the formula: Figure 5 shows the phase portraits in the space ω,h , which describe the long-term evolution of Cassini cycles. The interval of angular velocities here corresponds to the applicability range of the averaged equations (25), i.e. from angular velocities comparable to mean motion (ω ∼ 1) to critical angular velocity value, at which two Cassini states vanish (ω = 1). As the real values of angular velocity are usually much smaller than ω * , this practically covers all possible variants of the exponential deceleration stage. Generally, trajectories of the system depend on the object's parameters (the depicted case corresponds to the set listed in Table 1). However, this dependence is weak and Fig. 5 correctly reflects the qualitative evolution of Cassini cycles in SSO for most objects. Fig. 
5: Evolution of "osculating" Cassini cycles' parameters Let us analyze the acquired results. Because of exponential deceleration due to eddy currents torque, all trajectories head towards the zone of lower angular velocity values (Fig. 5). Map of R M reveals that most of the trajectories starting in this region do not leave R M and converge towards the Cassini state P M . The trajectories entering this region through the separatrices S also tend to P M . Most of the trajectories in R L are directed towards the separatrix and cross it, leaving R L . The dynamics in the region R U is most interesting. Typical trajectories in this region are "S"-shaped. The downward flow of trajectories in the region of high angular velocities (ω 700) exists mainly as an artifact of normalization (27). Because at ω = 1 region R U vanishes and point P U merges with the separatrix (Fig. 4) for ω 1 the apparent general direction of trajectories in Figure 6 is defined by the rapid inflation of R U . The dynamics in the rest of the region R U is characterized by the change in trajectories' flow direction from upward to downward atω ∼ 10 ÷ 50. It is governed by the interplay of two components in eddy currents torque: dissipative M EC,1 and orbital M EC,2 given by (15) and (16) respectively. As |M EC,1 | ∝ ω and M EC,2 does not depend on ω, the evolution for very fast spins is defined by the dissipative component of the eddy currents torque, which drives the angular momentum towards the orbital plane. Consequently forω ∈ (50, 700) the flows of trajectories in the R L and R U regions are directed towards the separatrices and look very much alike. The orbital component of the eddy currents torque, as seen from (22), has a part directed along e Y , which is close to direction towards P U (Fig. 4). Therefore, this component spins the debris object up about the orbital normal and results in deflection of trajectories in R U towards P U atω ∼ 10 ÷ 50. It should be noted, that this interval corresponds to relatively fast spins for which |M EC,1 | |M EC,2 |. However, near the separatrix the directions of these torques turn out to be such that M EC,1 mainly affects angular velocity value, while the direction of rotational axis is primarily influenced by M EC,2 . Thus the orbital component of eddy currents torque starts to have a noticeable effect on attitude dynamics long before the value of the angular velocity becomes comparable to mean motion. In other words, the orbital component of the eddy currents torque keeps most of the trajectories in R U from crossing the separatrix, while in R L it only increases the rate at which trajectories approach the separatrix. To illustrate the transitions of phase trajectories between the regions, phase portraits (Fig. 5) are joined together along the separatrices, as shown in Figure 6. Directions of transitions are indicated in Table 2. Remark : ω t1 ≈ 0.03, ω t2 ≈ 0.14, ω t3 ≈ 0.2, and ω t4 ≈ 0.27 are the values of the dimensionless angular velocity which separate the attracting and repelling segments of the border S of the phase portraits Fig. 5. Transitions in the odd columns have quasi-probabilistic character. Most of the trajectories starting in R M and R U remain in respective regions. In contrast to this, almost all trajectories from the region R L do cross the separatrix and transit to R M or R U . 
This transition has a quasi-probabilistic nature and the probabilities of a trajectory going to either one of those regions depend on the object's parameters: a greater relative value of the eddy currents torque leads to a greater probability of transition into R U . The middle region R M quickly becomes very narrow (Fig. 4) because of the exponential deceleration of ω. Thus, transition into region R M resembles a capture into oscillations about P M , which for SSO roughly corresponds to the direction towards the south celestial pole. This transition can also be considered as a resonance phenomenon, since the mean precession rate of the angular momentum vector in the inertial reference frame equals the precession rate of the orbital plane (σ̇ = 0 on average).

3.6 Classification of long-term evolution scenarios: mapping the space of initial conditions

To study how the attitude motion evolution depends on the initial values of ρ and σ we consider an object with parameters given in Table 1 rotating with the angular velocity 12°/s (ω̃ = 200, ω ≈ 0.086). This value of the initial angular velocity, on the one hand, is close to the angular velocity of real rocket bodies after payload separation (De Pontieu (1997)), and, on the other hand, corresponds to an approximately even partition of the initial conditions space into the regions R L , R M , and R U in terms of their area (Fig. 4), thus producing a representative set of different dynamical cases. Let us classify different scenarios of the attitude motion long-term evolution according to pairs of regions R i → R f , where index i denotes the region in which the evolution starts, and index f indicates the region in which the system is found by the end of the exponential decay stage. Both indices i, f ∈ {L, M, U}. This notation implies nine possible scenarios. However, judging by the phase portrait of the region R M (Fig. 5), what starts in R M stays in R M , and thus only one out of three R M → R f scenarios actually exists - R M → R M . Also there are no transitions leading into R L , therefore R U → R L is impossible as well as R M → R L . Lastly, the scenario R L → R L , although feasible according to Figure 5, turns out to have a negligibly small phase area of the corresponding initial conditions. Thus, to all practical purposes there remain five different scenarios of long-term evolution: R M → R M , R U → R M , R U → R U , R L → R M , and R L → R U . To give an idea of how the phase trajectories corresponding to different scenarios are mixed we present in Fig. 7 the partition of the initial conditions (ρ and σ) for the averaged system (13). It does not predict exactly the type of evolution in the original non-averaged system. Nevertheless, it correctly characterizes the sensitivity to variation of initial conditions. According to the analysis in Section 3.5, if evolution starts in R U , the system in most cases remains in R U during the whole exponential decay stage. However, if initial conditions are close enough to the separatrix, the transit R U → R M can take place, as revealed by the narrow band between R U and R M corresponding to this scenario (Fig. 7). In the region R L the domains corresponding to scenarios R L → R M and R L → R U take the form of tightly interleaved stripes. This is the reason behind the previously discussed quasi-probabilistic nature of these transitions. A small variation of the initial ρ value can lead to a change of evolution scenario, therefore uncertainty in initial conditions, which always exists in practice, does not allow uniquely determining the type of subsequent evolution.
4 Numerical study of fast rotation evolution of large space debris objects in SSO

4.1 Numerical simulation setup

The regimes of motion described earlier in Section 3 are of a temporary character. They are destroyed when the angular rate decreases and becomes comparable to ω_o. From this point on, the rotation evolution cannot be described by the equations (24), derived under the assumption of the object's fast rotation. Hence, to study the transformations of the motion regimes and discover the final motion modes, we carried out numerical experiments. Furthermore, the numerical simulation was necessary to corroborate the conclusions drawn in Section 3 for objects with realistic parameter values (listed in Table 1), because in that case ε ≈ 0.5. Yet, even for such values of the parameter ε the averaged equations (24) proved to be accurate enough to describe both the qualitative and quantitative properties of the object's motion in the stage of exponential decay. In all simulations the following motion characteristics were tracked: the absolute value of the angular velocity; the angle δ between the axis with the least moment of inertia and the local vertical; and the angles ρ and σ describing the direction of the angular momentum L. All simulations start from an orbital position corresponding to the crossing of the ascending node, with the angular momentum directed along the axis with the greatest moment of inertia, i.e. "flat" spin. The initial value of the angular velocity equals 200 ω_o and is the same as in Section 3.6.

4.2 Simulation results: validation of the averaged equations (13) and (25)

Figure 8 shows the comparison of the numerical simulation results with the solutions of the double averaged system (13) and the thrice averaged system (25). For the latter we used the mean value of the angle ρ in the Cassini cycle for any given h. The thrice averaged system is not presented on the σ(t) plot, because this angle defines the position on the cycle, and this information vanishes when the motion is averaged along Cassini cycles. For both averaged systems the set of parameters modified in accordance with (26) was used. It can be seen that the solution of (13) (drawn by the red line) closely follows the numerical results (black points) at the beginning, but starts to deviate slightly from them as time grows. This happens due to the rise of fluctuations caused by the gravity gradient torque at smaller angular velocities. The agreement is lost entirely at t ≈ 500 d, with the end of the exponential decay and the beginning of the slow chaos stage. The angular velocity at this moment equals approximately 4 ω_o. Figure 8 also demonstrates how accurately the solution of the thrice averaged system (blue line) describes the secular evolution of the angle ρ.

4.3 Simulation results: exponential deceleration and slow chaotic stabilization

Numerical experiments show that the direction of the initial angular velocity has no significant effect on the subsequent behavior of its absolute value. A typical dependence of the angular velocity on time is presented in Figure 9. It takes about 500 ÷ 600 days for the angular velocity to decrease to values comparable to the mean motion. The right side of Figure 9 shows a scaled graph demonstrating the stage of slow chaotic motion preceding the gravitational stabilization of the object (ω ∼ 1). It is natural to define the moment t_G when the graph δ(t) crosses the line δ = 90° for the last time as the moment of gravitational capture. It is clearly seen in Figure 10, and t_G = 600 ÷ 640 days for different simulations.
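The capture moment t_G defined above is straightforward to extract from a simulated time series as the last crossing of the δ = 90° line. The sketch below demonstrates this on synthetic placeholder data; in an actual run, t and δ would be taken from the numerical attitude propagation.

```python
import numpy as np

# A minimal sketch of extracting the moment of gravitational capture t_G from a
# simulated time series: t_G is taken as the last time delta(t) crosses 90 degrees.
# The series below is synthetic placeholder data, not output of the propagation.
t = np.linspace(0.0, 800.0, 8001)                                   # days
delta = np.where(t < 600.0,                                         # tumbling phase
                 90.0 + 85.0 * np.cos(t / 3.0),
                 180.0 - 90.0 * np.exp(-(t - 600.0) / 25.0))        # settling to 180 deg

above = delta > 90.0
crossings = np.nonzero(above[:-1] != above[1:])[0]   # sample indices bracketing a crossing
t_G = t[crossings[-1] + 1]
settles_to = 180.0 if delta[-1] > 90.0 else 0.0
print(f"gravitational capture at t_G ~ {t_G:.0f} d, delta settles near {settles_to:.0f} deg")
```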
At t > t_G the angle δ converges to either 0° or 180°, as shown in Figure 10. These outcomes are equiprobable, the only difference between them being whether the rocket body orbits the Earth with its thrusters down or up. A gravitationally stabilized body rotates synchronously with the local vertical, therefore for t > t_G the angular velocity tends to ω_o (ω → 1), as seen in Figure 9. The convergence of the evolution to gravitational stabilization is a consequence of the strong elongation of the rocket body's inertia ellipsoid, which leads to a high gravity gradient torque. For debris objects with a less elongated inertia ellipsoid, e.g. less typical rocket bodies similar to Ariane 5 or defunct satellites, other final motion regimes exist (Efimov et al (2017b)).

4.4 Simulation results: evolution of angular momentum orientation

To demonstrate all five scenarios of long-term evolution described in Section 3.6, numerical simulations for the five sets of initial conditions N_1-N_5 shown in Figure 7 were carried out. In all subsequent plots we use the same colors as in Figures 3-6 to denote the different regions corresponding to the system's state during the stage of exponential decay: orange for R_M, blue for R_U, green for R_L. The subsequent stages of slow chaos and gravitational stabilization are colored gray.

Figure 11 shows the evolution of the angles ρ and σ in case N_1, as an example of the R_M → R_M scenario. The angular momentum vector here indeed oscillates about the direction to the south celestial pole, as ρ and σ oscillate about the values 90° and 270° respectively. The amplitudes of these oscillations decrease over time as the system converges to the Cassini state P_M (Fig. 3), in full agreement with Figure 5.

Fig. 12: Evolution of angles ρ and σ: an example of the R_U → R_M scenario, based on set N_2 of initial conditions

In Figure 12, which shows the simulation results for case N_2, the transition from region R_U to R_M at t ≈ 140 d is visible. As the angular momentum vector becomes captured in the middle region, the circulation of σ over the whole interval [0°, 360°) is replaced by oscillation about σ = 270°.

Fig. 13: Evolution of angles ρ and σ: an example of the R_U → R_U scenario, based on set N_3 of initial conditions

Figure 13 shows the evolution for case N_3, an example of the R_U → R_U scenario. The convex shape of the ρ plot in Figure 13 corresponds to the concavity of the trajectories in R_U in Figure 5 at ω ∼ 30. The angle ρ increases while the system comes closer to the separatrix and starts to decrease when it moves back towards P_U. Thus, the axis of rotation in this scenario initially leans towards the orbital plane, but deflects back to the orbital normal at t ∼ 300 d. As explained in Section 3.5, this non-monotonic behavior is caused by the influence of the orbital motion on the eddy currents torque.

The evolution for case N_4 is shown in Figure 14. Here the capture into the middle region of a trajectory that starts in R_L is seen, which is very similar to case N_2. Alternatively, Figure 15 shows the evolution for case N_5, which starts very close to case N_4 (Fig. 7), but instead of being captured into R_M, the system jumps past it into R_U. After that, the angular momentum vector is carried away from the orbital plane towards P_U by the orbital component of the eddy currents torque, similar to the second half of the evolution in case N_3. The transition R_L → R_U is also characterized by a change of the direction of circulation of the angle σ (Fig. 15). Figures 11-15 trace the evolution through the stage of exponential decay and beyond, and thus complement the analytical study carried out in Section 3.
One can see that in all cases, after the end of the slow chaos stage, ρ tends to 0°, as the gravitationally stabilized object rotates about the orbital normal.

Conclusion

Using analytical techniques and numerical simulation, we have conducted a comprehensive study of the rotational motion of large objects in SSO. It is remarkable that, despite their seeming insignificance, both the precession of the orbit and the influence of the orbital motion on the induced eddy currents proved to have a major impact on attitude dynamics. The natural next step is to look for the predicted effects in the motion of real objects. In particular, it would be desirable to check whether the angular momentum vector (for objects similar to those we have modeled) in some cases indeed oscillates about the direction to the south celestial pole. The other potentially observable phenomenon is the lack of fast-rotating objects with retrograde spins (represented by the region R_L in our study). Most of them should fairly quickly switch to prograde spins, or become captured into angular momentum oscillations with the axis of rotation lying near the orbital plane. One of the possible prospects of our work is the study of the rotational evolution of large space debris objects in the satellite class. In comparison with rocket bodies, the inertia ellipsoid of a typical satellite-like object is closer to a sphere. Preliminary simulations show that among the final regimes for this class of objects there is not only the gravitational stabilization regime, but also rotation about the orbital plane normal with a mean angular velocity equal to 9ω_o/5 (ω = 1.8), which is governed by eddy currents. In addition, satellites might have a significant magnetic moment, which impacts the final stages of evolution and leads to an even greater variety of final regimes (Efimov et al (2017b)).
2017-12-22T17:55:07.000Z
2017-12-22T00:00:00.000
{ "year": 2017, "sha1": "02228af7ecff0eedc0e6a7dc02c74b6e9554e072", "oa_license": null, "oa_url": "http://arxiv.org/pdf/1712.08596", "oa_status": "GREEN", "pdf_src": "Arxiv", "pdf_hash": "02228af7ecff0eedc0e6a7dc02c74b6e9554e072", "s2fieldsofstudy": [ "Physics", "Engineering" ], "extfieldsofstudy": [ "Physics" ] }
109993331
pes2o/s2orc
v3-fos-license
Design of Subject and Developing Environment of Preschool Education Relevance of the problem being researched in this article is due to the need of introduction new pedagogical technologies into the system of preschool education. The article is aimed at summarizing the first experience of design subject development environment of pre-school education laboratory established on the basis of Yelabuga Institute of Kazan Federal University. Special emphasis in the Laboratory is given to developmental subjects designed, manufactured and tested by the members of the Laboratory to determine their effectiveness in the game oriented at cognitive-developmental activities of children aged from 3 to 6-7 in kindergarten and beyond, allowing to form a fully versatile developed person. The article can be useful for educators, instructors of preschool educational institutions and rehabilitation centers in the work with children of preschool age. Introduction 1. Nowadays Russia is experiencing new reforms in the education system at all its levels.An important and integral factor in this context is the increase of quality and efficiency especially in pre-school education, which is the first stage of education on the whole. Theoretical basis of subject development environment of preschool education research problem in Russia is primarily determined by regulatory and local documents adopted and implemented both at regional and federal levels. One of the main documents regulating activities of educational organizations in Russia,the Federal Law of the Russian Federation «On education in the Russian Federation» (2013), was implemented on the 1-st of September 2013. As a result of its analysis, we have identified that educational institutions, including pre-school, should carry out educational activities with regard to the requirements of federal state educational standard, as well as exemplary educational programs of preschool education; should identify and retain a variety of educational publications, the use of pedagogically sound ways, means, methods of training and education; right on creativity, development and application of original programs and methods of training and education used in implementation of educational programs for preschool education and in accordance with article 13 when implementing educational programs can be used in a variety of educational technologies. Today science pedagogy has an incredible amount of educational technologies, which are much wider than method itself.M. Kohler, and P. Mishra (2009) define them as complex interaction between three areas of knowledge: content, pedagogy and technology.The effectiveness of learning, in their opinion, depends on knowledge of technology.While the very basic concept of «technology» is interpreted as «newer», which allows understanding of more complex things by using simple methods (Kalimullin , 2014;Kalimullin & Gabdilkhakov, 2014;Telegina et al., 2015).Some of technologies are summarized, grow out of theory, while the others are practical result of research, and as for still others they have originated at the intersection of theory and practice.In any case in pre-school educational institution applied learning technologies should be aimed at reducing the «energy» on the part of an educator, the development of motivation and commitment to self-development, self-knowledge with the psycho physiological conditions of children' s development.Brian D. 
Cox (2009) states: «Teaching methods, typically based on fixed assumptions about mind of a child to learn». Educational psychology (Stolyarenko, 2004) defines eight periods of personality's formation depending on views of leading activity.Pre-school age, considered by us, appropriates for early childhood (3-6-7 years old).According to E. I. Ilyin (2002) 3-5 years-old children get pleasure from the game, while in half of the cases 5-years-old children prefer games of those who are interested to play, and at the age of 5-6 children not only gain pleasure from the process, but from results of the games. In this regard, as noted by V.P. Valkova, children at the age of 6 -7 approach more differentially to the choice of partners in their games, calling several reasons: their ability to play in a group, ability to play well, their creative abilities in the game, assisting in the process of the game (Rybalko, 1990). A game is the basis of teaching young children.During the game a child using a toy can get a lot of valuable learning opportunities for learning.Angie Rupan, a coordinator of «The child development Center in South San Francisco», ( Geiser, 2013) says: «While playing, children begin to understand and process the world».Having worked more than 20 years as an educator of early childhood, she confirms the following: «Children's game opens their creative potential and imagination, develops reading, thinking and problem solving skills, as well as motor skills.It forms the basis for learning». From this we can conclude that leading activity for children of preschool age is a game aimed at cognition of surrounding world, formation of attitudes and development of relationships between peers and adults (teachers, educators, parents, family members, and so on).When designing educational environment and selecting technologies of work with children of preschool age, we will focus on teaching AIDS which would contribute to the development of children and their social adaptation in modern conditions in the process of playing a game. Methodological Framework 2. In order to create conditions for implementation of innovative educational projects and programs of preschool education in accordance with the Federal Law of the Russian Federation «On education in the Russian Federation» and in accordance with the Charter of the Federal state Autonomous educational institution of higher professional education «Kazan (Volga region) Federal University» design and subject laboratory of environment preschool education development was established in Yelabuga Institute of Kazan Federal University. It is said in the most modern encyclopedia the following (Rapacewich, 2005): «Design and subject laboratory of environment preschool education development was established by the charter of the federal state autonomous educational institution of higher professional education «Kazan (Volga region) Federal University» in Yelabuga institute of Kazan federal university». 
The main tasks of design and subject laboratory of environment preschool education development of Yelabuga institute are the following: * conducting applied (including interdisciplinary) researches in the field of education; * engaging teachers, students and graduate students of EI KFU in scientific research of the laboratory, use of research laboratory results in the educational process of EI KFU; * training new programs of academic disciplines and teaching materials in the areas of training within the Faculty of Psychology and the Faculty of Engineering and Technology.Having defined the methodology and goal setting of the Laboratory, we have compiled a work plan consisted of products subject domain-developing environment of preschool education, development of their design and manufacture for the first phase.At subsequent stages of the Laboratory development according to the plan we have to determine effectiveness of our development implementation in pre-school educational institution.In case of positive results of pedagogical experiment and recommendations of teachers, educators, parents, students and specialists it will be possible to direct our educational items for children of preschool age in mass production. As any game is leading activity among preschoolers, there is a need in the objects defining educational and developmental and role play activities of children. According to our observations, unusual items that are not often found at home attract children's attention. In her own blog Kathryn Warner from Texas (2014) offers ideas on the organization of the educational environment of the child, she gives much attention to educational subjects, even to the way these toys and books are dispersed. Today market offers a wide range of textile and educational products for children of preschool age, but not all parents and even educational institutions can buy them.Yelabuga Institute of Kazan Federal University with a solid base of training teachers has decided to test its capabilities in the creation and implementation of educational products.It is a new area for work and self-realization and some kind of development and experience for students.To determine up products we together with the students of engineering and technology faculty of EI KFU have studied objects of preschoolers' subjective activity.Basically the Internet offers development works of Montessori's school ( 2013), which has vast experience in various thematic research works of development and adaptation of a child in different social contexts.Alternative sites of finished products for child's education offer a wide range of products and different methodological assistance to them (2003), the online shopping with entertaining and stimulating production (2011), which is aimed to develop interest in a particular area, such as music, photography, math, arts and crafts or language. Having studied market and consumer of kindergarten students, we have set the subject of products.In the process of determining structural and functional components of educational products, we have taken into account the psychological and pedagogical requirements for games and toys in modern conditions (Sterkina, 1995): multifunctionality, possibility of using toys in joint activities, didactic properties of toys and toys ' accessory for Handicrafts.And educational toys should have instructions or guidelines, containing age targeting, methods or applications. 
According to pedagogical significance of toys, they can be classified as follows (NARC, 2005): * Toys for practice.In this category there are toys that can be arranged in different ways or require repetition of words or sayings; * compound toys (from several parts).They include construction games, puppets and fretwork; * Regulation toys.Such as board games, dominoes, chess, etc.According to educators of municipal budget preschool educational institution «Kindergarten 3 «Teremok» of Yelabuga municipal district», kids prefer to play with such textile toys as home-transformers, lace, dolls, including theatrical costumes for role-playing games and others.Taking into consideration all requirements to modern educational toys, in our Laboratory we have performed a number of product models: finger toys, tactile gloves, lace, labyrinth, sorters, etc. Students of engineering and technology faculty, studying in areas of training 051000 Vocational training (by industry) program: Decorative and applied art and design; 050100 Pedagogical education, profile training: Technology; 030600 (050502) Technology and entrepreneurship specialization: Culture of house and decorative-applied arts.First we have elaborated and approved sketches, only after this work we have developed technological design documents specifying dimensions in natural size, defined compositional decision, justified the choice of material, manufacturing technology and design, and at last we have produced economic and environmental assessment of products.The work began with the creation of a product in a single instance, while doing it we consulted with educators and made relevant amendments. 3. With the aim of obtaining an objective assessment of our product we exhibited it at the International training seminar named «Speech development of preschool and younger school age children: Russian, national and foreign languages», held in Izhevsk, on the 27 -30-th October, 2014.Due to subjective evaluation of its participants, the most popular among all of them became a developing textile book, meeting all psychological and pedagogical requirements. First, this educational book is polyfunctional.It can be used for tactile abilities and qualities development, as well as for motor skills formation, including small.Different tasks for identification objects, their mapping, functionality, which contribute to the development of creativity, imagination, motivation and other important qualities of effective preschool children performance, are presented in the structural content of the book. Secondly, our authoring can be used in the joint activities of an educator (a parent) and a student.For example, such games as «Bunny-carrot» and «What does grow on the tree?» can involve a group of children (including an adult participant as a playing partner) and to initiate joint actions (collective buildings, cooperative games and others).Almost all pages of this educational book can be used by an educator as a visual aid in the classroom, because all pages can be easily removed with the basics and have loops which are used to hold this book or to attach it to the hook. Thirdly, this textile book implements its didactic function fully, since it includes methods of teaching a child the process of lacing, skills with a variety of clasps and fittings, observing color, shape (geometric and spherical), development of speech and rhetoric. 
Fourthly, this book is entirely a product of the author's own execution, which can be attributed to decorative and applied art, in which artistic composition and color are sustained; moreover, the book consists of different handmade items, forming the aesthetic taste and culture of a preschool-aged child. This developing book consists of 7 sheets; it is completely made of textiles and is recommended for children aged 3 and older. Practically every page of this book (14 in all) carries a different subject composition in color, made of fabrics of different textures and equipped with stickers, buttons, fasteners and drawstrings. In the framework of the joint regional workshop for educators of preschool educational institutions named «Interaction. Cooperation. Support», which took place at the municipal budget preschool educational institution «Kindergarten 3 «Teremok» of Yelabuga municipal district» on the 27th of February 2015, we gave its participants the chance to take part in the creative process. The laboratory staff conducted a master class for the participants of the seminar, where they tried to make the developmental finger toys «Teddy Bear» and «An Owl». They tried to create new images of these toys having found a basic «keyhole» shape. They liked the idea very much. Moreover, in turn they suggested different ways of using the toys, which would contribute to the development of a preschool child's personality at any age. Discussions 4. During our studies of the subject development environment products created and implemented in the framework of the Laboratory, we have determined their practical significance: the author's textile book for children of 3 to 6-7 years old can serve as a basis for improving the personal development of a preschool child. Research on the subject development environment of pre-school education is not yet completed at this stage. Written instructions and guidelines on the use of textile educational books in the educational process of preschool education are currently being elaborated. Conclusion 5. Students of Yelabuga Institute of Kazan Federal University, including members of the design and subject laboratory of environment preschool education development, had the opportunity during the execution of educational products for children of 3-6-7 years not only to apply all the types and techniques of unit-to-unit processing and of manufacturing and processing textiles, wood and ornamental materials they had studied, but also to interpret them. In the process of manufacturing the textile educational book, students used machine and manual seams and demonstrated a high level of artistic and design skills. Thus, a product of the design and subject laboratory of environment preschool education development of Yelabuga Institute of Kazan Federal University has found application: the members of the Laboratory offer a very useful product, while students acquire skills and sharpen their professionalism.
2017-09-09T19:32:38.293Z
2015-03-27T00:00:00.000
{ "year": 2015, "sha1": "0abb727c256e10a9bafc43b4548ccade61dd6e5e", "oa_license": "CCBY", "oa_url": "https://www.mcser.org/journal/index.php/mjss/article/download/6050/5816", "oa_status": "HYBRID", "pdf_src": "Anansi", "pdf_hash": "0abb727c256e10a9bafc43b4548ccade61dd6e5e", "s2fieldsofstudy": [ "Education" ], "extfieldsofstudy": [ "Psychology" ] }
11525127
pes2o/s2orc
v3-fos-license
Sea lampreys elicit strong transcriptomic responses in the lake trout liver during parasitism Background The sea lamprey (Petromyzon marinus) is a jawless vertebrate that parasitizes fish as an adult and, with overfishing, was responsible for the decline in lake trout (Salvelinus namaycush) populations in the Great Lakes. While laboratory studies have looked at the rates of wounding on various fish hosts, there have been few investigations on the physiological effects of lamprey wounding on the host. In the current study, two morphotypes of lake trout, leans and siscowets, were parasitized in the laboratory by sea lampreys and the liver transcriptomes of parasitized and nonparasitized fish were analyzed by RNA-seq (DESeq2 and edgeR) to determine which genes and gene pathways (Ingenuity Pathway Analysis) were altered by lamprey parasitism. Results Overall, genes encoding molecules involved in catalytic (e.g., enzymatic) and binding activities (factors and regulators) predominated the regulated gene lists. In siscowets, the top upregulated gene was growth arrest and DNA-damage-inducible protein and for leans it was interleukin-18-binding protein. In leans, the most significantly downregulated gene was UDP-glucuronosyltransferase 2A2 - DESeq2 or phosphotriesterase related - edgeR. For siscowets, the top downregulated gene was C-C motif chemokine 19 - DESeq2 or GTP-binding protein Rhes - edgeR. Gene pathways associated with inflammatory-related responses or factors (cytokines, chemokines, oxidative stress, apoptosis) were regulated following parasitism in both morphotypes. However, pathways related to energy metabolism (glycolysis, gluconeogenesis, lipolysis, lipogenesis) were also regulated. These pathways or the intensity or direction (up/downregulation) of regulation were different between leans and siscowets. Finally, one of the most significantly downregulated pathways in both leans and siscowets was the kynurenine (tryptophan degradation) pathway. Conclusions The results indicate a strong transcriptional response in the lake trout to lamprey parasitism that entails genes involved in the regulation of inflammation and cellular damage. Responses to energy utilization as well as hydromineral balance also occurred indicating an adjustment in the host to energy demands and osmotic imbalances during parasitism. Given the role of the kynurenine pathway in promoting immunotolerance in mammals, the downregulation observed in this pathway during parasitism may signify an attempt by the host to inhibit any feedback suppression of the immune response to the lamprey. Electronic supplementary material The online version of this article (doi:10.1186/s12864-016-2959-9) contains supplementary material, which is available to authorized users. Background The sea lamprey (Petromyzon marinus) is a jawless fish that is native to the Atlantic Ocean. While it may also have been native to Lake Ontario [1], sea lampreys only became abundant in the Great Lakes following improvements to the Welland Canal that connects Lake Ontario to Lake Erie and bypasses Niagara Falls. Together with overfishing, the lamprey was responsible for the decline in lake trout populations in the Laurentian Great Lakes [2,3]. As an adult, the sea lamprey is a parasite that attaches to fish with a rasping mouthpart and feeds off the tissues and body fluids of its host [4]. 
Significant efforts have been made to control lamprey in the Great Lakes and, while populations have been reduced, lamprey parasitism still remains an issue that could be exacerbated by global climatic changes affecting the Great Lakes [5]. The sea lamprey can parasitize a number of large bodied fish species, however, in the Great Lakes the effects on lake trout have been the most dramatic and have had the most significant consequences. There have been a number of laboratory studies looking at the rates and types of wounding by lampreys on various fish hosts ( [6] for review). In contrast, there have been surprisingly few studies that have investigated the physiology of the host during or following lamprey parasitism. For obvious reasons, mortality has been a major focus of research on lamprey parasitism. However, since many fish may survive lamprey wounding [7], it would be important to understand what occurs in the host during parasitism and how that could affect the physiology of the surviving host. Several investigators have looked at blood parameters after wounding and have shown increases in circulating lymphocytes [8,9], and decreases [10] or increases [9] in blood hematocrit. Lampreys are parasites and foreign to their host, thus, it would be logical that the immune system of the host would react to the lamprey. However, to our knowledge there have been no investigations of the immune reaction of the host to lamprey parasitism. In other hematophagous parasites such as ticks, compounds are produced by the parasite that are released into the host to avoid host recognition or to block parts of the innate immune response (e.g., complement). This is thought to be strategic so that the host will not mount an immune response to the parasite (reviewed: [11]). We could hypothesize that similar activities might occur in a fish being parasitized by a lamprey. Interestingly, there have been a number of studies that have isolated bioactive compounds from the buccal glands of other parasitic lampreys including Lampetra japonica. While a primary goal of those studies has been the isolation of compounds with potential pharmaceutical applications [12], they have uncovered several interesting compounds that may be important to the natural biological relationship of the lamprey and its host during parasitism. These include compounds that are active as inhibitors of lymphocyte proliferation, neutrophil activity and platelet aggregation [13][14][15], ion channel blockers [16], and compounds with fibrinolytic activity [13]. While a number of morphotypes of lake trout were once present in the Laurentian Great Lakes (e.g., [17]), only Lake Superior currently contains naturally sustaining populations of different lake trout types including the lean and siscowet lake trout. In the wild, siscowet lake trout morphotypes have larger fins and eyes, a shorter snout, larger caudal peduncle, and higher lipid content in the muscle than lean lake trout morphotypes [18,19]. Lean lake trout tend to be distributed in waters shallower than 100 m while siscowet lake trout are found mostly at depths greater than 100 m [3]. In addition, lean and siscowet lake trout have different life histories with leans being shorter-lived, faster growing, maturing at a younger age, and experiencing higher mortality regimes [20,21]. Studies have shown that some differences observed between wild siscowet and lean lake trout are likely to have a genetic or epigenetic basis [22,23]. 
These include differences in growth and lipid levels in the muscle. In fact, it appears that leans and siscowets represent metabolotypes that can be distinguished by differences in energy reserves in the liver and muscle [23]. Given these differences in morphometry, physiology and life history, we were interested to see whether the response to lamprey parasitism would also differ between morphotypes. In the current study, lean and siscowet lake trout that have been reared in the hatchery from eggs to adults under identical environmental conditions [22] were used for controlled lamprey parasitism experiments in the lab. Endocrine and bioenergetic changes in relation to the lamprey parasitism on the hatchery-reared lake trout morphotypes have been presented separately [24]. Here we describe the changes in the hepatic transcriptome of lean and siscowet lake trout following lamprey parasitism. The results indicate a strong transcriptional response to lamprey parasitism that may involve reactions to an inflammatory and antigenic response brought on by lamprey wounding, and also suggest that there may be an interesting interaction of the lamprey with the immune system of the host. Responses to energy utilization as well as hydromineral balance were also observed, indicating an adjustment in the host to energy demands and osmotic imbalances that occur during parasitism. RNA-seq analysis Across all 24 samples that were analyzed, there were on average 20,127,690 trimmed sequences/sample (Table 1, complete individual sequence data provided in Additional file 1). Of these, an average of 90 % mapped to the lake trout reference transcriptome produced by Trinity (all contigs provided in Additional file 2). When analyzed by DESeq2 and edgeR, there were 1341 and 668 genes regulated (up and down) between parasitized and nonparasitized leans at an adjusted p ≤ 0.05, respectively ( Table 2, Additional files 3 & 4). Of these, a total of 452 genes were shared. In contrast, there were 2985 and 2343 genes that were regulated (up and down) between parasitized and nonparasitized siscowets at an adjusted p ≤ 0.05 when analyzed by DESeq2 and edgeR, respectively ( Table 2, Additional files 5 & 6). Of these 1964 were shared. GO annotation of the genes that were regulated by both the DESeq2 and edgeR analyses (intersection) indicates that the majority are involved in metabolic and cellular processes (Table 3). Based on the molecular function annotation, a majority of the genes encode molecules involved in catalytic (e.g., enzymatic) and binding activities (factors and regulators) ( Table 4). In general, the percentage of genes involved in a given biological process did not differ when comparing genes up or down regulated during wounding (Table 3). However, in looking at molecular functions the proportion of several gene categories appeared to increase (e.g., receptor activity; catalytic activity) or decrease (e.g., translation regulator activity; enzyme regulator activity; transporter activity) when comparing up to down regulated genes, and this was consistent across morphotypes (Table 4). Tables 5, 6, 7 and 8 show the top 25 up and downregulated genes for parasitized lean (Tables 5 & 6) and siscowet (Tables 7 & 8) lake trout. Of the top 25 upregulated genes based on adjusted p values, 16 were observed by both DESeq2 and edgeR between parasitized and nonparasitized siscowets (Table 7) but only five between parasitized and nonparasitized leans (Table 5). 
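Counts of genes shared between the two analyses, such as those reported in Table 2 and in the top-25 lists, amount to intersecting the DESeq2 and edgeR result tables at an adjusted p ≤ 0.05. The following sketch illustrates such an intersection on a toy example; the gene names echo genes discussed in this study, but the tables, column names and p-values are placeholders, not the study's data.

```python
import pandas as pd

# A minimal sketch of intersecting DESeq2 and edgeR differential expression results
# at adjusted p <= 0.05. The tiny tables below are made-up placeholders; in practice
# they would be the exported result tables, with "padj" (DESeq2) and "FDR" (edgeR)
# as the adjusted p-value columns.
deseq2 = pd.DataFrame({"gene": ["gadd45", "il18bp", "ugt2a2", "ccl19"],
                       "padj": [0.001, 0.01, 0.03, 0.20]})
edger = pd.DataFrame({"gene": ["gadd45", "il18bp", "pter", "ccl19"],
                      "FDR": [0.002, 0.04, 0.01, 0.30]})

deseq2_sig = set(deseq2.loc[deseq2["padj"] <= 0.05, "gene"])
edger_sig = set(edger.loc[edger["FDR"] <= 0.05, "gene"])
shared = deseq2_sig & edger_sig

print(f"DESeq2: {len(deseq2_sig)}, edgeR: {len(edger_sig)}, shared: {sorted(shared)}")
```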
Within a RNA-seq analysis, one upregulated gene was shared between siscowets and leans for DESeq2 (Tables 5 & 7) though several other genes that appeared to have similar functions based on annotation (e.g., ubiquitin carboxyl-terminal hydrolase and ATP-binding cassette) were shared. There were three genes shared for edgeR (Tables 5 & 7). For siscowets, the top upregulated gene was growth arrest and DNAdamage-inducible protein (GADD45) in both RNA-seq and edgeR analyses, and for leans it was interleukin-18binding protein (IL18BP) for both DESeq2 and edgeR (Tables 5 & 7). Of the top 25 downregulated genes based on adjusted p values, 16 and 15 were observed by both DESeq2 and edgeR between parasitized and nonparasitized leans and siscowets, respectively (Tables 6 & 8). Within a RNA-seq analysis, four downregulated genes were shared between siscowets and leans for DESeq2 and 10 for edgeR (Table 6 & 8). In leans, the most significantly downregulated gene was UDP-glucuronosytransferase 2A2 (UGT2A2) in the DESeq2 analysis or phosphotriesterase related (PTER) in the edgeR analysis (Table 6). UDP-glucuronosyltransferase 2A2 was observed in the edgeR analysis but phosphotriesterase related gene was not in the top 25 downregulated genes for leans though it did appear in the complete downregulated gene list (Additional file 4). For siscowets, the top downregulated gene was C-C motif chemokine 19 (CCL19) when analyzed by DESeq2, or GTP-binding protein Rhes (Rasd2) when analyzed by edgeR ( Table 8). The C-C motif chemokine 19 was the second most significantly downregulated gene in edgeR, while the GTP-binding protein Rhes was the third most significantly downregulated gene in the DESeq2 analysis (Table 8). qPCR analysis The results of qPCR analyses on at least five genes that were up or down regulated in either DESeq2 and/or edgeR analyses in leans and siscowets were highly consistent with the RNA-seq analyses (Table 9). In all cases, the direction of fold change (up or down) was exactly the same for all genes when comparing the qPCR analyses and the RNA-seq analyses. In addition, with a few exceptions, all of the qPCR comparisons (parasitized versus nonparasitized/morphotype) were significant at p < 0.05 or lower. Most of the ones that were not significant at p < 0.05, had nearly significant p values (e.g., 0.054, 0.070). In many cases trends in the overall magnitude of fold differences between the two analyses was also observed ( Table 9; e.g., cyclic AMP-dependent transcription factor ATF-3 and dual specificity protein phosphatase 2). IPA analysis IPA analysis showed that there was a total of 11 pathways for parasitized leans and 26 for siscowets in which genes from the edgeR analysis significantly (Benjamini-Hochberg Method; p ≤ 0.05) overlapped with genes in the IPA pathways (Figs. 1 & 2). For leans, the most significant (p ≤ 0.01) pathways were protein ubiquitination, aldosterone signaling in epithelial cells, tryptophan degradation III, glucocorticoid receptor signaling, glycolysis I, and gluconeogenesis I (Fig. 1, Additional file 7). For siscowets, the most significant (p ≤ 0.01) pathways were tryptophan degradation III, NRF2-mediated oxidative stress response, xenobiotic metabolism signaling, aryl hydrocarbon receptor signaling, and LXR/RXR activation (Fig. 2, Additional file 8). Of all the significant (p ≤ 0.05) pathways, eight were shared between leans and siscowets (Fig. 3). 
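The Benjamini-Hochberg correction used to judge the significance of pathway overlaps can be summarized in a few lines. The sketch below is a generic implementation applied to made-up overlap p-values; it is not the IPA software's internal procedure and the numbers are placeholders.

```python
import numpy as np

def benjamini_hochberg(pvals):
    """Benjamini-Hochberg adjusted p-values (FDR) for a 1-D array of raw p-values."""
    p = np.asarray(pvals, dtype=float)
    n = p.size
    order = np.argsort(p)
    ranked = p[order] * n / np.arange(1, n + 1)          # p_(i) * n / i
    ranked = np.minimum.accumulate(ranked[::-1])[::-1]   # enforce monotonicity
    adjusted = np.empty(n)
    adjusted[order] = np.clip(ranked, 0.0, 1.0)
    return adjusted

# Hypothetical raw overlap p-values for a handful of pathways (placeholders, not the
# study's numbers); pathways with adjusted p <= 0.05 would be reported as significant.
raw = [0.0001, 0.003, 0.008, 0.02, 0.04, 0.30]
adj = benjamini_hochberg(raw)
for r, a in zip(raw, adj):
    flag = "significant" if a <= 0.05 else ""
    print(f"raw p = {r:.4f}  ->  BH-adjusted p = {a:.4f}  {flag}")
```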
Discussion The results of this study indicate that lamprey parasitism elicits a striking response in the hepatic transcriptome of both lean and siscowet lake trout. Some of these responses are shared between morphotypes but some are not. Since the fish were not perfused prior to tissue sampling, it is possible that some differences in gene expression could have been related to changes in the relative Table 3 The proportion of genes that were shared between the DESeq2 and edgeR analyses ( Table 4 The proportion of genes that were shared between the DESeq2 and edgeR analyses ( numbers of blood cells in parasitized versus nonparasitized fish as a result of wounding, but this would not affect changes in hepatic cell transcription. Overall, many of the genes that were in the regulated list were enzymes involved with catalytic processes. This is not surprising since the liver is the site of many enzymatic processes involving carbohydrate, lipid and amino acid metabolism and some of these processes appear to be affected by the wounding as discussed in detail below. In addition, genes that are involved with pathway regulation such as binding factors as well as the response to cell death (apoptosis) were also in the list. As an adult, lampreys attach with a rasping mouthpart and feed off the tissues and body fluids of their host [4]. This dramatic wounding activity would be expected to have a significant impact on the physiology of the host yet this has not been well characterized in the literature. Increases in circulating lymphocytes particularly neutrophils [8,9], and both decreases [10] and increases [9] in blood hematocrit have been reported. The wound that is produced during lamprey parasitism should have significant effects on the immune system of the host that could induce inflammatory and antigenic responses. Curiously, the transcriptomic response observed following wounding was not typical in comparison to what might be observed during pathogen (bacterial or viral) exposure. For example, cytokines like interleukins (IL) 1 and 6 or tumor necrosis factor (TNFα) were not in the regulated genes. It could be that these responses occurred at an earlier time just following parasitism and the sampling was completed after it. In support of this, genes (e.g., IL binding protein 18 -and see below for further details) that are believed to be anti-inflammatory and produced to help regulate inflammatory reactions, were in the regulated gene list. It could also be that the reaction brought on by the parasite wounding is fundamentally different from that of a pathogen since interactions with cellular components of the immune system in the host would occur with pathogens and elicit typical cytokine responses. Those interactions may not occur during lamprey wounding. Still, the RNA-seq analysis revealed the regulation of a number of genes in the liver following parasitism that could be a response to inflammation or tissue damage. For example, the most highly upregulated gene in siscowets was growth arrest and DNA-damage-inducible protein (GADD45), a gene that was originally characterized from cells that were subjected to agents such as UV, N-acetoxy-2-acetylaminofluorene and H 2 O 2 that damage DNA [25,26]. In humans there is a family of GADD45 proteins (α,β,γ) that are stress sensors upregulated under various physiological and Table 5 Top 25 annotated genes upregulated in parasitized versus nonparasitized lean lake trout. Genes ranked by padj values. 
Boxed numbers indicate genes shared between the two analyses and underlined genes are shared between siscowets and leans within the DESeq2 and edgeR analyses. A complete listing of all genes is provided in Additional file 3 & Additional file 4. Note: There were no nonannotated genes in DESeq2 and edgeR in the top 25 environmental stressors. They are associated with cellular proteins that are implicated in cell cycle regulation and the response of cells to stress including PCNA (proliferating cell nuclear antigen), p21, cdc2/cyclinB1, and the p38 and JNK (c-Jun N-terminal kinases) stress response kinases [27][28][29]. From the wound produced by lampreys we might expect to see the stimulation of various proinflammatory cytokines in the host and a number of these including IL-6 have been shown to stimulate GADD45 proteins [30]. The reported outcome of GADD45 stimulation is complex and can be both cell protective or pro-apoptotic (cell death). To some extent this may depend upon the circumstance and/or the type of GADD45 protein being regulated [29]. How it may be functioning in the specific case of lamprey parasitism is unknown but the association of the regulation of this protein with wounding is logical given what is know about the function of these proteins. Very little has been published on GADD45 in fish though it has been proposed to be involved in demethylation and somatogenesis in zebrafish [31,32] and it has been reported to be regulated in the liver of the Antarctic fish, Trematomus bernacchii, during heat stress [33]. Other genes that would logically be upregulated during stress such as the CCAAT/enhancer binding protein [34] were present in the gene lists for both leans and siscowets but were not significantly upregulated according to their adjusted p values. In contrast, heat shock proteins, also involved in cellular stress responses, were significantly (padj < 0.05) upregulated in both leans and siscowets (DESeq2, Additional files 3 & 5) though they were not in the top 25 upregulated gene lists. Significant upregulation of GADD45 was also observed in parasitized leans (padj = 0.013; DESeq2, Additional file 3); though not in the top 25 upregulated genes. Instead, the most significantly upregulated gene in leans was the interleukin-18-binding protein (IL18BP) in both DESeq2 and edgeR analyses. This gene was also in the top 25 upregulated genes in parasitized siscowets. As indicated earlier, the wounding produced by the lamprey could produce an inflammatory reaction in the host and thus we could expect to see the expression of proinflammatory genes such as interleukins. Interleukin 18 (IL-18) has been identified in fish but the function is unclear [35]. In mammals, IL-18 is a cytokine that strongly stimulates interferon gamma. It is considered a proinflammatory cytokine but the actions are somewhat different compared to TNFα or IL-1 [36]. Interleukin 18 binding protein is an extracellular protein that has very high affinity for IL-18 and in mammals is believed to play a role in modulating the action of IL-18 given its strong activation of interferon [36]. If IL-18 has a similar function in Table 6 Top 25 annotated genes downregulated (− fold change) in parasitized versus nonparasitized lean lake trout. Genes ranked by padj values. Boxed numbers indicate genes shared between the two analyses and underlined genes are shared between siscowets and leans within the DESeq2 and edgeR analyses. A complete listing of all genes is provided in Additional file 3 & Additional file 4. 
Note: There were 0 and 1 nonannotated genes in DESeq2 and edgeR, respectively in top 25 fish, then the strong upregulation of the IL-18 binding protein may indicate that IL-18 is being upregulated in response to lamprey parasitism. It is important to note that IL18BP has homology to the interleukin 1 receptor, type 2 that is considered to be a decoy receptor for IL-1 [37] and this is upregulated in fish during LPS stimulation [38]. In addition, we did not find any upregulation of interferon. Thus, given the similarity in structure, it is unclear whether IL-1 or IL-18 binding protein is actually being regulated. In contrast to the upregulated genes, there was less consensus between the DESeq2 and edgeR analyses for the top downregulated genes in siscowets or leans even though many downregulated genes were shared between the analyses. In siscowets, two genes, C-C motif chemokine 19 (CCL19) and GTP-binding protein Rhes (Rasd2) were shared between the two analyses and were in the top three downregulated genes. In general, chemokines are leukocyte attractants that are involved during normal homeostasis and inflammatory conditions [37]. However, not much is known about chemokine 19 and, given the large number of chemokines, it is possible that the regulated sequence may actually be another structurally similar chemokine. GTP-binding protein Rhes is a GTP binding protein that is highly expressed in the mammalian brain and, in particular, the striatum [38]. However, it appears not to be expressed in the liver of mammals. Recently, Rasd2 has been shown to be an agent that activates autophagy in the brain [39]. Autophagy is a self-degradative process that can occur at different cellular levels and is involved in normal homeostasis and organelle and energy recycling, but can be ramped up during periods of cellular stress [40]. We could not find any reports on Rasd2 in fish but given the role in autophagy, upregulation of it may be relevant to degradative processes occurring during lamprey parasitism. In leans, the top downregulated gene in the DESeq2 analysis was UDP-glucuronosyltransferase 2A2 (UGT2A2) and in edgeR it was phosphotriesterase related (PTER) protein. UDP-glucuronosyltransferases are well studied enzymes that catalyze the formation of lipophilic glucuronides from substrates, including steroids, bile acids, bilirubin, hormones, dietary constituents, and thousands of xenobiotics using UDP-glucuronic acid as a cosubstrate [41]. As such they allow for solubilization and removal of lipophilic products that otherwise might be toxic to the body [41]. Glucuronidation has been frequently studied in the liver and involvement of this process is certainly consistent with the conditions occurring during lamprey parasitism where agents arising from inflammation or introduced into the host from the parasite (and see below) might be toxic. Therefore, why this gene would Table 7 Top 25 annotated genes upregulated in parasitized versus nonparasitized siscowet lake trout. Genes ranked by padj values. Boxed numbers indicate genes shared between the two analyses and underlined genes are shared between siscowets and leans within the DESeq2 and edgeR analyses. A complete listing of all genes is provided in Additional file 5 & Additional file 6. Note: There were one and three nonannotated genes in DESeq2 and edgeR, respectively in top 25 be downregulated rather than upregulated is unclear unless this was a result of some feedback activity to try and control this process. 
Phosphotriesterase related protein is more of an enigma since very little is known about the function of this protein in vertebrates. The PTER gene has been identified in mice, rats, humans and Bombyx mori [42,43]. The precise role of this gene is unclear but in mice, silencing this gene using RNA interference diminished albuminuria-induced inflammatory and pro-fibrotic cytokine production in kidney tubular cells [43]. Thus, downregulation of this gene in the fish liver may be associated with the continued expression of inflammatory agents as a result of parasitism. In this study we used and compared the results of two RNA-seq analyses; DESeq2 and edgeR. We were interested to see how consistent the results were across analyses and across morphotype. Compared to edgeR, DESeq2 found a greater number of regulated genes in parasitized leans and siscowets. When looking at genes downregulated during parasitism there was good agreement between the results of the two analyses within a morphotype with 16 and 15 of the top 25 genes shared for both leans and siscowets, respectively. That was also the case for the top upregulated genes in parasitized sicowets but not for leans where only five genes were shared between the two analyses in the top 25. Why this particular comparison did not show consistent results between analyses while others did is not clear. Interestingly the edgeR analysis for the top upregulated genes in parasitized leans had several occurrences of cyclic AMPdependent transcription factor ATF-3 (ATF3), a protein that is well characterized as being involved with cellular stress brought on by various stimuli including cytokines, genotoxic agents, apoptotic factors as well as conditions that promote amino acid and glucose deprivation [44,45]. Given the inflammatory reaction and probable load on the host energy stores following parasitism, upregulation of this gene in the liver is logical. That the RNA-seq analysis was accurately depicting the differential regulation of genes in parasitized versus nonparasitized lake trout livers was also confirmed by qPCR. All of the qPCR analyses indicated the correct direction of regulation and nearly all were significant when statistically analyzed. While the analysis of regulated genes on an individual basis is interesting, a more global approach would be to look at the regulation of potential physiological or cellular Table 8 Top 25 annotated genes downregulated (− fold change) in parasitized versus nonparasitized siscowet lake trout. Genes ranked by padj values. Boxed numbers indicate genes shared between the two analyses and underlined genes are shared between siscowets and leans within the DESeq2 and edgeR analyses. A complete listing of all genes is provided in Additional file 5 & Additional file 6. Note: There were 0 and 1 nonannotated genes in DESeq2 and edgeR, respectively in top 25 pathways involving suites of regulated genes. We used IPA analysis to try and address this. Given the caveat that the pathways derived within IPA are based primarily on the proposed functions of their genes in mammals, this analysis indicated some interesting pathways that appeared to be regulated during lamprey parasitism in the liver. In this analysis, we employed a conservative approach using the edgeR RNA-seq gene analysis that had fewer genes overall than the DESeq2 analysis, together with the Benjamini-Hochberg Method to determine the significance of gene overlap with those of the IPA pathways. 
While we could have used the gene list from the intersection of the DESeq2 and edgeR analyses, we felt that some pathway information could be lost since those gene lists were greatly reduced compared with those from DESeq2 or edgeR. As observed with the number of individually regulated genes, there were more significant pathways uncovered with parasitized siscowets than leans. However, a number of these pathways were still shared between the morphotypes. In leans, the top functional pathway was protein ubiquitination and a majority of the genes in this pathway were upregulated. In the context of the IPA analysis, the protein ubiquitination pathway refers to gene products involved in the degradation of short-lived or regulatory proteins including ones in the cell cycle, cell proliferation, apoptosis, DNA repair, transcriptional regulation, cell surface receptors, ion channel regulators, and antigen presentation. All of these processes would logically be associated with lamprey parasitism particularly proteins involved in cell proliferation, cell cycle regulation and apoptosis given the strong upregulation of genes such as GADD45. While the protein ubiquitination pathway was also stimulated in siscowets this was not as significant as in leans (p = 0.001 lean vs 0.037 siscowet). In contrast, tryptophan degradation was the most significantly regulated pathway in siscowets but was also very highly regulated in leans (p = 0.008 lean vs 0.0001 siscowet). Tryptophan is an essential amino acid that can be a substrate for serotonin synthesis. However, when metabolized, approximately 95 % of tryptophan goes into the kynurenine (KYN) pathway [46]. The rate-limiting step in the KYN pathway is the enzyme that converts tryptophan to N-formylkynurenine. It is now known that at least three enzymes can do this: tryptophan 2,3-dioxygenase (TDO), indoleamine 2,3-dioxygenase-1 (IDO1) and indoleamine 2,3-dioxygenase-2 (IDO2) [47]. Studies have demonstrated that some fish species have genes for all three of these enzymes though efficiency for the conversion of tryptophan by the fish IDO2 enzyme is very low compared with mammals while IDO1 has moderate efficiency compared to mammals [48,49]. In mammals, tryptophan 2,3-dioxygenase is found predominantly in the liver, while IDO1 and two are also found in the kidney and testes and less in the liver [47,50]. Following the formation of kynurenines, there are two possible outcomes in the KYN pathway; a nonenzymatic conversion to quinolinic acid or conversion to 2-aminomuconic acid 6-semialdehyde by 2-amino 3-carboxymuconate 6semialdehyde decarboxylase (ACSD) [51]. Interestingly, in both siscowets and leans the genes in the tryptophan degradation pathway were nearly all downregulated (Figs. 1 & 2) suggesting a strong inhibition of this pathway. In addition, in siscowets another pathway, tryptophan degradation to 2-amino 3-carboxymuconate, was also downregulated which would be the pathway catalyzed by ACSD. Consistent with these pathway observations, the genes for indoleamine 2,3 dioxygenase (IDO) as well as ACSD were consistently and significantly downregulated in the DESeq2 and edgeR analyses in both parasitized siscowets and leans (Tables 6 & 8; Additional files 3,4,5 and 6). Other pathways that were regulated according to the IPA analysis and are related to tryptophan metabolism include glutaryl-CoA degradation and NAD biosynthesis II (from tryptophan). The KYN pathway has been strongly linked to immune function in mammals in various ways. 
For example: 1) kynurenine metabolites produced in the KYN pathway can have direct effects on cells by activating the aryl hydrocarbon receptor; 2) local depletion of tryptophan in a cell can activate a local stress response, stimulating cell cycle kinases and transcription factors (like ATF-3 discussed earlier); and 3) IDO can, in addition to being an enzyme, act directly as a cellular signaling molecule [52]. The net result of KYN activation is complex and can involve many inputs. In mammals the KYN pathway is stimulated during inflammatory reactions via interferons, but the kynurenine metabolites produced may ultimately function as immunosuppressors [52]. Indeed, it is known that IDO stimulation promotes immunotolerance of grafted allogeneic tissues, whereas inhibition of IDO results in rejection [53]. So one hypothesis is that the stimulation of IDO results in dampening of the immune response and immunotolerance. Thus, predicting the overall immune response of downregulating or upregulating this pathway is difficult, particularly since it is unknown if there are similar IDO functions in fish. As far as we can tell, the relationship of the KYN pathway and immunity has not been investigated in any fish species, though transcripts encoding IDO2 were downregulated in rainbow trout fry following challenge with Flavobacterium psychrophilum [54]. It seems clear that this pathway is downregulated following lamprey parasitism, and if the KYN pathway is ultimately immunosuppressive in fish and acts to temper the inflammatory reaction, then downregulation might be a mechanism to block immunosuppression and continue to respond to the presence of the lamprey (i.e., not be immunotolerant). The KYN pathway has been extensively investigated during infections by intracellular parasitic protozoans such as Leishmania major [55]. During leishmaniasis the KYN pathway is stimulated, resulting in local depletion of tryptophan and kynurenine production. In gene knockout mice lacking IDO, or following the application of IDO inhibitors, there is actually a decrease in Leishmania infection, suggesting that pathogens such as Leishmania may act to suppress the host immune system by stimulating the KYN pathway and thereby promoting immunotolerance [55]. In another parasitic lamprey (Lampetra japonica), a number of products have been isolated from the buccal gland [12] that are probably released around the wound site and into the host circulation. As with other hematophagous parasites, some of these compounds are probably released to keep blood from coagulating so that the lamprey can continue feeding on the host's circulation. Indeed, experiments conducted some time ago on the sea lamprey demonstrated that fluid obtained directly from the buccal glands inhibited clotting of fish blood [4]. At the same time it was found that injection of small volumes of sea lamprey buccal gland secretion into the muscle of nonparasitized fish caused the formation of very large edemas, suggesting the presence of compounds that could be highly cytolytic. Some compounds in the buccal gland secretions may be released in an attempt to block the immune response of the host or be used to hide from the host. Curiously, L-3-hydroxykynurenine O-sulfate has been isolated from the buccal glands of the parasitic lamprey, Lethenteron japonicum [56]. Could this kynurenine be released by the lamprey into the circulation of the fish host and act to mimic the stimulation of the host's KYN system?
If so, this may be a mechanism that the parasite uses to promote immunosuppression so that it can continue to parasitize the host. In any case, compounds (particularly proteins) that are produced by the lamprey and released into the circulation during parasitism may add to the overall antigenic response occurring within the host and be responsible for some of the pathways being stimulated. Two carbohydrate bioenergetic pathways that were regulated were glycolysis and gluconeogenesis. These were regulated significantly in both leans and siscowets, but glycolysis was regulated to a greater extent in leans than siscowets (p = 0.01 leans vs 0.04 siscowets). Glycolysis is the process in which glucose is metabolized to pyruvate and results in the production of ATP. Gluconeogenesis is the reverse of glycolysis and hence the production of glucose. While most of the reactions in glycolysis are reversible, there are some differences, primarily at the steps in which energy is produced, and these include the conversion of pyruvate to phosphoenolpyruvate, fructose 1,6-bisphosphate to fructose 6-phosphate, and glucose 6-phosphate to glucose. While not dramatic, another difference between leans and siscowets with regard to glycolysis was that in leans it appeared that there was a greater proportion of genes upregulated (Fig. 1), while in siscowets it was almost equal or even slightly more downregulated (Fig. 2). In the wild, siscowets have higher muscle lipid than leans and this is a heritable trait [22]. In fact, leans and siscowets can be considered metabolotypes in which a number of energetic characteristics differ, including lipid (higher in muscle and liver in siscowets vs leans) and glycogen (higher in muscle and liver in leans vs siscowets) [23]. Given the consumption of host tissue and blood, lamprey parasitism must be bioenergetically draining, and how the host compensates for that most likely depends on the way energy is stored. Given the differences in lipid and carbohydrate between the two morphotypes, it may not be surprising that glycolysis is upregulated in leans to a greater extent than in siscowets. In addition, in siscowets several IPA pathways involved in lipid metabolism or the regulation of lipid metabolism were regulated, including LXR/RXR (liver X receptor/retinoid X receptor) activation, PPAR (peroxisome proliferator-activated receptor) signaling, and PXR/RXR (pregnane X receptor/retinoid X receptor) activation [57]. None of these pathways were observed to be regulated in parasitized leans. It appears that in the wild, siscowets are parasitized at a higher rate and more intensely than leans [24]. While there could be several reasons for this difference, the high lipid levels in siscowets may make them more capable energetically of sustaining lamprey parasitism events. Two other pathways that were significantly regulated in both leans and siscowets were aldosterone signaling in epithelial cells and glucocorticoid receptor signaling, and these may be related. Based on p values, these two pathways appeared to be more highly regulated in leans than siscowets, and for both morphotypes there was a greater proportion of genes that were upregulated. In the case of IPA, the aldosterone signaling in epithelial cells pathway involves genes of the phosphatidylinositol and protein kinase C intracellular signaling pathways as well as Na+/K+-ATPase pumps and channels.
The glucocorticoid receptor signaling pathway involves some similar second messenger pathway genes but also genes involved in inflammation and cell cycle control. Both of these pathways are logical given the possible inflammation associated with the parasitism, and since there would likely be ionic/osmotic imbalances during the wounding, pathways involving ion pumps and channels would also be expected. The cortisol stress response has been well documented in fish [58] and it is likely that lake trout experiencing lamprey parasitism undergo stimulation of the hypothalamic-pituitary-interrenal axis, which would result in elevated cortisol and stimulation of the glucocorticoid receptor pathway. Whether aldosterone really exists in fish is debated [59] and the mineralocorticoid in fish may be other steroids. However, cortisol also functions as a mineralocorticoid in fish and, thus, the pathways designated as specific to aldosterone in the IPA analysis could in effect be stimulated by cortisol, particularly those that regulate Na+/K+-ATPase [59]. A major pathway that was significantly regulated in siscowets (p = 0.0003) but not in leans (p = 0.1208) was the NRF2-mediated oxidative stress response. In IPA this pathway involves gene products that are regulated by nuclear factor-erythroid 2-related factor 2 (NFE2L2) in response to oxidative stress caused by an imbalance between the production of reactive oxygen species and the detoxification of reactive intermediates by enzymes including glutathione S-transferase, cytochrome P450, NAD(P)H:quinone oxidoreductase, heme oxygenase and superoxide dismutase. Many things can cause oxidative stress, but certainly inflammation is one of them, and so it is not surprising to see this pathway activated during parasitism. NFE2L2 regulates many enzymes known to be involved in the detoxification of drugs and chemicals that are foreign to the body [60], so we might expect to see associated pathways such as xenobiotic metabolism signaling also being significantly regulated. NFE2L2 can also influence intermediary metabolism and has been shown to regulate AhR (aryl hydrocarbon), PPAR, and RXR receptors that contain ARE (antioxidant response element) sites [60]. So again, it is not surprising to see those pathways (aryl hydrocarbon receptor signaling, PXR/RXR activation, PPAR signaling) being regulated and, if this is related to lipid metabolism, it may explain why the NRF2-mediated oxidative stress response was regulated in siscowets and not leans.
Conclusion
In conclusion, it appears clear from the RNA-seq analysis that during lamprey parasitism there is a very strong response in the liver that entails genes involved in the regulation of inflammation and cellular damage. In some cases it looks like genes may be stimulated as a feedback mechanism to the responses being mounted in the host. Overall, the IPA analysis indicates the involvement of pathways related to 1) energy metabolism (glycolysis, gluconeogenesis, lipolysis, lipogenesis); 2) removal and degradation of molecules arising from cellular processes such as apoptosis and oxidative stress; 3) hydromineral balance; and 4) tryptophan degradation (KYN pathway). In fact, several pathways related to tryptophan degradation were observed and we hypothesize that these are actually responses to immune reactivity brought on by the lamprey wounding and may even involve compounds produced by the parasite that are released into the host.
Several of these pathways, including tryptophan degradation, hydromineral balance, and ubiquitination, were shared by both morphotypes, but there were also noticeable differences, particularly in pathways related to carbohydrate and lipid metabolism. There are very large natural differences between leans and siscowets in the levels of carbohydrate and lipid reserves and, therefore, differences observed in these metabolic pathways may depend on these energy reserves and have biological relevance in terms of how the two morphotypes cope energetically with lamprey parasitism.
Animals and lamprey parasitism trials
Lean and siscowet lake trout used for the laboratory lamprey parasitism were part of a common garden rearing study investigating the basis of phenotypic differentiation of these morphotypes that was previously described [22]. Briefly, the original lean and siscowet laboratory lines were derived from gametes of wild adult fish obtained in 2006 from Lake Superior. The fertilized eggs and subsequent juveniles and adults were reared under identical environmental conditions from 2006 at the Great Lakes WATER Institute (GLWI, School of Freshwater Sciences, University of Wisconsin-Milwaukee). Lamprey parasitism experiments for the transcriptomic work were conducted from October through December 2010, when sea lamprey seasonally intensify their feeding to prepare for spawning [61]. Sea lamprey were obtained from commercial fishermen in the Hammond Bay, Michigan and Blind River, Ontario areas and transported to our facilities. All sea lamprey were parasitizing a host at the time of capture to ensure that the sea lamprey used in this experiment were in the parasitic phase. Lake trout were anesthetized individually in 2-phenoxyethanol (Sigma-Aldrich, St. Louis, MO), weighed, and placed in individual covered tanks (265 L) for experimental trials. Each test lake trout was randomly paired with a control lake trout of the same morphotype that remained in its individual tank for the same duration of time but was not parasitized. Test and control lake trout were usually of the same sex, although errors in sex identification did occasionally occur, as lake trout are not obviously sexually dimorphic. The lake trout used for these trials were four years old and not sexually mature. Sea lampreys (N = 4) were randomly chosen, weighed, identified by fin clips, and placed in each test lake trout tank. After the addition of sea lamprey to the test tanks, test and control lake trout were checked three times per day at regular intervals. Once a sea lamprey attached to a lake trout, the other non-attached lampreys were removed from the tank. We estimated sea lamprey feeding duration to be the period from when the sea lamprey was first noted to be attached to when the sea lamprey was first noticed to have detached or was physically detached from the test lake trout. The average parasitism time was 3.2 days for both morphotypes and ranged from 2 to 4 days. After the experimental trial, test and control lake trout were euthanized using an overdose of tricaine methanesulfonate (MS-222) (Sigma-Aldrich, St. Louis, MO). Lake trout and sea lamprey final weights were recorded to aid standardization of parasitism events. The number and type of sea lamprey wounds on the lake trout were characterized, and blood and gonad samples were taken for physiological analyses described separately [24]. A liver sample was taken from each fish, flash frozen on dry ice, and stored at −80°C until RNA extraction.
All experiments were performed in strict accordance with Michigan State University's Institutional Animal Care and Use Committee (IACUC) approved procedures.
Transcriptomic analysis
Total RNA from six liver samples/treatment/morphotype (lean nonparasitized; lean parasitized; siscowet nonparasitized; siscowet parasitized) was extracted on an individual basis using Tri Reagent (Molecular Research Center, Inc.) according to the manufacturer's protocol [62,63]. The RNA was treated with DNase I, cleaned using the RNeasy MinElute Cleanup kit (Qiagen, Valencia, CA), and submitted to the High Throughput Genomics Unit at the University of Washington (Seattle, WA) for sequencing. Individual libraries were constructed using the TruSeq RNA library kit (Illumina) and sequenced (36 bp single end) using the Illumina GAIIx platform (San Diego, CA). Sequences were barcoded and all 24 samples were sequenced in the same lane; this was repeated on different dates for a total of three lanes. For transcriptomic analysis, sequences were combined across all three lanes for each treatment (parasitized vs nonparasitized) per morphotype (lean vs siscowet). All raw sequences are available at NCBI's Sequence Read Archive (SRA) under Project PRJNA316738. Sequences were trimmed for quality (cutoff 0.05) using CLC Genomics Workbench (6.5.1), ends were trimmed for ambiguous bases, and adapters (Illumina) were removed. Sequences less than 20 bp were removed. Sequences from the individual samples were combined with sequences that had been obtained previously from a preliminary pooled experiment on the same samples to produce a de novo assembly using Trinity version r2013-02-25 with default settings [64]. The assembled contigs (42,077, average 577 bp, median 356 bp, Additional file 2) were then annotated using BLAST and NCBI's nr and nt databases [65][66][67]. Individual sequences were mapped to the de novo assembled contigs using CLC Genomics Workbench. Count data for each sample's runs were totaled into a single table for each sample. The count data were then analyzed for gene expression levels and statistical significance using the following R packages: DESeq2 [68] and edgeR [69]. Within the text, gene names are italicized when first referred to and, when available, the HGNC (http://www.genenames.org/) accepted symbol is provided in parentheses. Genes (up- and downregulated/lean and siscowet) that were shared between the DESeq2 and edgeR analyses were GO annotated at the biological process and molecular function levels using Panther [70], which accesses the most up-to-date GO annotations at the Gene Ontology Consortium.
Quantitative Polymerase Chain Reaction (qPCR) analysis
Complementary DNA (cDNA) was produced by reverse transcription in a PTC200 thermocycler (Bio-Rad MJ Research). Oligo(dT) primer (0.25 μg) was added to 500 ng of total RNA in a volume of 5 μl. The mixture was incubated at 70°C for 5 min, and then 4°C for 5 min. Following this, 4 μl of 5× reaction buffer, 2.4 μl of MgCl2 (25 mM), 1 μl of dNTP mix (10 mM), 1 μl of Promega ImProm-II RT, and 6.6 μl of water were added and incubated at 25°C for 5 min, 37°C for 1 h, and 70°C for 15 min. All qPCR reactions were created as master mixes; individual reactions were conducted in duplicate and contained the following: 1.0 μl of cDNA, 10 pM each of forward and reverse gene primers (Additional file 9), and 10 μl LightCycler 480 SYBR Green PCR Master Mix (Roche).
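As a small illustration of the counting step described above (reads mapped to the de novo contigs, with each sample's lanes totaled into a single table), the following sketch combines hypothetical per-lane count files into the kind of contig-by-sample matrix that DESeq2 and edgeR take as input. The directory layout and file names are assumptions, not those of the original pipeline.

```python
import glob
import pandas as pd

# Hypothetical per-lane count files from the read-mapping step, one per sample per lane,
# e.g. counts/lean_par_1_lane2.tsv with columns: contig, count. The names are illustrative.
per_sample = {}
for path in sorted(glob.glob("counts/*_lane*.tsv")):
    sample = path.split("/")[-1].split("_lane")[0]
    counts = pd.read_csv(path, sep="\t", index_col="contig")["count"]
    if sample in per_sample:
        # Sum reads mapped to each contig across the three sequencing lanes
        per_sample[sample] = per_sample[sample].add(counts, fill_value=0)
    else:
        per_sample[sample] = counts

# One row per contig, one column per sample: the count table passed to DESeq2 and edgeR
count_table = pd.DataFrame(per_sample).fillna(0).astype(int)
count_table.to_csv("count_table.csv")
print(count_table.shape)
```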
Cycling and fluorescence measurements were carried out in a LightCycler 480 II qPCR system (Roche) with the following cycling parameters: 1 cycle of 95°C for 5 min; 45 cycles of 95°C for 10 s, 58°C for 10 s, and 72°C for 10 s. Raw data were processed with Real-time PCR Miner [71]. Quantification was performed by calculating the relative mRNA concentration (R0) for each gene/individual sample. Briefly, this was calculated using the following equation: R0 = 1/(1 + E)^Ct, where E is the gene efficiency, calculated as the average of all individual sample efficiencies across all reactions for a given gene/qPCR plate, and Ct is the cycle number at threshold [71]. The R0 for each gene was normalized to an actin control R0 from each individual sample. Data were tested for normality, and differences between means for parasitized and nonparasitized leans and siscowets were analyzed by Student's t-test.
IPA analysis
Complete sequences obtained from the edgeR analysis were uploaded to Ingenuity Pathway Analysis (IPA) to analyze potential biochemical and physiological pathways that were being regulated in the liver during lamprey parasitism (IPA®, QIAGEN Redwood City, www.qiagen.com/ingenuity). Padj values of ≤0.05 were used for all IPA analyses and the significance of potential pathways was analyzed in IPA using the Benjamini-Hochberg method [72], which provides corrected p values to control the false discovery rate. The results from edgeR rather than DESeq2 were used since they were conservative in terms of the total number of genes that were regulated but larger than the gene list from the intersection of the DESeq2 and edgeR analyses. Gene pathway names are taken verbatim from IPA and are italicized when referred to in the text.
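The relative quantification described above (R0 = 1/(1 + E)^Ct, normalized to an actin control and compared by Student's t-test) can be written out in a few lines. The sketch below uses invented Ct values and efficiencies purely to show the arithmetic; it is not data from the study.

```python
import numpy as np
from scipy import stats

def r0(ct, efficiency):
    """Relative mRNA concentration R0 = 1 / (1 + E)**Ct (Real-time PCR Miner convention)."""
    return 1.0 / (1.0 + efficiency) ** np.asarray(ct)

# Hypothetical Ct values and plate-averaged efficiencies for a target gene and the actin control
target_eff, actin_eff = 0.95, 0.92
ct_target = {"parasitized": [24.1, 23.8, 24.5, 23.9, 24.2, 24.0],
             "control":     [26.3, 26.8, 26.1, 26.5, 26.9, 26.4]}
ct_actin  = {"parasitized": [18.2, 18.0, 18.4, 18.1, 18.3, 18.2],
             "control":     [18.1, 18.3, 18.0, 18.2, 18.4, 18.1]}

# Normalize each sample's target R0 to its own actin R0, then compare groups
normalized = {grp: r0(ct_target[grp], target_eff) / r0(ct_actin[grp], actin_eff)
              for grp in ("parasitized", "control")}

t, p = stats.ttest_ind(normalized["parasitized"], normalized["control"])
fold = normalized["parasitized"].mean() / normalized["control"].mean()
print(f"fold change ~ {fold:.2f}, p = {p:.3g}")
```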
On the mean cluster size of a network of cracks Brittle polycrystalline materials such as rocks, ceramics, and certain metals contain microcracks that can grow and coalesce under sufficiently high stress, resulting in failure and, possibly, fragmentation. Such processes are idealized in this paper by treating the cracks as circular disks whose coalescence forms clusters and can terminate growth after a number of intersections. A non‐linear recurrence relation for the probability of cluster size is developed and solved by means of a generating function, providing information on the mean size of the crack clusters and the standard deviation. This solution leads to a simple expression for the percolation threshold. The probability of clusters of size n is also determined. Above the percolation threshold the probabilities of finite and infinite clusters are treated separately. Explicit expressions for the probabilities can be approximated by taking a Laplace transform in a simple case, thus clarifying the behaviour of the solution. Appendices show the relation of the theory to practical problems, Monte Carlo approaches, the probability of infinite clusters, and discuss the uniqueness of solutions to such geometrical problems, i.e. Bertrand's Paradox. Copyright © 2005 John Wiley & Sons, Ltd. INTRODUCTION When brittle materials are subjected to high stresses, isolated cracks grow and intersect other cracks at random, forming sets of connected cracks, or clusters. The probability of a cluster of size n is examined in what follows. A recurrence relation that uses the probability of intersection in accounting for the coalescence of crack sets is developed. For this purpose it is convenient to idealize cracks as being penny-shaped (a term coined by the British). A simple analytic expression for the mean size of a crack set is obtained by means of an appropriate generating function. Fracture mechanics is not necessary here}it is only necessary to take the probability p of intersection as given, much as the probability of collisions is given in statistical mechanics. This probability can be deduced from geometric considerations [1], and is discussed briefly in the appendices. In this effort the cracks are considered to be distributed homogeneously and isotropically; the theory is being adapted to more complex distributions such as the coalescence of nearly parallel cracks into a nearly plane fault of the type occurring, for example, in spall [2] and in earthquakes [3]; though there have been numerous observations, the formation of random crack networks has not been accounted for theoretically in previous work. The coalescence of cracks is a complex topic that has not received a great deal of attention because of its complexity and the variety of possibilities. In the coalescence studies by the author, two kinds of coalescence are distinguished, the formation of T and X cracks. When separate cracks are so oriented that their growth leads to a T intersection, growth has been terminated, as can be seen in photographs and micrographs of damaged materials. This is considered to be an effect of blunting of the intersecting crack by the one intersected. Another possibility, an X-crack, is that the cracks are oriented so that the edges cross. In this case there is no blunting and the cracks are free to grow, almost as though no intersection had occurred. The former case is thought to represent the dominant effect in the formation of crack networks, while the latter is not accounted for here. 
The validity of geometric probability is sometimes viewed with skepticism because of Bertrand's paradox, which argues that the solution to a certain problem in statistical geometry is not unique; but, in fact, the paradox has been clearly resolved by Jaynes [4], who argues that a physically well posed problem will have a unique resolution. This is briefly discussed and an alternative approach with the same conclusion is proffered. The coalescence of cracks is a complex process that is considerably idealized here. It is the basis of a more general theory, Statistical CRAck Mechanics (SCRAM) developed by the author with a variety of colleagues [5][6][7][8][9][10][11]. We believe that it can provide useful insights into dynamic failure by accounting for the opening, shear, growth and coalescence of an ensemble of cracks, accounting for such phenomena as the sensitivity of explosives and propellants (XDT) and the dynamics of fragmentation in rocks and ceramics. The size of crack clusters is an important aspect of SCRAM, making it possible to account for many details of damage and brittle failure. FORMULATION AND RESOLUTION OF THE RECURRENCE RELATION Assume that the circumference of a growing crack C can intersect a maximum of a other growing cracks, as illustrated in Figure 1. (After a intersections the circular crack tip is assumed to have become blunted in several places and, as a result, is no longer capable of growth.) Evidence for the termination of crack growth can be seen in detailed maps and micrographs such as Figure 2 [12]. Denote the probability of an intersection by p. Different portions of the edge of C intersect different cracks independently. Denote the probability of intersecting b different cracks out of a possible a by Q b a . Of course, the probability that the number of cracks intersected is one of the numbers 0 to a must be unity, i.e. Q b a is the bth term in the binomial expansion of ðð1 À pÞ þ pÞ a : The probability of no intersection, which is also the probability that a crack belongs to a set of size 1, is Q a 0 ¼ ð1 À pÞ a : Now, a crack belonging to a set of n cracks can intersect other cracks to form a crack set of size n þ 1 in several ways. It can join another crack that belongs to a set of size n, forming the end of a series. Or it can join two smaller sets such that the total number of cracks in the two sets is n. Or it may join three sets whose total is n cracks, and so on. The number of sets it may join is arbitrary, in principle, but in practice it is thought unlikely to join more than 4 other cracks. There is some experimental evidence that the actual number lies between 3 and 4, as discussed in Appendix A. In the current analysis the number of possible intersections is considered to be a fixed integer, a. The probability that a crack C intersects exactly one other crack is denoted by p. (This probability can be computed using the idea of a crack mean free path [1].) When growth terminates in one portion of the circumference, that local relief of stress does not inhibit growth at remote points on the circumference. In what follows the problem of determining the expected number of cracks in a connected set is addressed. It will be shown that when p reaches a critical value the expected number of cracks in a connected crack set (a cluster) becomes infinite, and that, above the threshold, cracks are divided into finite and infinite clusters. 
Example: two intersections To illustrate the main ideas, consider first the case when growth of crack C terminates after only 2 intersections, i.e. a ¼ 2: The intersection probabilities are: Figure 1. Crack C grows until it intersects X, Y, and Z, cracks that cause C to transition from an unstable to a stable (inactive) crack as a result of the blunting. Crack P intersects C, but has little effect on the behaviour of C in this approach. Fluids (reaction products in explosives and propellants, oil or water in reservoirs) can readily penetrate connected cracks, which largely control the permeability. (This idealization is actually realized by a set of line segments in a plane and its consequences are discussed further at the end of this section.) A set of cracks of size n þ 1 can be formed by the growth of C in one of 2 ways. First, by the intersection with one other crack belonging to a set of size n. Second, by intersection with 2 cracks that belong to sets of size m and n À m; thereby joining them. Now, let R n denote the probability that a crack belongs to a set of size n. (The goal of this work is to find R n .) Also let R ðiÞ n denote the probability that C combines with i crack sets such that the total number of cracks sums to n. The probability that a crack belongs to a set of size n þ 1 is the sum of the probabilities for these two (mutually exclusive) cases. Briefly, C either intersects only one crack and that crack either belongs to a set of size n, or it joins 2 cracks belonging to sets whose sizes Reproduced by permission of Barton [12]. sum to n Now, Q a n is given above, and R ðiÞ n can be readily computed in terms of R n leading to It is convenient to define R m ¼ 0 for m51 here and in the calculations that follow. Here, this avoids a special treatment of the last term when n ¼ 1; the last term does not contribute to R 2 since it is only meaningful when 2 crack sets are joined by a third, a situation that does not contribute to clusters of 2, the occurrence involved in forming R 2 (n ¼ 1). Before addressing the solution of this system it is appropriate to verify that Equation (4) leads to credible values for the probabilities}especially since this verification illustrates an approach leading to an exact solution for R n . Verification involves summing Equations (4) from n ¼ 1 to infinity and making use of the quantity f defined by This sum should turn out to be equal to 1, since this is the probability that one of all possible set sizes obtains. Carrying out the sum over all n in Equation (4) leads to Changing the order of summation is natural, but as written would require a careful consideration of the limits. An alternative is to define R m for general m, including zero and negative values, letting it be zero for m negative or zero. Then this equation can be rewritten as The order of summation can be reversed in this equation, in view of the premise of Equation (5) that f ¼ 1; so that This can be rewritten as Now, using Equation (5) and the convention that R n is zero when n is not positive, this becomes Furthermore, the probability that a crack belongs to a set of size 1 (that is, it is not connected to any other) is R 1 ¼ ð1 À pÞ 2 : Then This quadratic equation in f can be readily solved; but it can be seen by inspection that f ¼ 1 is indeed a solution. The remaining solution is important, but its role is deferred to a later section when the percolation threshold is discussed. 
A modification of the preceding analysis leads to an exact solution of Equation (4) for the R n . We define a generating function This implies the concomitant relations which are used below. Note that the derivative F s ð1Þ ¼ P 1 n¼1 nR n ¼ % n is the expected number of intersections. Multiply Equation (4) by s nþ1 ; sum from 1 to 1, and proceed as before, changing the order of summation and using the definition of R n for non-positive n. Then the result becomes where we have reverted to the more general form of the intersection probabilities. This can be expressed as a quadratic equation in F in the form As for Equation (11), this equation can be solved explicitly for F(s), but an explicit solution is unnecessary for the current objective, which is to determine the expected size of a crack set. The expected set size is given by The derivative in brackets is written as F s . At s ¼ 1 it can be obtained by differentiating Equation (15) and recalling that Fð1Þ ¼ 1 Solving for F s (1) Substitution of the expressions given by Equations (2b) and (2c) for Q 2 1 and Q 2 2 into the expression above leads to Thus, the expected set size is unity for p ¼ 0 and infinite for p ¼ 1 2 : At the critical point p ¼ 1 2 ; f ¼ 1 becomes a double root of Equation (11). The results apply for termination of penny-shaped cracks when two intersections terminate growth, but this was intended as an illustration and not as a likely failure criterion. It does determine, however, when a plane becomes divided in parts by line elements in the plane. To examine this further, the probability that the segments formed by intersections of 3-dimensional cracks with a plane will themselves intersect is found in Appendix B, and is p ¼ 1 À e Àp 2 N 0 % c 3 : But, when a ¼ 2 the set size becomes infinite when p ¼ 1 2 : It follows that a critical number of intersections form in a plane when N 0 % c 3 ¼ 0:0702; and we may surmise that when the cracks satisfy this criterion, a failure plane would form shortly thereafter in a damaged material. Though the set is infinite in extent, it consists only of line segments, which are not individually unstable; thus, this criterion fails to define complete fragmentation, but it suggests that failure is imminent. It is interesting to compare this with a more intuitive estimate described in Appendix C in which failure is said to occur in a plane P when the mean projected area per unit area (PAPUA) of the cracks that intersect P is sufficient to cover the plane, i.e. PAPUA ¼ 1: This occurs for N 0 % c 3 ¼ 0:0795; only slightly 'later,' if the cracks are considered to be growing. Example: three intersections Consider now the case a ¼ 3: Equation (4) can be generalized to As when a ¼ 2; multiply Equation (20) by s n and sum on n to get an equation for F. Then the difference equation becomes where R 1 ¼ ð1 À pÞ 3 : The Q 3 n are: where we use standard results for the binomial coefficients C n m : As before, differentiating F with respect to s, setting s ¼ 1; and solving leads to A general relation for the binomial coefficients that is useful in evaluating this expression but is not given in standard compendia can be derived. The proof begins with the binomial expansion where C n m is the binomial coefficient, as discussed in connection with Equation (1). This relation can be used to obtain the final result and can also be used for general a. 
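The in-plane criterion quoted above is easy to verify numerically: with the Appendix B result p = 1 − exp(−π² N₀ c̄³) and the a = 2 threshold p = 1/2, the critical value is N₀ c̄³ = ln 2/π². A minimal check (the variable names are ours, not the paper's) follows.

```python
import math

# Appendix B: probability that a crack trace in the plane intersects another trace,
#   p = 1 - exp(-pi**2 * N0 * cbar**3)
# Setting p equal to the a = 2 percolation threshold of 1/2 and solving for N0*cbar**3:
critical_N0_cbar3 = math.log(2) / math.pi ** 2
print(f"N0*cbar^3 at p = 1/2: {critical_N0_cbar3:.4f}")  # ~0.0702, as quoted in the text

# The intuitive PAPUA = 1 criterion of Appendix C is quoted as 0.0795, i.e. only
# slightly 'later' than the percolation-based criterion if the cracks are growing.
print(f"ratio of PAPUA criterion to percolation criterion: {0.0795 / critical_N0_cbar3:.2f}")
```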
General result for cluster size Clearly, the procedure leading to Equations (17) and (24) can be generalized to obtain the general result The cluster size becomes infinite when p assumes the critical value 1=a: This can be considered a percolation threshold, though this is not exactly percolation theory since: (1) the cracks are not on a regular lattice; (2) termination of growth involves a sequence of events; and (3) the intersections are not symmetric bonds; thus, the physical picture is rather more complex than in standard percolation theory. It is shown in Appendix D that the critical value of p can be derived by an alternative, and simpler, line of argument. Standard deviation of cluster size The specific object of this work is to compute the mean number of cracks in a cluster for use in damage calculations, but this result is useful from a practical (engineering) point of view only if the standard deviation s is of modest size compared to the mean. To compare these quantities we require the second derivative of F, F ss , since with angle brackets used to denote the mean for typographical reasons. Copyright Then The standard deviation s is equal to the mean cluster size when For larger values of p the standard deviation is large enough that the meaning of the mean is somewhat dubious. For a ¼ 3 and Thus, although the percolation threshold p ¼ 0:333 is of considerable interest as the limit when the expected size of a cluster becomes infinite, in practical terms a structural element would be exceedingly dangerous at the much lower threshold when p L ¼ 0:183: In fact, even this would be dangerous, since the probability of a cluster whose size exceeds n ¼ 6 is 1 À P 6 n¼1 R n ¼ 0:0487; as shown in the section that follows. Probability of cluster of size n The object of this paper is to determine the expected number of cracks in a connected crack set (cluster), which is given by Equation (30). However, the methodology can be used to determine explicitly the probability, R n , that a cluster contains n cracks. For this purpose Equation (15) or (24) can be used for a equal to 2 or 3. On substituting the series of Equation (12) into these equations, the coefficient of each succeeding power of s is set to zero to find a new R n . A few results are listed for a ¼ 2 and 3. For a ¼ 2 ; almost 3% of the clusters involve more than 6 cracks. These expressions for cluster size do not show any special behaviour at or above the percolation threshold at p ¼ 1 2 ; though they do vanish at p ¼ 0 and 1 as expected. Based on direct calculation the sum of the R n does not converge to unity for large p as we might naively expect, but to a smaller value. However, this smaller value agrees with the probability of a finite cluster, which is calculated in Appendix D. For example, for p ¼ 3 4 the sum of these 6 terms is 0.1079 while the probability that the cluster is finite based on Appendix D is 0.1111. The smaller value, ð1 À pÞ 2 =p 2 ; is the second root of Equation (11), 1 9 in the example of p ¼ 3 4 : Thus, the analysis divides the crack sets into separate categories (though this was not implicit in the formulation). This result is supported in the next section, which addresses the solution by Laplace transforms for the case when a ¼ 2: For a ¼ 3 For p ¼ 1 10 ; the sum of these 6 terms sums to 0.9971, while for p ¼ 1 4 the sum is 0.8424. Thus, for p ¼ 1 4 more than 15% of the clusters involve more than 6 cracks. 
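The cluster-size probabilities listed above can also be checked by simulation. One reading of the model, assumed here only for illustration, is that each crack in a growing cluster independently joins up to a further cracks, each with probability p; growing many clusters this way and tabulating their sizes should reproduce the R_n below the threshold p = 1/a. This is a numerical cross-check, not the paper's analytic solution. For a = 3 and p = 1/4, for example, the empirical partial sum over the first six sizes should come out close to the 0.8424 quoted above.

```python
import random
from collections import Counter

def cluster_size(a, p, cap=10_000):
    """Grow one cluster: every crack in it can join up to `a` further cracks, each with probability p."""
    size, frontier = 1, 1
    while frontier and size < cap:   # cap guards against (near-)infinite clusters above the threshold
        new = sum(1 for _ in range(frontier * a) if random.random() < p)
        size += new
        frontier = new
    return size

def estimate_Rn(a, p, trials=200_000, n_max=6):
    counts = Counter(cluster_size(a, p) for _ in range(trials))
    return [counts[n] / trials for n in range(1, n_max + 1)]

random.seed(1)
for a, p in [(2, 0.4), (3, 0.25)]:
    Rn = estimate_Rn(a, p)
    print(f"a={a}, p={p}: R_1..R_6 ~ {[round(r, 4) for r in Rn]}, partial sum {sum(Rn):.4f}")
```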
For p above the percolation threshold, the sum of these terms is essentially the value of Q obtained from Appendix D; e.g. for p ¼ 3 4 the sum and Q both round off to 0.0183. Asymptotic behaviour of cluster size in the continuous approximation We can examine the asymptotic behaviour by making use of an approximate solution to the governing equation in the case a ¼ 2: The approximation consists of taking the integer index n to be a continuous variable, and representing the difference R nþ1 À R nÀ1 by twice the derivative R 0 ¼ dR=dn in order to make the difference equation centred at n. The difference equation becomes a (non-linear) differential equation which, as it turns out, can be integrated exactly by means of Laplace transforms. The global properties of this approximate solution turn out to be surprisingly good, though the individual values of the R n are not well estimated. Equation (4) becomes with b ¼ 2pð1 À pÞ and It is more natural when dealing with Laplace transforms to make use of an independent variable ' that ranges from zero to infinity, and a slightly different function * R that is continuous as ' ranges from 0 to infinity * appropriate, of course, only for integer values of ': Also put * Rð' À 1Þ ffi * Rð'Þ À ðd * R=dnÞDn (with Dn ¼ 1) and Replacing the sums by an integral, we can write and, as before, * Rð'Þ ¼ 0; for '50: Then the difference equation becomes with the term on the left denoting the derivative. Letting an overbar denote the Laplace transform, using the convolution (Faltung) theorem, and noting that * Rð0Þ ¼ q 2 ; this becomes Let us define an auxiliary function T(t) with the Laplace transform [13] % where I 1 ðzÞ denotes the modified Bessel function of order 1. Power series, asymptotic series, mathematical relations and tables for I 1 are well known [14]. Solving the quadratic equation (44) and using the shift rule of Laplace transforms, the transformed solution of the differential equation is and the direct solution is This solution exhibits (surprisingly?) many properties of the exact solution of Equation (4). Specifically (a) The total probability is unity for p below the percolation threshold; Q, the probability that a cluster is finite (Appendix D), above the percolation threshold, as found from different considerations in exceeds the exact value by only 1. The probability that a crack is isolated (a cluster of 1) is exact (c) Using the known asymptotic expansion for I 1 [14] the asymptotic series allows us to write where j is near unity for large ': Note that the exponent is negative for all values of p except p ¼ 1 2 ; where it vanishes. This expression shows why the mean cluster size is infinite when p ¼ 1 2 ; the exponent in the expression above vanishes and the series is divergent. Though the infinite sum for mean size diverges, the individual asymptotic expressions for cluster size are valid. However, a typical cluster size using Equation (41) is only roughly approximated. For example, for p ¼ 0:4 * Rð5Þ ffi 0:05121; R 6 ¼ 0:03784 i.e. the probability of a cluster of size 6 is too high by 35%. A computer program to calculate cluster size and the moments using a numerical recipe for the modified Bessel function [15] shows that the integral for the zeroth moment is precisely 1 (to at least 5 figures) below the percolation threshold and precisely equal to the probability of a finite cluster (Appendix D) above the percolation threshold. 
The first moment agrees with the exact value given by Equation (30) rather than the value inferred from the approximation of Equation (49). Apparently, the approximations made by representing the sum as an integral and the exact R n by * R just cancel out. DISCUSSION This analysis has several curious aspects. First, solutions by means of generating functions are not uncommon, but they usually address linear systems. The governing difference equation herein is highly non-linear, but still allows for an exact solution, at least in the sense that the generating function F(s) is the solution of an algebraic equation of order a, and its moments can be found by patient differentiation of F(s). A second curiosity is that the analytic approach is relatively straightforward, while numerical (or experimental) modelling would be decidedly awkward since it would be hard to decide when numerous intersections of circular disks (penny-shaped cracks) occur in a computer model. Even if such a geometry were simulated, the result would not be a general formula for set size but some isolated number. (Such an effort is discussed in a subsequent paragraph.) However, it is especially desirable to have a result in the simple closed form found here since it is needed in finite-element SCRAM calculations in which the coalescence calculation should represent only a modest part of the total finite-element calculation. The closed form may be useful in computing the permeability of brittle materials such as rock [7,16]. A third curiosity is that the calculation of cluster sizes is valid beyond the percolation threshold, with the sum of the probabilities of finite sets and the probability of infinite sets totaling unity. This feature was not included explicitly in formulating the equations, but is an interesting consequence. This holds for both the exact solutions and the approximate Laplace transform approach, though a mathematical demonstration has not been found since this was observed only in the last stages of preparation of this article. The crack coalescence model is essentially different from those envisaged in most applications of percolation theory because the intersections are not symmetric (Figure 1). In particular, if crack A grows and intersects crack B, A becomes blunted (especially if B is open) and its growth becomes limited in that direction. However, the effect on crack B is generally small, since A intersects a low-stress region of B, provided it is not near an edge of B. We can expect A to terminate its growth after 3 or 4 intersections, but 3 or 4 intersections of crack B will have only a modest effect on its behaviour. This is the asymmetry mentioned above. The value of a can be estimated from measurements of failure in polymers by Zhurkov and Kuksenko [17], which are summarized in Appendix A. The average value of 3.34 lies between 3 and 4, as one might expect intuitively. This kind of behaviour is of the type addressed in percolation theory in having a critical probability. It is different in not involving a regular array of sites (cubic, for example). The random crack ensemble is thought to reflect real problems in crack statistics better than a regular array, which does not normally occur in material damage. The networks described in the current formulation differ from those addressed in classical percolation theory in several respects. 
In particular, since the clusters consist of cracks of varying size, they will not contain any definite area, nor do they represent any definite volume since the cracks are considered very thin. Clusters may penetrate one another. This behaviour is thought to represent that of actual polycrystalline materials, which can contain a great variety of microcracks. Though the clusters may not define specific volumes, in themselves, they may form fragments when the cracks have grown sufficiently, with the cracks as faces. Thus this approach differs from standard fragmentation analyses, which focus on the volumes of fragments and represent their distribution statistically. This subject is related in a general way to the gelation phenomena studied in polymer chemistry, but the differences are so substantial that a detailed comparison would not be fruitful. The view of fracture and fragmentation taken in this paper and, more broadly, in SCRAM differs from that of Robinson [18] and numerous others, many cited by Sahimi [19] and Herrmann and Roux [20]. In particular, it is not assumed that cracks can cross one another with impunity but, rather, cracks may terminate when an edge encounters the face of another, causing it to blunt. If it were not for this effect, samples of brittle material would not fail gracefully but would shatter after the first crack becomes unstable, for cracks become increasingly unstable as their size increases. Of course, some materials, typically not polycrystalline, do shatter. The initiation and arrest of crack growth by coalescence with other cracks is believed to be the source of the acoustic emission observed in rock mechanics testing. The terminated cracks (T-cracks) are illustrated in Figure 2. On the other hand, Robinson and others concerned with percolation permit unlimited crack crossing in their computational simulations. Nevertheless, a connection can be made with Robinson in his Case V of a series of Monte Carlo calculations. In that case only 2 intersections per crack were allowed. Rather than penny-shaped cracks he uses squares as the elementary defects, but the model is not essentially different from assuming penny-shaped cracks with a = 2. He finds that the percolation threshold is attained at 1.231 squares per unit volume. From this, using the methodology of Appendix B, it follows that the mean free path is l = 2/N = 1.625. Now, the mean length L in a unit square is 1.122 units. Then, the critical probability of intersection in the Robinson computer simulation is p_c = 1 − exp(−L/l) = 0.514. On the other hand, the current theory (Equation (30)) concludes that for a = 2, p_c = 1/a = 0.500. This is considered satisfactory agreement in view of the difference in geometry and concept. It is interesting that most of Robinson's (substantial) computing time is concerned with finding intersections, as anticipated in the second paragraph of this discussion. The current approach is practical in a finite-element calculation where a million elements may be tracked, whereas a Monte Carlo type analysis would be quite impractical. Bertrand's paradox [21], cited in most texts on probability, implies that there may be a variety of solutions to problems in geometric probability, raising doubts about the validity of solutions to such problems.
This is discussed in Appendix E, which suggests that use of the term 'random' raises semantic difficulties concerning just what is meant, but the paradox has been resolved at length by Jaynes using indifference arguments; he devotes some 11 pages of discussion to this subject. An alternative and simpler argument is presented in Appendix E that resolves the paradox by inverting the problem. It is emphasized that the uniqueness difficulty is not essential when problems are sufficiently well posed, which involves taking a more physical than mathematical point of view. CONCLUSION It is feasible to represent a random network of cracks with an analytic model that accounts for coalescence and, thereby, the formation of clusters. Simple expressions describe the mean crack size and standard deviation, and cluster size has been determined both below and above the percolation threshold. An approximate solution can be developed by assuming the size is a continuous function and using Laplace transforms, but it has been examined in detail only in the case when crack growth terminates after 2 intersections per crack. The transform solution accounts for the asymptotic behaviour surprisingly well, and provides useful information even when the percolation threshold is exceeded. The mean set size can be used in formulating constitutive laws that account for coalescence, damage, and fragmentation. In this paper only isotropic crack distributions are considered, but it is possible to extend the approach to account for crack distributions with preferred orientations, due either to anisotropy of the matrix material, or to anisotropy resulting from dominant growth of cracks in particular orientations as a result of the state of stress. This makes it possible to estimate the probability of failure following complex loading paths. The details of brittle failure are not predictable in detail, but statistical predictions may be useful in estimating risks. Experiments to track failure processes in greater detail and to examine, for example the relation of X and T-shaped cracks would be most useful. The behaviour of static and dynamic failure may be quite different; this analysis is directed at dynamic failure where crack growth tends to be planar. APPENDIX A: EXPERIMENTS OF ZHURKOV AND KUKSENKO Zhurkov and Kuksenko [17] investigated the formation of submicroscopic cracks in polymers by small-angle X-ray scattering. The analysis of their data is based on the theoretical result where N denotes the number of cracks per unit volume; L, crack size; f, scattering angle; A and B are constants. A plot of lnðdI=dfÞ vs f 2 allows the authors to determine crack size L from the slope, and the concentration of cracks from the intensity at f ¼ 0: They considered the growth of cracks under load in 8 polymers and arrived at many useful conclusions. One is that the density N does not change significantly during loading. (In an example, N varies from 1e15/cc to 5e15/cc.) This shows that growth dominates nucleation in these situations, resolving a criticism of SCRAM theory to the effect that crack nucleation may (or does) dominate crack growth. They also show that the growth process is thermally activated. Their conclusion that NL 3 at rupture remains roughly constant as the loading and rate of loading are varied is especially relevant to the coalescence and percolation discussed in this paper. (Though in no case would the loading be considered dynamic.) The value of NL 3 averaged 0.0377 for the 8 materials considered. 
Excluding the Acetobutyratcellulose outlier, the remaining seven materials had values of NL 3 at rupture that varied only by a factor of 4, ranging from 0.0200 to 0.0808. This, in spite of variations in L from 0.009 to 0.3 microns and in N from 1e12 to 9e16 (Table AI). The theoretical result of Dienes [8] relevant to these tests is that where a is the number of crack intersections at the critical condition (rupture). From this relation it is found that a ranges from 2.20 to 4.43, excluding the outlier, a range of values largely consistent with The distribution function for random penny-shaped cracks of radius c in three dimensions is taken to be as frequently assumed in the geophysics literature [3,22] and references cited by Dienes [23]. The ensemble of penny-shaped cracks (PSCs), is considered to be randomly distributed, by which we mean that the distribution is homogenous and the orientations, isotropic; location, orientation and size are taken to be uncorrelated. The number of cracks per unit volume in the range of sizes (c, c þ Dc) with orientation O and solid angle DO is LDcDO: Using the method described by Dienes [1] it can be shown that the mean free path in the space occupied by these cracks is 1=2pN 0 % c 2 : The expected number of intersections of a random line of length L with such cracks is 2pN 0 % c 2 L: A Poisson distribution governs the probability of intersection, which is p ¼ 1 À expðÀ2pN 0 % c 2 LÞ: We turn now to the intersections of the PSCs with a random plane, P. Some members of the ensemble cited above will intersect P, forming line segments of length '; as discussed by Dienes [23]. It is shown in that paper that the number of line segments in P per unit area is ðp=2ÞN 0 % c and the mean length of the line segments per unit area is Q ¼ ðp 2 =2Þ% c 2 N 0 : (The character of the distribution is quite unlike the exponential distribution of PSC radii, starting at 0 for ' ¼ 0:) Since their orientation is random as well, the projected length of the line segments on a fixed line in P is 2Q=p per unit area, and this is the 'frequency' n with which a line in P intersects the segments. The reciprocal, l ¼ 1=n ¼ 1=p% c 2 N 0 ; is the mean free path in P. The expected number of intersections of a line of length L with the segments in P is nL: The intercepts have a Poisson distribution and, consequently, the probability of no intersection for a line of length L is exp(ÀnL); the probability of intersection is, then, 1 À expðÀnLÞ: The distribution of segment sizes m per unit area is determined in the reference and is given by where K 0 is a Hankel function and m ¼ m=2% c: The distribution of sizes for a particular PSC is given by nðmÞ=ððp=2ÞN 0 % cÞ ¼ mK 0 ðmÞ=2% c: Then the mean size is [23] % m ¼ Then the exponent in the expression for the probability of an intersection of one segment with another is n % m ¼ p 2 N 0 % c 3 and This result is used to compare the current theory with a Monte Carlo approach [18] in the text. In experimental studies of spall a critical impact condition is determined at which a mass of material becomes separated from the main mass. Using statistical crack mechanics (SCRAM) we have information on the changing distribution of cracks, but this does not in itself determine a sharp failure criterion. Some functional of the distribution is required that determines when physical separation of the material takes place. Such a functional is determined in this Appendix. 
This is somewhat different in spirit from the main thrust of this paper, which is to determine the expected size of crack sets resulting from coalescence and a critical probability, but this result provides an interesting comparison concerning separation of a slab of material. A random isotropic distribution of penny-shaped cracks intersects a plane P forming line segments in the plane, as discussed above [23]. Under some conditions the cracks near P can combine to cause complete failure in that plane under tensile stress (spall). We seek the projected area per unit area (PAPUA) of the cracks that intersect P, which provides a measure of damage in the plane. Let denote the number density of cracks exceeding c in radius per unit volume per 2p: The total number of cracks per unit volume is, then, 2pL 0 : The cracks that intersect the plane P at z ¼ 0 are ones that satisfy the condition z=sin y4c ðC2Þ where z denotes the height of the centre of the crack above the plane and y is the angle the crack makes with the plane or, equivalently, the angle of the crack normal with the z-axis. In the space defined by these variables the inequality above defines a subspace within which intersection occurs. The projected area of the cracks intersecting the plane is the integral being taken over the appropriate subspace, and O representing symbolically the orientation of the crack normal. Also, dO represents the element of area on the unit sphere. It is assumed that the distributions of z; c; f; and y are uncorrelated, which implies that the distribution function can be expressed in the form where now the symbolic representation of the crack normal is replaced explicitly by the polar co-ordinates y and f. Using Equation (C1), the inequality, and the element of area on the unit sphere, the integral becomes The integration is straightforward, leading to APPENDIX D: PERCOLATION THRESHOLD Consider the crack C, which may be connected to any of a other cracks with probability p, as indicated conceptually in Figure D1. Let the probability that C does not belong to an infinite set of cracks be denoted by Q. Then the probability that C is either not connected to C 1 or, if it is, that C 1 does not belong to an infinite set is assuming that the distribution of cracks is homogeneous and isotropic. The probability that C does not belong to an infinite set through any of the a possible connections is These equations can be combined into the following: Figure D2. The probability P that a crack belongs to an infinite set as a function of p, the probability of intersection. What is the probability of C not belonging to an infinite set? One trivial solution can be found by referring to Equation (D1), which is obviously satisfied by Q ¼ 1: Another is found by dividing out the factor 1 À Q 1=a with the result The probability that a crack belongs to an infinite path is 1 À Q ¼ P: A graph of the behaviour of P(p) is given in Figure D2, with the dark line representing the two branches of the solution that have been found. The point where the branches join is the critical point, at which there are just enough intersections to make an infinite path possible. When p ¼ 1; the set size is infinite with probability 1. 
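The two branches of Figure D2 can be reproduced numerically. Under the same illustrative branching reading used earlier (each crack independently connecting to up to a others with probability p), the probability Q of not belonging to an infinite set satisfies the fixed-point relation Q = (1 − p + pQ)^a; the sketch below is ours, not a transcription of the Appendix D equations, but it recovers the values quoted in the text: Q = 1 below p = 1/a, Q = (1 − p)²/p² = 1/9 for a = 2 and p = 3/4, and Q ≈ 0.0183 for a = 3 and p = 3/4.

```python
def finite_cluster_probability(a, p, iters=10_000):
    """Smallest non-negative root of Q = (1 - p + p*Q)**a, by fixed-point iteration from Q = 0."""
    q = 0.0
    for _ in range(iters):
        q = (1.0 - p + p * q) ** a
    return q

for a, p in [(2, 0.25), (2, 0.75), (3, 0.75)]:
    q = finite_cluster_probability(a, p)
    print(f"a={a}, p={p}: Q = {q:.4f}, P(infinite cluster) = {1.0 - q:.4f}")
# a=2, p=0.25 -> Q = 1.0000 (below the threshold p = 1/a every cluster is finite)
# a=2, p=0.75 -> Q = 0.1111 = (1 - p)^2 / p^2
# a=3, p=0.75 -> Q = 0.0183
```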
APPENDIX E: BERTRAND'S PARADOX AND THE ISSUE OF RANDOMNESS

Following Bertrand [21], numerous other authors have suggested that a variety of solutions to the following problem are valid: if a long straw is tossed at random onto a circle, what is the probability that the chord thus defined is longer than a side of the inscribed equilateral triangle? The following description of this paradox is given by Jaynes [4]. Three approaches to selecting a random chord are to assign a uniform probability density to (A) the linear distance between the centre of the chord and the centre of the circle; (B) the angles at which the chord intersects the circumference; and (C) the position of the centre of the chord over the interior area of the circle. These assignments lead to the results p_A = 1/2, p_B = 1/3 and p_C = 1/4, respectively.

A rather detailed discussion of the problem is given by Jaynes. He points out that the specification that a distribution is random is not sufficient to make it unique, and the apparent randomness leads to the paradox. He cites the views of ten authors, prominent mathematicians, who agree that the various approaches to specifying randomness lead to various solutions. On the other hand, argues Jaynes, a physical view would suggest that the answer has to be unique. Jaynes discusses his approach at length using indifference arguments and arrives at a unique solution. His solution agrees with a test he performed with a colleague that involved tossing straws onto a circle, confirming with 'embarrassing accuracy' selection A above. I had independently addressed this problem by considering the random line to be generated by selecting two points in the plane at random and examining the consequences, with the same conclusion. Subsequently, I discovered Jaynes' masterful analysis on the internet, where the controversy rages on in spite of Jaynes and common sense.

I suggest here an alternative and more intuitive resolution of the paradox that, necessarily, concurs with Jaynes. Rather than toss straws, or select chords at random, invert the problem by considering circular hoops tossed onto a grid of parallel lines whose spacing equals the diameter of the hoops. This is tantamount to putting the observer on the straw rather than on the hoop. Recall that the sides S of an equilateral triangle inscribed in a circle bisect the radii perpendicular to them. So, if the centre of the hoop lies less than half a radius from the intersected grid line, the chord generated by the toss is longer than the sides S. This will occur just half the time, i.e. p = 1/2. This line of argument is considerably simpler than Jaynes', which occupies 11 pages. Inverting the problem makes it similar to Buffon's famous needle-tossing problem.

If, instead of tossing the hoop, one point is fixed on the grid line and the hoop is rotated by a random amount (uniformly distributed angles over 2π), the chord thus formed will be longer than S just 1/3 of the time, so that p = 1/3. To a physically minded analyst, this is clearly not what is intended in the formulation of the problem. The area argument leading to the solution p = 1/4 is specious, but it would be appropriate for a different problem: a hoop tossed onto a fixed hoop of the same diameter. The chord drawn between the points of intersection will be longer than S if the centre of the tossed hoop lies inside the fixed hoop. This will occur just a quarter of the time, i.e. p = 1/4, omitting cases when the hoops do not intersect. But this involves a substantial change in the intent of the problem statement.
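The three prescriptions, and the probabilities 1/2, 1/3 and 1/4 they assign, are easy to verify by simulation. The following Monte Carlo sketch is added for illustration (the circle radius and the number of trials are arbitrary); it draws chords by each of the three rules and counts how often the chord exceeds the side of the inscribed equilateral triangle.

```python
import random, math

R, N = 1.0, 200_000
side = math.sqrt(3.0) * R            # side of the inscribed equilateral triangle

def chord_from_distance():           # (A) uniform distance of the chord from the centre
    d = random.uniform(0.0, R)
    return 2.0 * math.sqrt(R * R - d * d)

def chord_from_endpoints():          # (B) two independent, uniform endpoint angles
    t1, t2 = random.uniform(0, 2 * math.pi), random.uniform(0, 2 * math.pi)
    return 2.0 * R * abs(math.sin((t1 - t2) / 2.0))

def chord_from_midpoint():           # (C) chord midpoint uniform over the disc area
    r = R * math.sqrt(random.random())   # radial density proportional to r
    return 2.0 * math.sqrt(R * R - r * r)

for name, draw in (("A", chord_from_distance),
                   ("B", chord_from_endpoints),
                   ("C", chord_from_midpoint)):
    frac = sum(draw() > side for _ in range(N)) / N
    print(f"method {name}: P(chord > side) ~ {frac:.3f}")
```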
The paradox arises from using the term 'random' to mean various things, and may be considered semantic. Still, some choices are more physically motivated than others, and physicists have had good success in predicting accurately a wide variety of phenomena that have a statistical basis using the principle of indifference, as discussed at length and with great wisdom by Jaynes. The various approaches to choosing chords in Bertrand's paradox may appear 'random' but this appearance can be superficial since choices can be constrained in various ways and still be considered random. The valid solution has no (or at least minimal) constraints. The term 'random' is not adequate to specify a process, though it is sometimes taken to be sufficient, any more than 'non-linear' is sufficient to specify a type of differential equation, or 'nonelephant' to define an animal. A more useful way to consider many probability issues is to specify processes that are 'uncorrelated.' This is often what is intended by 'random.' In approach B described above, for example, the lines are correlated because they go through a common point.
Blazar Flaring Patterns (B-FlaP): Classifying Blazar Candidates of Uncertain type in the third Fermi-LAT catalog by Artificial Neural Networks

The Fermi Large Area Telescope (LAT) is currently the most important facility for investigating the GeV $\gamma$-ray sky. With Fermi LAT more than three thousand $\gamma$-ray sources have been discovered so far. 1144 ($\sim40\%$) of the sources are active galaxies of the blazar class, and 573 ($\sim20\%$) are listed as Blazar Candidates of Uncertain type (BCU), or sources without a conclusive classification. We use the Empirical Cumulative Distribution Functions (ECDF) and Artificial Neural Networks (ANN) for a fast method of screening and classification for BCUs based on data collected at $\gamma$-ray energies only, when rigorous multiwavelength analysis is not available. Based on our method, we classify 342 BCUs as BL Lacs and 154 as FSRQs, while 77 objects remain uncertain. Moreover, radio analysis and direct observations with ground-based optical observatories are used as counterparts to the statistical classifications to validate the method. This approach is of interest because of the increasing number of unclassified sources in Fermi catalogs and because blazars, and in particular their subclass of High Synchrotron Peak (HSP) objects, are the main targets of atmospheric Cherenkov telescopes.

INTRODUCTION

Blazars are active galactic nuclei (AGN) with radio-loud behavior and a relativistic jet pointing toward the observer. These sources are divided into two main classes: BL Lacertae objects (BL Lacs) and Flat Spectrum Radio Quasars (FSRQs), which show very different optical spectra even if in other wavebands they are similar. FSRQs have strong, broad emission lines at optical wavelengths, while BL Lacs show at most weak emission lines, sometimes display absorption features, and can also be completely featureless. Compact radio cores, flat radio spectra, high brightness temperatures, superluminal motion, high polarization, and strong and rapid variability are commonly found in both BL Lacs and FSRQs. Blazars emit variable, non-thermal radiation across the whole electromagnetic spectrum, which includes two components forming two broad humps in a νfν representation. The low-energy hump is attributed to synchrotron radiation, and the high-energy one is usually thought to be due to inverse Compton radiation. See Ghisellini (2013) for a recent review of the properties of γ-ray AGN. Blazars can also be classified into different subclasses based on the position of the peak of the synchrotron bump in their spectral energy distribution (SED), namely low frequency peaked (LSP: sources with ν^S_peak < 10^14 Hz), intermediate frequency peaked (ISP: sources with 10^14 Hz < ν^S_peak < 10^15 Hz) and high frequency peaked (HSP: sources with ν^S_peak > 10^15 Hz) (Abdo et al. 2010). This subclassification suggests the possibility that the γ-ray properties of the sources may lead to constraints on the type of objects responsible for the radiation, especially in view of the increasing number of detections obtained by the Fermi Large Area Telescope (LAT) that still have to be properly classified. The Third Fermi-LAT Source Catalog (3FGL) listed 3033 γ-ray sources collected in four years of data. In Table 1 we show the growth of the number of blazar-class sources in Fermi-LAT catalogs and the relative fraction of each blazar source subclass. The percentage of BCUs within the blazar sample increased from 13.8% in 1FGL to 33.4% in 3FGL.
Although the detailed multiwavelength analysis necessary for unambiguous classification has been done, and is continuing, for many of these (Alvarez et al. 2016), a first classification screening of BCUs, as our method proposes, can be very useful for the blazar scientific community. The aim of this work is to find a simple estimator in order to classify BCUs and, when it is possible, to identify high-confidence HSP candidates. The present generation of Imaging Atmospheric Cherenkov Telescopes (IACTs), such as VERITAS, H.E.S.S. and MAGIC, has opened the realm of ground-based γ-ray astronomy in the Very High Energy range (VHE: E > 100 GeV). The Cherenkov Telescope Array (CTA) will explore our Universe in depth in this energy band and lower. For a recent review of present and future Cherenkov telescopes, see De Naurois et al. (2015). The BL Lac HSP sources are the most numerous class of TeV sources. The TeV catalog (Horan et al. 2008) reports 176 TeV sources; 46 of them are HSP BL Lacs and only 5 are FSRQs. Therefore the ability to correctly identify HSP objects will be very important for the Cherenkov scientific community and in the determination of CTA targets, in order to increase the rate of detections, since IACTs have a small field of view. The novelty of the present approach is that our study relies exclusively on variability data collected at γ-ray energies where Fermi-LAT is most sensitive (0.1-100 GeV) and it remains totally independent from other data at different wavelengths. The paper is laid out as follows: in Section 2, we present the γ-ray data and the ECDF light curves considered for our analysis; in Sect. 3, we describe the use of artificial neural networks, and in Sect. 4 we present the results of the ANN analysis. In Sect. 5 we present a summary of the results of our classification of BCUs listed in the 3FGL Fermi-LAT and we highlight the most promising HSP candidates. In Sect. 6 we test our method comparing the predicted classifications with additional data, obtained through optical spectroscopy and radio observations. We summarize our conclusions in Sect. 7.

The Large Area Telescope

The LAT is the primary instrument on the Fermi Gamma-ray Space Telescope, launched by NASA on 2008 June 11, and it is the first imaging GeV γ-ray observatory able to survey the entire sky every day at high sensitivity, orbiting the Earth every 96 minutes. The Fermi LAT is a pair-conversion telescope with a precision converter-tracker and calorimeter. It measures the tracks of the electron and positron that result when an incident γ ray undergoes pair conversion and measures the energy of the subsequent electromagnetic shower that develops in the telescope's calorimeter (Atwood et al. 2009). Data obtained with Fermi-LAT permit rapid notification and facilitate monitoring of variable sources such as the BCUs that we consider in this study. In this paper we used the monthly γ-ray flux values from the LAT 4-year Point Source Catalog (3FGL) and the Fermi Science Support Center (FSSC) for any other data.

B-FlaP: Blazar Flaring Patterns

Variability is one of the defining characteristics of blazars (Paggi et al. 2011). We considered the light curves of the blazar sources evaluated with monthly binning, as reported in the 3FGL catalog, and with these data we designed the basic structure of the B-FlaP method. The original idea was to compare the γ-ray light curve of the source under investigation with a template light curve of a classified blazar class, and then measure the difference in a proper metric.
Typically γ-ray AGN are characterized by fast flaring that could alter significantly the light curve and make the comparison difficult. In addition, different flux levels could hide the actual similarity of light curves. As a first approach of this study we compute the Empirical Cumulative Distribution Function (ECDF) of the light curves (Kolmogorov 1933). We constructed the percentage of time when a source was below a given flux by sorting the data in ascending order of flux, and then compared the ECDFs of BCUs with the ECDFs of blazars whose class is already established (§2). This is our variation of the Empirical Cumulative Distribution Function (ECDF) method. In Fig. 1 we show the ECDF plots for 3FGL blazars and BCUs. In principle, differences due to the flaring patterns of BL Lacs and FSRQs appear in two ways: (1) the flux where the percentage reaches 100 represents the brightest flare seen for the source; and (2) the shape of the cumulative distribution curve reveals information about the flaring pattern, whether the source had one large flare, multiple flares, or few flares. The BL Lacs have fewer large flares than the FSRQs, and the FSRQ curves are more jagged, suggesting multiple flares compared to the smoother BL Lac curves. The difference between the classes is observed when we plot the two blazar classes together. The bottom left of Fig. 1 shows the significant overlap between the types, where it is hard to distinguish individual objects, and there are outliers that extend beyond the range of the plots; but it is possible to recognize at the top left of the diagram a specific area where the overlap between BL Lacs and FSRQs is minimal. This area, at values of the flux less than ∼2.5 × 10^-8 ph cm^-2 s^-1, could lead to a first qualitative recognition of BL Lac objects. In B-FlaP, special attention is needed for upper limits, which arise whenever light curves are constructed with fixed binning, as is the case here. They can be naturally incorporated into the current ECDF method, as the points plotted in the diagrams are the percentage of time that the source is below a given flux value. Nevertheless, upper limits could introduce biases, skewing the cumulative distribution toward higher percentages. Upper limits could be avoided entirely by producing light curves with adaptive binning (Lott et al. 2012), a technique that could be implemented in a possible follow-up study. For this reason, and because the ECDF plots represent only a proof of concept of the whole method, we follow up the initial ECDF analysis with an Artificial Neural Network (ANN) analysis, using an original algorithm developed to classify each individual BCU and to give its likelihood of being a BL Lac or a FSRQ. The reasons for the flaring-pattern differences between BL Lacs and FSRQs are very likely connected with the processes occurring at the base of the jet, where the largest concentration of relativistic particles and energetic seed photons is expected. While in FSRQs accretion onto the central black hole produces a prominent and variable spectrum, characterized by continuum and emission-line photons, usually accompanied by the ejection of relativistic blobs of plasma in the jet, BL Lacs do not show this kind of activity and most of the observed radiation originates within the jet itself.
As a consequence, the production of γ-ray emission through inverse Compton (IC) scattering can change much more dramatically in FSRQs than in BL Lac-type sources, where the contribution of the central engine to the seed radiation field is weaker (Ruan et al. 2012).

High Synchrotron Peak blazars

With reference to the aim of this study, we applied the same ECDF technique to the blazar subclasses. Using the Third Catalog of Active Galactic Nuclei detected by Fermi-LAT (3LAC), we collected information about the classification and SED distribution of the blazars. The third release of the catalog considers only the 1591 AGN detected at |b| > 10°, where b is the Galactic latitude; 289 sources are classified as HSP on the basis of their SED, of which 286 are BL Lac objects and 3 are FSRQs. 160 of the 573 BCUs are HSP suspects. For all the other data in this study we referred to 3FGL. While ISP and LSP blazars show the most variable patterns and can belong to either the BL Lac or FSRQ families, HSP objects are characterized by nearly constant emission. In Fig. 2 we plotted the ECDF for 3LAC HSPs versus FSRQs. As we expected, because HSPs are almost exclusively represented by BL Lac objects, the HSPs pass through the BL Lac clean area at the upper left corner of the plot. Even if ISP and LSP contamination is not negligible (Fig. 3), the result observed in Fig. 2 suggests the potential ability of ECDF B-FlaP to identify a flux range at the 100th percentile (less than ∼2.0 × 10^-8 ph cm^-2 s^-1) where it is possible not only to determine the blazar class but also to tentatively assign the HSP subclass for a BCU source. However, even here, visual inspection of the curves in all the ECDF figures suggests that the shape of the curve does not show major differences between the observed blazar classes. In order to improve the analysis we used the same ANN algorithm developed for BCUs for the HSP classification.

ARTIFICIAL NEURAL NETWORKS

In this section we describe the use of Artificial Neural Networks (ANNs) as a promising method to classify blazars of uncertain type on the basis of the ECDF extracted from their γ-ray light curves. The basic building block of an ANN is the neuron. Information is passed as inputs to the neuron, which processes them and produces an output. The output is typically a simple mathematical function of the inputs. The output of an ANN can be interpreted as a Bayesian a posteriori probability that models the likelihood of class membership on the basis of the input parameters (Gish 1990; Richard et al. 1991). Hereafter we refer to such a probability as L. The power of ANNs comes from assembling many neurons into a network. The network is able to model very complex behavior from input to output. ANNs exist in many different models and architectures. Because of the relatively low complexity of our data, we decided to use a simple neural model known as a Feed Forward MultiLayer Perceptron, and in particular a two-layer feed-forward network (2LP), which is probably the most widely used architecture for practical applications of neural networks. It consists of a layer of input neurons, a layer of "hidden" neurons and a layer of output neurons. In such an arrangement each neuron is referred to as a node. The nodes in a given layer are fully connected to the nodes in the next layer by links. For each input pattern, the network produces an output pattern, compares the actual output with the desired one and computes an error.
The error is then reduced by an appropriate quantity, adjusting the weights associated with each link through a specific learning algorithm. This process continues until the error is minimized. Fig. 4 shows a schematic design of such a network.

Figure 4. Data enter the 2LP through the nodes in the input layer. The information travels from left to right across the links and is processed in the nodes through an activation function. Each node in the output layer returns the likelihood of a source being a specific class.

In γ-ray astronomy, ANNs are often used for applications such as background rejection, though other techniques (e.g. classification trees) are also used for such purposes. In recent years ANNs were also used for classifying Fermi-LAT unassociated sources (Doert et al. 2014). This technique uses identified objects as a training sample, learning to distinguish each source class on the basis of parameters that describe its γ-ray properties. By applying the algorithm to unknown objects, such as the unclassified sources, it is possible to quantify their probability of belonging to a specific source class. There are different packages available to perform an ANN analysis (e.g. the MATLAB Neural Network Toolbox or PyBrain), but we decided to develop our own 2LP algorithms to address our specific problem. We wrote our algorithms in the Python programming language. This choice gives us a number of advantages. First of all, our ANN does not work as a "black box", which is a typical problem of any available ANN package for which the learning process is always unknown. Since we implemented our algorithms, Parkinson et al. (2016) have explored the application of machine learning algorithms to source classification, based on some γ-ray observables, showing that there is much to be gained in developing an automated system of sorting (and ranking) sources according to their probability of being a specific source class. The present work differs from these in applying the technique to different types of blazars rather than trying to separate AGN in general from other source classes. We tuned a number of ANN parameters to improve the performance of the algorithm. We renormalized all input parameters between 0 and 1 to minimize the influence of their different ranges. We used a hyperbolic tangent function as the activation function connected to each hidden and output node. The outputs were renormalized between 0 and 1 to handle them as probabilities of class membership. We randomly initialized the weights in the range between -1 and 1, not including any bias. The optimal number of hidden nodes was chosen through the pruning method (Reed 1993). We used the standard back-propagation algorithm as the learning method, setting the learning rate parameter to 0.2. We did not add a momentum factor to the learning algorithm because it does not improve the performance of the network. We used the learning algorithm in its on-line version, in which the weights associated with each link are updated after each example is processed by the network.

Source sample and predictor parameters

Since the aim of this work is to quantify the likelihood of each 3FGL BCU being more similar to a BL Lac or a FSRQ, we chose all 660 BL Lacs and 484 FSRQs in the 3FGL catalog as a source sample. This is a two-class approach, where the output L_BLL expresses the likelihood of a BCU source belonging to the BL Lac source class and L_FSRQ = 1 − L_BLL to the FSRQ one. Because our interest is only in blazars, we do not expect any contribution to the BCU sample from other extragalactic source classes, and thus we did not estimate their contamination in our analysis. We encoded the output of the associated blazars so that L_BLL is 1 if the known object is a BL Lac, and L_BLL is 0 if it is a FSRQ.
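A compact sketch of the two-layer perceptron and on-line learning rule described in the previous paragraphs is given below. It is not the authors' code: the shapes (10 inputs, 6 hidden nodes, 2 outputs), the tanh activations, the [-1, 1] weight initialization without bias, the rescaling of the outputs to [0, 1] and the learning rate of 0.2 follow the description in the text, while the single training example is synthetic.

```python
import numpy as np

rng = np.random.default_rng(42)
W1 = rng.uniform(-1.0, 1.0, size=(6, 10))   # input -> hidden weights
W2 = rng.uniform(-1.0, 1.0, size=(2, 6))    # hidden -> output weights
eta = 0.2                                    # learning rate

def forward(x):
    h = np.tanh(W1 @ x)                      # hidden activations
    y = np.tanh(W2 @ h)                      # raw outputs in (-1, 1)
    return h, 0.5 * (y + 1.0)                # rescale outputs to (0, 1)

def train_one(x, target):
    """One on-line back-propagation step (squared-error loss) for one example."""
    global W1, W2
    h = np.tanh(W1 @ x)
    y_raw = np.tanh(W2 @ h)
    y = 0.5 * (y_raw + 1.0)
    delta_out = (y - target) * 0.5 * (1.0 - y_raw**2)   # gradient at the output nodes
    delta_hid = (W2.T @ delta_out) * (1.0 - h**2)       # back-propagated to hidden nodes
    W2 -= eta * np.outer(delta_out, h)
    W1 -= eta * np.outer(delta_hid, x)

if __name__ == "__main__":
    x = rng.uniform(0.0, 1.0, 10)            # ECDF percentile fluxes rescaled to [0, 1]
    target = np.array([1.0, 0.0])            # encoded BL Lac: L_BLL = 1, L_FSRQ = 0
    for _ in range(50):
        train_one(x, target)
    print(forward(x)[1])
```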
Following the standard approach, we randomly split the 3FGL blazar sample into 3 subsamples: the training, the validation and the testing one. The training sample is used to optimize the network and classify correctly the encoded sources. The validation sample is used to avoid over-fitting during the training. It is not used for optimizing the network, but during the training session it monitors the generalization error. The learning algorithm is stopped at the lowest validation error. The testing sample is independent of both the training and validation ones and was used to monitor the accuracy of the network. Once all optimizations were made, the network is applied to the testing sample, and the related error provides an unbiased estimate of the generalization error. We chose a training sample as large as possible (∼70% of the full sample) while keeping the other independent samples homogeneous (∼15% each). Since we used an on-line version of the learning algorithm, we decided to shuffle the training sample after the full training sample was used once to optimize the network. This choice allowed us to maintain a good generalization of our network. Because we want to distinguish BL Lacs from FSRQs only on the basis of their γ-ray ECDF, we selected flux values extracted from such a distribution as predictor parameters. We included in our ANN algorithm the γ-ray fluxes corresponding to the 10th, 20th, 30th, 40th, 50th, 60th, 70th, 80th, 90th and 100th percentiles. Our choice to use only 10 input parameters originates from a compromise between a good representation of each ECDF and a limited number of input parameters, in order to avoid problems related to upper limits associated with some time bins. We also tested the performance of the network adding the Variability Index defined in the 3FGL catalog as an additional parameter. The Variability Index is a statistical parameter that tests if a γ-ray source is variable above a certain confidence level; in particular, if its value is greater than 72.5 the object is statistically variable at the 99% confidence level. The information given by the Variability Index is more limited than the ECDF, which also provides a characterization of the variability pattern and is probably related to spectral variability during the flare state. Including the Variability Index in the algorithm did not significantly improve the performance of the network, showing that this parameter does not add independent information in distinguishing the two blazar subclasses. Defining the importance of each input parameter as the product of the mean-square of the input variable with the sum of the squared weights of the connections between the variable's node in the input layer and the hidden layer, the Variability Index was observed to be the least important parameter. Fig. 5 confirms that the distribution of the Variability Index is very similar for 3FGL BL Lacs and FSRQs. Although the mean variability is higher for FSRQs, the distributions overlap strongly, making this parameter hard to use as a discriminator.
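For concreteness, the construction of the ten predictor parameters described above can be sketched as follows. This is an illustration with a synthetic light curve, not the actual 3FGL data, and the normalization to [0, 1] shown here is one possible choice consistent with the rescaling of input parameters mentioned earlier.

```python
import numpy as np

def ecdf(monthly_flux):
    """ECDF of a light curve: sorted fluxes paired with the percentage of months
    during which the source was at or below each flux value."""
    flux = np.sort(np.asarray(monthly_flux, dtype=float))
    percent = 100.0 * np.arange(1, flux.size + 1) / flux.size
    return flux, percent

def ecdf_features(monthly_flux):
    """Fluxes at the 10th-100th percentiles, rescaled by the brightest bin."""
    flux = np.asarray(monthly_flux, dtype=float)
    levels = np.percentile(flux, np.arange(10, 101, 10))
    return levels / levels.max()          # simple [0, 1] rescaling (one possible choice)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    light_curve = rng.lognormal(mean=-18.6, sigma=0.7, size=48)   # 48 synthetic monthly bins
    print(ecdf_features(light_curve))
```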
We excluded from our analysis both γ-ray and multiwavelength spectral parameters, because the aim of this work is to develop a classification algorithm that can be efficiently applied to γ-ray sources when rigorous γ-ray spectra or multiwavelength information is missing. Since the best way yet found to single out BL Lacs from FSRQs is to analyse their spectral energy distribution, we used multiwavelength spectral information to validate our algorithm, comparing the distribution of BL Lac and FSRQ candidates with that of known objects, as discussed in Sections 4, 6.1 and 6.2. As a result of these choices, our feed-forward 2LP is built up of 10 input nodes, 6 hidden nodes and 2 output nodes.

Figure 6. Distribution of the ANN likelihood to be a BL Lac candidate for 3FGL BL Lacs (blue) and FSRQs (red) in the testing sample. The distribution of the likelihood to be a FSRQ candidate (L_FSRQ) is 1 − L_BLL.

Optimization of the algorithm and classification thresholds

At the end of the learning session, the ability of the algorithm to distinguish BL Lacs from FSRQs is optimized, and it produces for each blazar a likelihood of its membership class. Fig. 6 shows the likelihood distribution for the testing sample. The distribution clearly shows two distinct and opposite peaks for BL Lacs (blue) and FSRQs (red), the former at L_BLL ∼ 1 and the latter at L_BLL ∼ 0. Since the testing sample was not used to train the network, the distribution shows the excellent performance of our algorithm in classifying new BL Lacs and FSRQs. We defined two classification thresholds to label BCUs as BL Lac or FSRQ candidates. Our thresholds are based on requiring a positive association rate (precision), defined as the fraction of true positives among the objects classified as positive, of ∼90%. The classification threshold L_BLL > 0.566 identifies BL Lac candidates, while the threshold L_FSRQ > 0.770 identifies FSRQ candidates. Another parameter useful to characterize the performance of our classification algorithm is the sensitivity, defined as the fraction of objects of a specific class correctly classified as such. According to this definition, the threshold for BL Lac classification is characterized by a sensitivity of ∼84%, while we get a sensitivity of 69% for FSRQs. The precision and sensitivity of our classification algorithm help us to predict the completeness and the fraction of spurious sources in the list of BL Lac and FSRQ candidates. Thresholds defined on the basis of high precision are useful to select the best targets to observe with ground telescopes, optical or Cherenkov, to unveil their nature, while high sensitivity gives us an idea of how many BL Lacs and FSRQs remain to be identified in the 3FGL BCU sample. In the end, according to our classification thresholds, the expected false negative rate (misclassification) is ∼5% for BL Lacs and ∼12% for FSRQs. Sensitivity, misclassification and precision reveal that the FSRQ γ-ray ECDF is broader and more contaminated than the BL Lac one, as we expected from Fig. 1. The combination of a high precision rate and a low misclassification rate indicates a very high performance of our optimized network.

Selecting the most promising HSP candidates

Although the ECDFs of HSPs are not clearly separated from those of ISPs and LSPs, we developed a new ANN algorithm to select the best HSP candidates among BCUs, in order to optimize observations by VHE facilities.
Following the procedure described in the previous sections, we chose as a source sample all 289 HSPs and the 824 non-HSPs identified by their spectral energy distribution. We used as predictor variables the same ECDF parameters used to classify BL Lacs and FSRQs. The new feed-forward 2LP is built up of 10 input nodes, 5 hidden nodes and 2 output nodes. Fig. 7 shows the optimized network applied to a testing sample that represents 15% of the full sample. The distribution reveals a peak at low L_HSP for non-HSPs and a nearly flat distribution for HSP sources, showing that the optimized network was not able to clearly classify HSPs on the basis of the ECDF, as expected. Defining a classification threshold of L_HSP > 0.891 so that the precision rate is ∼90%, we are able to discover the best HSP candidates. According to this definition, the sensitivity of our algorithm is just 4.5%, while the fraction of non-HSPs erroneously classified as HSP candidates is very low (< 1%). This result shows that only a very small fraction of HSPs can be separated from non-HSPs by this method. We name all the BCUs in this region Very High Confidence (VHC) HSP candidates. All the blazars in this area are BL Lacs; the only FSRQ characterized by a high L_HSP value (∼0.85) is 3FGL J1145.8+4425. This means that all the VHC HSP candidates will also be VHC BL Lac candidates. In addition, we decided to define a less conservative classification threshold (L_HSP > 0.8) in order to increase the number of targets to observe with VHE telescopes, at the expense of a smaller precision (∼75%). In this way the sensitivity increases to ∼15% and the fraction of misclassified non-HSPs remains very low (∼2%). We label BCUs characterized by an L_HSP greater than this classification threshold as High Confidence (HC) HSP candidates.

ANN RESULTS AND VALIDATION

In this section we first discuss the results of our optimized ANN algorithm at classifying BL Lac and FSRQ candidates among 3FGL BCU sources. Then we validate our statistical method by comparing the PowerLaw Index distribution of known BL Lacs and FSRQs with that of our best candidates. We then analyze the performance of our algorithm based on the ECDF with respect to the other γ-ray parameters usually used to classify blazars, such as the PowerLaw Index and Variability Index. In the end we discuss the results on the identification of the most promising HSP candidates.

Figure 7. Distribution of the ANN likelihood to be a HSP candidate for HSP (blue) and non-HSP (red) in the testing sample.

Applying our optimized algorithm to 573 3FGL BCUs, we find that 342 are classified as BL Lac candidates (L_BLL > 0.566), 154 as FSRQ candidates (L_FSRQ > 0.770) and 77 remain unclassified. Hereafter we will refer to blazars classified in the 3FGL catalog as BLL3FGL and FSRQ3FGL, to BCUs classified by the ANN as BLLANN and FSRQANN, and to BCUs that remain uncertain as BCUANN. The likelihood distribution of the BCU membership class is shown in Fig. 8, and such a distribution reflects very well those of BLL3FGL and FSRQ3FGL in the testing sample (see Fig. 6), as we expect for a well-built classification algorithm. Taking into account the precision and sensitivity rates, our optimized algorithm predicts that there are about 365 BL Lacs and about 200 FSRQs still to be identified. This prediction is rather interesting, because at present the fraction of BLL3FGL is ∼1.4 times that of FSRQ3FGL, while a larger fraction (∼1.8) of BL Lacs to be identified is expected from our analysis.
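The precision- and sensitivity-based thresholds quoted above can be derived from a labelled testing sample in a straightforward way. The sketch below is illustrative only: the likelihoods and labels are synthetic placeholders, not the 3FGL testing sample, and the routine simply selects the smallest threshold whose precision reaches a 90% target.

```python
import numpy as np

def precision_sensitivity(likelihood, is_bll, threshold):
    """Precision and sensitivity of the cut 'likelihood > threshold' for BL Lacs."""
    predicted = likelihood > threshold
    true_pos = np.sum(predicted & is_bll)
    precision = true_pos / max(np.sum(predicted), 1)
    sensitivity = true_pos / max(np.sum(is_bll), 1)
    return precision, sensitivity

def lowest_threshold_for_precision(likelihood, is_bll, target=0.90):
    """Smallest threshold reaching the target precision (i.e. maximising sensitivity)."""
    for thr in np.sort(np.unique(likelihood)):
        prec, _ = precision_sensitivity(likelihood, is_bll, thr)
        if prec >= target:
            return thr
    return None

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    is_bll = rng.random(500) < 0.58                                  # synthetic labels
    likelihood = np.clip(np.where(is_bll, 0.8, 0.2)
                         + rng.normal(0.0, 0.2, 500), 0.0, 1.0)      # synthetic ANN outputs
    thr = lowest_threshold_for_precision(likelihood, is_bll)
    print(thr, precision_sensitivity(likelihood, is_bll, thr))
```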
After the launch of the Fermi observatory it was discovered that BL Lacs and FSRQs are characterized by different γ-ray spectral properties. The former usually show harder spectra than the latter. Fitting 3FGL blazars assuming a power-law spectral model, we observe that the best-fit photon spectral index (named PowerLaw Index in 3FGL) distribution is rather dissimilar for the two subclasses, as shown in Fig. 9. The PowerLaw Index distribution mean values and standard deviations are 2.02 ± 0.25 and 2.45 ± 0.20 for BL Lacs and FSRQs respectively, making this observable one of the most powerful γ-ray parameters to distinguish the two blazar subclasses. Since we did not include this parameter in our algorithm, we compared the PowerLaw Index distribution for BLLANN and FSRQANN with what we know from already classified objects to test the performance of our algorithm and to validate it. Fig. 4 shows the PowerLaw Index distributions for BL Lacs in the left panel and for the FSRQs in the right one. Such distributions are in good agreement, confirming the accuracy of our classification algorithm. The PowerLaw Index distribution means and standard deviations are 2.02 ± 0.27 and 2.48 ± 0.18 for BLLANN and FSRQANN respectively, as expected. Moreover, almost all sources classified through the B-FlaP ANN method are within the PowerLaw Index distribution range associated with their blazar subclass.

Figure 8. Distribution of the ANN likelihood of 573 3FGL BCUs to be BL Lac candidates. Vertical blue and red lines indicate the classification thresholds of our ANN algorithm to label a source as BL Lac or FSRQ respectively, as described in the text.

An effective way to evaluate the power of our method is to compare ANN predictions for distinguishing blazar subclasses based on B-FlaP information with those found by a simple analysis of γ-ray spectral or timing properties. Analyzing the PowerLaw Index distribution shown in Fig. 9, we can define two classification thresholds to separate BL Lacs from FSRQs with a degree of purity equal to what we used for the ANN thresholds, 90%. According to this hypothesis, all blazars characterized by a PowerLaw Index < 2.25 or > 2.64 will be classified as BL Lac and FSRQ candidates respectively, with a precision rate of 90%. All blazars with an intermediate value will remain unclassified owing to high contamination. Fig. 11 shows the PowerLaw Index distribution against the ANN likelihood to be a BL Lac for all 3FGL BCUs. Vertical and horizontal dashed lines indicate the classification thresholds defined for the two distributions to single out BL Lacs from FSRQs. Comparing the two predictions, we observe that they agree for ∼63% of BCUs (blocks along the diagonal from top left to bottom right), while they disagree only for ∼3.5% (top right and bottom left blocks). As a key result we observe that the ANN method based on B-FlaP is able to provide a classification for ∼30% of BCUs that remain uncertain on the basis of their spectra (top and bottom central blocks), while the opposite occurs only for ∼3.5% of BCUs. This comparison highlights the power of our analysis with respect to the standard one based on spectral information. To be thorough, we followed the same approach to compare ANN predictions based on B-FlaP with those obtained from the Variability Index. As discussed in the previous Section, we expect this parameter is not efficient at distinguishing blazar subclasses, which is why we did not include it in our analysis.
We defined two classification thresholds as before from the Variability Index distribution (see Fig. 5), so that blazars with a value smaller than 31 are classified as BL Lac candidates while those with a value larger than 5710 are FSRQ candidates, in agreement with the 90% precision criterion. These areas are very small because the overlap in the Variability Index distribution is very large. As shown in Fig. 12, the two methods agree only for ∼17% of BCUs and disagree for ∼0.2%. No BCU classified by the Variability Index remains uncertain with the ANN, while for a very large fraction, ∼83%, the ANN is able to provide a classification where the Variability Index is not. This analysis clearly shows that the Variability Index is not effective at classifying blazar subclasses, as we expect, and it must be replaced by the more robust B-FlaP for this purpose. Finally, applying our algorithm optimized to select the most promising HSPs among the 573 3FGL BCUs, we can single out 15 VHC HSP candidates (L_HSP > 0.891) and 38 HC ones (L_HSP > 0.8), for a total of 53 very interesting targets to be observed with Very High Energy telescopes. Fig. 13 plots the likelihood distribution of the BCUs. Such a distribution reflects very well that of the entire testing sample (see Fig. 7), showing a nearly flat distribution at high L_HSP values related to a large overlap between HSPs and non-HSPs in the B-FlaP parameter space. We compared our predictions with those found by the 3LAC catalog on the basis of the study of broadband Spectral Energy Distributions (SED) collected from all data available in the literature. The SED classification is based on the estimation of the synchrotron peak frequency ν^S_peak value extracted from a 3rd-degree polynomial fit of the low-energy hump of the SED. Out of 15 VHC HSPs, 11 (∼73%) are classified as HSPs on the basis of their broadband SED and 4 (∼28%) remain unclassified. Out of 38 HC HSPs, 22 (∼58%) are classified as HSPs, 8 (∼21%) are classified as non-HSPs and 8 (∼21%) remain unclassified by their broadband SED. To conclude, the classifications agree for ∼63% of the most promising HSPs selected by the ANN, validating the efficiency of our algorithm; they disagree for ∼15%, in agreement with the expected contamination rate; and for the remaining ∼22% the ANN provides a classification as most promising HSPs while the SED is not rigorous enough or not available.

B-FLAP CLASSIFICATION LIST

Two of the main goals of our examination are to classify 3FGL BCUs as BL Lac or FSRQ candidates and to identify the most promising BCUs to target in VHE observations. We used an innovative method to extract useful information. We investigated for the first time the distribution of blazars in the ECDF of γ-ray flux parameter space, and we applied an advanced machine learning algorithm, the ANN, to learn to distinguish BL Lacs from FSRQs and to recognize the most likely HSP candidates. The power of our approach was tested in the previous Section, and we present a summary of our results in Table 2. The full table of individual results, available online, contains the classification of BCUs listed in the 3FGL Fermi-LAT as the key parameter. We provide for each 3FGL BCU the ANN likelihood (L) to be a BL Lac or a FSRQ, and the predicted classification according to the defined classification thresholds. We label the most promising HSP candidates, splitting these objects into High Confidence HSPs and Very High Confidence HSPs in agreement with their likelihood to be an HSP-like source.
Table 3 shows a portion of these results, the full table being available electronically from the journal.

Optical data

Ultimately, the classification of a blazar depends on spectroscopy, especially optical spectroscopy, to identify the redshift and the presence or absence of lines. In order to assess the reliability of the B-FlaP ANN method in the identification of the various blazar classes, we carried out optical spectroscopic analysis of a sample of targets listed as BCUs in 3FGL, for which we had a classification likelihood. Spectral data were obtained both by combining the public products of the 12th data release of the Sloan Digital Sky Survey (SDSS DR12, Alam et al. 2015) and of the 2nd data release of the 6dF Galaxy Redshift Survey (6dFGRS DR2, Jones et al. 2004, 2009), as well as by direct observations performed with the 1.22m and the 1.82m telescopes of the Asiago Astrophysical Observatory. The selection of targets for spectroscopic analysis is affected by the possibility of associating the low-energy counterpart within the positional uncertainty of the γ-ray source. Because of the verified correlation of radio flux and γ-ray flux (Ghirlanda et al. 2010; Ackermann et al. 2011a), we chose the targets for spectroscopic observations by looking for coincident emission at these frequencies. The typical positional uncertainties of a few arc seconds achieved by radio and X-ray instruments can constrain the source position on the sky better than the γ-ray detection and, therefore, greatly reduce the number of potential counterparts. When the candidate counterpart turned out to be covered by a spectroscopic survey, we analyzed the corresponding spectrum. If, on the contrary, it was not covered by a public survey, but it was still bright enough to be observed with the Asiago instruments (typically operating below the visual magnitude limit of V ≤ 18 in spectroscopy), we carried out specific observations. The observational procedure involved exposures of each target and a standard star, immediately followed by comparison lamps. The spectroscopic data reduction involved detector bias and flat-field correction, wavelength calibration, flux calibration, and cosmic-ray and sky-emission subtraction. All the tasks were performed through standard IRAF tools, customized into a proper reduction pipeline for the analysis of long-slit spectra obtained with the specific instrumental configuration of the telescopes. At least one standard star spectrum per night was used for flux calibration, while the extraction of mono-dimensional spectra was performed by tracking the centroid of the target along the dispersion direction and choosing the aperture on the basis of the seeing conditions. The sky background was estimated in windows lying close to the target, in order to minimize the effects of non-uniform sky emission along the spatial direction, while cosmic rays were identified and masked out through the combination of multiple exposures of the same target. The targets for which we obtained spectral data are listed in Table 4.
The sample is described with reference to the 3LAC terminology, which divides BCUs into three sub-types:
• BCU I: has a published optical spectrum, but one not sensitive enough for a classification as an FSRQ or a BL Lac;
• BCU II: lacks an optical spectrum, but a reliable evaluation of the SED synchrotron-peak position is possible;
• BCU III: lacks both an optical spectrum and an estimated synchrotron-peak position, but shows blazar-like broadband emission and a flat radio spectrum.

Table 3. Classification List of 3FGL BCUs (sample). The table is published in its entirety in the electronic edition of the article. The columns are: 3FGL Name, Galactic Latitude and Longitude (b and l), the ANN likelihood to be classified as a BL Lac (L_BLL) and a FSRQ (L_FSRQ), the predicted classification, and the most promising HSP candidates labeled as Very High C. or High C., where C. stands for Confidence.

With the adopted thresholds of L_BLL ≥ 0.566 to predict a BL Lac classification and L_BLL ≤ 0.230 to give a FSRQ classification, these data are fully consistent with the expected 90% precision of the method, because only 3FGL J0904.3+4240 and 3FGL J1031.0+7440 turn out to be misclassified (exactly 2 sources out of 20). We note, however, that the choice of more severe likelihood thresholds could easily give even more accurate results, at the obvious cost of classifying a smaller fraction of the BCU population.

Table 4. The sample of objects selected from the 3FGL Source Catalogue for optical observation. The table columns report, respectively, the 3FGL source name, the associated counterpart, the coordinates (right ascension and declination) of the γ-ray signal centroid, the 3LAC classification of the counterpart and the source of optical spectroscopic data.

Figure 12. ANN likelihood against Variability Index distributions, as described in Fig. 11.

Figure 13. Distribution of the ANN likelihood of 573 3FGL BCUs to be HSP candidates. Vertical blue and steel blue lines indicate the classification thresholds of our ANN algorithm to identify a source as Very High Confidence or High Confidence HSP respectively, as described in the text.

Radio data

Besides the different γ-ray properties and optical spectra, BL Lacs and FSRQs are also dissimilar in their radio properties. BL Lacs are generally less luminous than FSRQs, so a classification based on radio luminosity could be a useful diagnostic for BCUs. However, radio luminosity is a quantity that can only be calculated if a redshift is known, and very often, nearly by definition, BCUs do not have an available optical spectrum suitable for the determination of z (this is actually the case for ∼91% of our BCUs). In any case, as we are going to show (see also Ackermann et al. 2011b), the separation between BL Lacs and FSRQs remains rather clear also according to the flux density parameter. For this reason, we study here the radio flux density distribution of the 3FGL BL Lacs, FSRQs, and BCUs, in order to show that (1) the classification proposed by our B-FlaP ANN method is in agreement with the typical radio properties of known BL Lacs and FSRQs (i.e. the radio flux density distribution of the BCUs classified by us matches that of the already classified BL Lacs and FSRQs) and (2) our method is more powerful than a simple analysis of the radio properties (i.e. there are many BCUs that can be classified as BL Lacs or FSRQs based on the ANN method, but would remain uncertain if we only looked at their radio flux density).
Since blazars are, nearly by definition, radio-loud sources, radio flux densities for all of them can be readily obtained from large sky surveys. In particular, the 3LAC reports the radio flux density at 1.4 GHz from the NRAO VLA Sky Survey (NVSS, Condon et al. 1998) or at 0.8 GHz from the Sydney University Molonglo Sky Survey (SUMSS, Bock et al. 1999) for blazars located at Dec. > −40 • or < −40 • , respectively. In very few cases (only 20 in the entire clean 3LAC), radio flux densities are obtained at 20 GHz from the Australia Telescope Compact Array. In any case, blazars are flat-spectrum sources, and the error associated to assuming that α = 0 (i.e. treating all data as if they were taken at the same frequency) is not expected to be large. Hereafter, we indicate with Sr the radio flux density, regardless of the source catalog. In Fig. 15, we show the distribution of Sr over the entire range of BCU flux densities, dividing between BL Lacs (blue histogram) and FSRQs (red histogram). The overall distribution is clearly bimodal, with BL Lacs peaking at lower flux density than FSRQs. Based on these distributions, we define two clean areas where the density of sources of one class is predominant with respect to the other and where it is possible to separate BL Lacs and FSRQs with a 90% degree of purity. These areas are defined by the thresholds S < 140 mJy (90% probability of being a BL Lac) and S > 2300 mJy (90% probability of being a FSRQ). We further note that there is only one FSRQs with S < 35 mJy (while there are 170 BL Lacs in the same interval), corresponding to a superclean area with 99.5% probability of being a BL Lac. On the other hand, the overlap in the high flux density region is much larger and the radio flux density is not as reliable a predictor when it comes to identifying FSRQs. In Fig. 16, we compare the Sr distribution for the sources classified through the B-FlaP ANN method (BLLANN and FSRQANN, shown by shaded histograms) with that of the sources already classified in the 3FGL (BLL3FGL and FSRQ3FGL, shown by the empty histograms). In the left panel, we show the BL Lacs, in the right panel the FSRQs. It is readily seen that the radio flux density distributions are in good agreement, which confirms the validity of our classification. In general, the B-FlaP ANN classified sources tend to lie on the fainter end of the distribution; that is not a surprise, since the brightest sources are more likely to have been selected for optical spectroscopy in past projects and therefore were not part of the starting BCU list. In Figs. 17, we plot the ANN likelihood of a BCU being a BL Lac against Sr, divided in blocks according to the classification as a BL Lac or a FSRQ based on the ANN method and on the radio flux density. The blocks along the diagonal are those where the two methods agree, and they contain over 50% of the total population of BCUs (295/573). Then, there is a large fraction (190/573, i.e. ∼ 33%) of BCUs for which the ANN method provides a classification, while that based on Sr remains uncertain; these are the top and bottom blocks of the central column. This highlights the power of the ANN method in comparison to the simple flux density: only ∼ 6% of the BCUs can be classified through Sr while they would remain uncertain for ANN. Finally, there is a ∼ 8% of sources for which the two methods disagree (top right and bottom left squares). These are probably quite peculiar objects or spurious associations that deserve a dedicated analysis beyond the scope of this paper. 
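The 90%-purity 'clean areas' used here for the radio flux density (and, analogously, for the PowerLaw Index and Variability Index in the previous Section) can be obtained by scanning candidate cuts over the two empirical distributions. The sketch below is illustrative only: the flux-density samples are synthetic, not the 3LAC values.

```python
import numpy as np

def bll_purity_threshold(s_bll, s_fsrq, purity=0.90):
    """Largest flux-density cut below which at least `purity` of the sources are BL Lacs."""
    candidates = np.sort(np.concatenate([s_bll, s_fsrq]))
    best = None
    for cut in candidates:
        n_bll = np.sum(s_bll <= cut)
        n_all = n_bll + np.sum(s_fsrq <= cut)
        if n_all > 0 and n_bll / n_all >= purity:
            best = cut                     # keep the largest cut that is still 90% pure
    return best

if __name__ == "__main__":
    rng = np.random.default_rng(2)
    s_bll = rng.lognormal(mean=np.log(120.0), sigma=1.2, size=600)    # mJy, synthetic
    s_fsrq = rng.lognormal(mean=np.log(900.0), sigma=1.0, size=450)   # mJy, synthetic
    print(f"BL Lac clean area: S < {bll_purity_threshold(s_bll, s_fsrq):.0f} mJy")
```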
We further note that the analysis based on radio flux density could be subject to outliers; in particular, sources in the bottom left corner could be dim FSRQs that are located at very large redshift.

The comparisons presented above show that B-FlaP provides classifications for a larger fraction of BCUs than what is obtainable by the other parameters. To further assess the reliability of the method, we performed direct optical observations for a sample of BCUs with Galactic latitude |b| > 10° and maximum γ-ray flux less than 6 × 10^-8 ph cm^-2 s^-1. In those cases where we were able to perform spectroscopic observations, we found that the optical spectra were fully consistent with the expectations based on the ANN results. The results of benchmarking between the radio data and B-FlaP also showed a consistency of assessment between the two approaches. We conclude that, although B-FlaP cannot replace confirmed and rigorous spectroscopic techniques for blazar classification, it may be configured as an additional powerful approach for the preliminary and reliable identification of BCUs, and in particular of the HSP blazar subclass, when detailed observational or multiwavelength data are not yet available.

ACKNOWLEDGMENTS

Support for science analysis during the operations phase is gratefully acknowledged from the Fermi-LAT collaboration for making the 3FGL results available in such a useful form, the Institute of Space Astrophysics and Cosmic Physics of Milano, Italy (IASF INAF), and the Radioastronomy Institute INAF in Bologna, Italy. Part of this work is based on observations collected at the Copernico and/or Schmidt telescopes (Asiago, Italy) of the INAF - Osservatorio Astronomico di Padova. DS acknowledges support through EXTraS, funded from the European Commission Seventh Framework Programme (FP7/2007-2013) under grant agreement n. 607452.
Two-year trends and predictors of e-cigarette use in 27 European Union member states Objective This study assessed changes in levels of ever use, perceptions of harm from e-cigarettes and sociodemographic correlates of use among European Union (EU) adults during 2012–2014, as well as determinants of current use in 2014. Methods We analysed data from the 2012 (n=26 751) and 2014 (n=26 792) waves of the adult Special Eurobarometer for Tobacco survey. Point prevalence of current and ever use was calculated and logistic regression assessed correlates of current use and changes in ever use, and perception of harm. Correlates examined included age, gender, tobacco smoking, education, area of residence, difficulties in paying bills and reasons for trying an e-cigarette. Results The prevalence of ever use of e-cigarettes increased from 7.2% in 2012 to 11.6% in 2014 (adjusted OR (aOR)=1.91). EU-wide coefficient of variation in ever e-cigarette use was 42.1% in 2012 and 33.4% in 2014. The perception that e-cigarettes are harmful increased from 27.1% in 2012 to 51.6% in 2014 (aOR=2.99), but there were major differences in prevalence and trends between member states. Among those who reported that they had ever tried an e-cigarette in the 2014 survey, 15.3% defined themselves as current users. Those who tried an e-cigarette to quit smoking were more likely to be current users (aOR=2.82). Conclusions Ever use of e-cigarettes increased during 2012–2014. People who started using e-cigarettes to quit smoking tobacco were more likely to be current users, but the trends vary by country. These findings underscore the need for more research into factors influencing e-cigarette use and its potential benefits and harms. INTRODUCTION Uncertainty surrounds the potential population health impacts of electronic cigarettes (e-cigarettes), and the issue has been dubbed one of the great debates in public health of our time. 1 Research based on internet searches of available brands has indicated substantial growth in the availability of ecigarettes in recent years, 2 which has further increased concern. Much of the controversy has centred on the degree to which the devices are a 'gateway' to smoking cigarettes, their role in renormalising cigarette smoking and their effectiveness in promoting quit attempts. 3 Concern has also focused on variations in potential toxicity of differing brands, linked to a lack of regulation of their manufacture. 4 The lack of certainty over these issues has been coupled with calls for regulation at a variety of levels until these issues can be settled. 5 The European Union (EU) Tobacco Products Directive was passed in 2014 and will be implemented in 2016. Article 20 of the Directive has brought forward specific regulations with regard to the reporting of ingredients, emissions, quality control in production and potential design parameters that could mitigate risk. 6 Nonetheless, the debate on other policy decisions, such as bans on advertising and use of e-cigarettes in public places or their use as cessation aids, is expected to intensify in coming years. Hence, up-to-date data on the prevalence of e-cigarette use are urgently needed in order to inform policy at a national and at a European level. 
7 Similarly, as more information on e-cigarettes becomes available, people's perceptions of their safety may change, which may impact use, as well as attitudes towards regulatory measures, for example, while rates of experimentation with ecigarettes have been found to be high in a number of settings, 8 the evidence is less clear on the proportion of experimenters who go on to become regular users. 9 Reasons for use of these products and how these may affect transition to regular use are additional areas of uncertainty. It is also worth investigating whether e-cigarettes are becoming more popular in younger age groups or among non-smokers in particular, as such a finding could potentially support calls for more strict regulation. Previous analyses of EU-wide data in 2012 have assessed the prevalence of e-cigarette use, the relationship between tobacco and e-cigarette use, 10 as well as sociodemographic variation in e-cigarette use. 11 However, the landscape in regard to e-cigarettes is changing constantly. Therefore, the aim of the current study was to assess changes in e-cigarette ever use and in perceptions of its harmfulness, between 2012 and 2014, within 27 EU member states, as well as to explore associations of regular use with sociodemographic factors and reasons for use. Data source We conducted a secondary analysis of data collected in two Eurobarometer surveys, wave 77.1 (February-March 2012) and wave 82.4 (November-December 2014). 12 13 Eurobarometer surveys are funded by the European Commission. A similar multistage probability sampling design was followed in each EU member state in both waves. Primary sampling units (PSU) were selected from each regional unit of each country, proportional to population size. A sample of starting addresses was randomly selected in each PSU, and households were systematically selected following a standard random route starting from these initial addresses. Following the collection of the data, poststratification and population size weighting were applied in each country/ region using Eurostat data on gender, age and area of residence, resulting in nationally representative samples in terms of age, gender and area of residence. A total of n=26 751 individuals aged ≥15 years from 27 EU member states, and n=27 801, aged ≥15 years, from 28 EU member states (including Croatia), were interviewed in 2012 and 2014, respectively. However, since Croatia was not included in the 2012 wave, it was excluded from the analysis; therefore the total sample size in 2014 was n=26 792 (see online supplementary table S1). Interviews were conducted in people's homes and in the language of the respective country. E-cigarette use In 2012, the use of e-cigarettes was assessed within the Eurobarometer with the question: 'Have you ever tried any of the following products? Electronic cigarettes…'; and responses included: 'Yes, you use or used it regularly'; 'Yes, you use or used it occasionally'; 'Yes, you tried it once or twice'; 'No'; and 'Don't know'. In 2014, the question was modified as follows: 'Regarding the use of electronic cigarettes or any similar electronic devices (e-shisha, e-pipe), which of the following statements applies to you?'; and responses included: 'You currently use electronic cigarettes or similar electronic devices (eg, e-shisha, e-pipe)'; 'You used them in the past, but no longer use them'; 'You tried them in the past but no longer use them'; 'You have never used them'; and 'Don't know'. 
For the comparison between waves, all the respondents who reported that they had ever used or tried e-cigarettes were classified as 'ever users of e-cigarettes'. Among e-cigarette ever users (2014 survey only), respondents who said that they were currently using electronic cigarettes were classified as current e-cigarette users and the rest as former e-cigarette users. Reasons for e-cigarette use In wave 82.4 (2014), respondents who had ever tried e-cigarettes were also asked 'How important was each of the following factors for starting (e-cigarettes)? 1. To be able to smoke in places where tobacco smoking is not allowed; 2. To stop or reduce tobacco smoking; 3. You considered them attractive, cool or fashionable'. For each factor, respondents could either say it was important ('very important'; 'fairly important'); not important ('not very important'; 'not at all important'); or "don't know". Perception of harmfulness Perception of e-cigarette harmfulness was assessed in both waves with the question 'In recent years, electronic cigarettes, or e-cigarettes, have been increasingly marketed in Europe. Do you think that they are harmful or not to the health of those who use them?'. Participants could respond 'yes'; 'no'; and 'don't know'; for our analysis, 'no' and 'don't know' were grouped together. Current tobacco use Smoking status was assessed with the question "Regarding smoking cigarettes, cigars or a pipe, which of the following applies to you?". Individuals who chose the response "You currently smoke" were classified as current smokers, those who selected the response 'You used to smoke but you have stopped' were classified as former smokers and those who responded that 'they have never smoked' were classified as never smokers. Sociodemographic characteristics Data were collected on participants' age (15-24; 25-39; 40-54; and ≥55 years), gender (male; female), educational level (the age when they stopped full-time education: ≤15; 16-19 or ≥20 years of age) and area of residence (rural; urban). Financial difficulties, as a potential proxy for socioeconomic status, were assessed with the question 'During the last twelve months, would you say you had difficulties to pay your bills at the end of the month…?' Response options included: 'Most of the time', 'From time to time' or 'Almost never/never'; for the purpose of this analysis, 'Most of the time' and 'From time to time' were grouped together. Statistical analysis Descriptive results are presented as proportions (%) with 95% CIs, while logistic regression results are presented as adjusted ORs (aOR) with 95% CI. Results are presented by geographic region, according to the United Nations geoscheme. 14 Changes in ever use and perceptions of harmfulness in each country, between 2012 and 2014, were assessed with logistic regression models, adjusted for age and smoking status, as these two were the most important factors associated with e-cigarette use in the 2012 wave. 10 The EU-wide dispersion in ever e-cigarette use and perception of harm was determined using the coefficient of variation, computed as the ratio of the SD to the mean. In order to explore changes in ever use of e-cigarettes and perceptions between 2012 and 2014 in the EU, a logistic regression model was fitted with survey year as an independent variable, adjusted for: age; educational level; difficulty to pay bills; gender; area of residence and smoking status. 
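The pooled model just described, ever use regressed on survey year with sociodemographic and smoking adjustments, has a direct analogue in most statistics environments. The sketch below uses Python/statsmodels with hypothetical column names; the original analysis was run in Stata with the official post-stratification weights, and the freq_weights argument here is only a crude stand-in for a full design-based survey analysis.

```python
# Rough analogue of the pooled logistic model described above: ever e-cigarette use
# (coded 0/1) on survey year, adjusted for age group, education, difficulty paying
# bills, gender, area of residence and smoking status. Column names are assumptions.
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

def fit_pooled_ever_use_model(df: pd.DataFrame) -> pd.DataFrame:
    model = smf.glm(
        "ever_ecig ~ C(year) + C(age_group) + C(education) + C(pay_bills)"
        " + C(gender) + C(residence) + C(smoking_status)",
        data=df,
        family=sm.families.Binomial(),
        freq_weights=np.asarray(df["poststrat_weight"]),
    )
    fit = model.fit()
    # Report adjusted odds ratios (exponentiated coefficients) with 95% CIs.
    table = pd.concat([fit.params, fit.conf_int()], axis=1)
    table.columns = ["coef", "lo", "hi"]
    return np.exp(table).rename(columns={"coef": "aOR", "lo": "2.5%", "hi": "97.5%"})
```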
In order to assess differences in trends, two-way interaction terms between the survey year and age, and between survey year and smoking status, were initially included in the model; however, none of these was statistically significant and they were dropped from the final model. A separate multilevel logistic regression was fitted among respondents who had ever tried e-cigarettes (2014 survey only), where being a current e-cigarette user was the outcome and independent variables included: age; educational level; difficulty to pay bills; gender; area of residence; smoking status; and reasons for trying e-cigarettes. 'Reasons for trying e-cigarettes' was added to the model in a stepwise forward method, as these may be considered as mediating factors between the association of sociodemographic factors and smoking; the significance level to keep variables in the model was set to 0.10. Finally, we calculated the Pearson correlation coefficient between the prevalence of ever smokers (current and former smokers) and the prevalence of e-cigarette ever use at a country level, in 2014, in order to explore whether variation in ever use of e-cigarettes could be explained by differences in the prevalence of current and former smoking. All analyses were performed with Stata 12.0 and weights provided in the official Eurobarometer datasets were used in order to account for the complex design of the survey. E-cigarette use Ever use of an e-cigarette in all 27 EU member states increased from 7.2% (95% CI 6.7% to 7.7%) in 2012 to 11.6% (95% CI 10.9% to 12.3%) in 2014. EU-wide coefficient of variation in ever e-cigarette use was 42.1% in 2012 and 33.4% in 2014. Ever use of e-cigarettes in the 2014 survey varied widely between countries, ranging from 5.7% in Portugal to 21.3% in France. The Pearson correlation coefficient between the prevalence of ever smokers and the prevalence of e-cigarette ever use at a country level was 0.28, indicating some correlation between the two variables. Similarly, several EU member states, such as Malta (aOR=5.46; 95% CI 2.82 to 10.58), showed considerable increase in the odds of ever e-cigarette use, whereas in some countries, the odds of ever e-cigarette use did not change significantly between 2012 and 2014. Also, within the 2014 Eurobarometer survey, approximately one in seven respondents who had ever tried an e-cigarette defined themselves as current e-cigarette users-indicating a transition from experimentation to current use (15.3%; 95% CI 12.9% to 17.7%), with between-EU member state variation ranging from 1.7% in Slovenia to 28.9% in Portugal (table 1). Perception of harmfulness The proportion of respondents who thought that e-cigarettes are harmful increased from 27.1% (95% CI 26.3% to 28.0%) in 2012 to 51.6% (95% CI 50.6% to 52.5%) in 2014, in the EU. In the 2014 survey, there was considerable variation between EU member states regarding the perceived harmfulness of e-cigarettes, with coefficient variation=19.2% (compared to 36.1% in 2012), and prevalence ranged from 31.1% in Hungary to 78.1% in the Netherlands. However, in most European countries, with the exception of Greece and Hungary, where the increase was not statistically significant, the perception that e-cigarettes are harmful increased significantly between 2012 and 2014 (table 2). 
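Two of the simpler summaries reported above, the EU-wide coefficient of variation in ever use and the country-level correlation between ever smoking and ever e-cigarette use, are easy to reproduce once country-level prevalences are available. The sketch below illustrates both; the prevalence arrays are placeholders, not the published country estimates.

```python
# Minimal sketch of two country-level summaries used above: the coefficient of
# variation (SD / mean) of ever-use prevalence across member states, and the Pearson
# correlation between ever-smoking and ever e-cigarette use prevalence by country.
import numpy as np
from scipy import stats

def coefficient_of_variation(prevalences) -> float:
    prev = np.asarray(prevalences, dtype=float)
    return float(prev.std(ddof=1) / prev.mean() * 100.0)  # expressed as a percentage

ever_ecig_by_country = np.array([0.057, 0.09, 0.12, 0.16, 0.213])   # hypothetical values
ever_smoking_by_country = np.array([0.38, 0.45, 0.52, 0.49, 0.55])  # hypothetical values

print(f"CV of ever use: {coefficient_of_variation(ever_ecig_by_country):.1f}%")
r, p = stats.pearsonr(ever_smoking_by_country, ever_ecig_by_country)
print(f"Pearson r = {r:.2f} (p = {p:.2f})")  # the paper reports r = 0.28 across 27 states
```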
Factors associated with e-cigarette use After adjusting for tobacco smoking and sociodemographic factors, respondents were more likely to report that they had tried an e-cigarette in 2014, compared to 2012 (aOR=1.90; 95% CI 1.77 to 2.03) (table 3). Being a current or a former smoker significantly increased the likelihood of having ever tried an e-cigarette (aOR=23.36; 95% CI 20.86 to 26.17, and aOR=6.54; 95% CI 5.74 to 7.45, respectively). Also, younger age (especially being 18-24 years old), living in urban areas and higher educational level, were associated with higher likelihood of having ever tried an e-cigarette. Respondents were also more likely to regard e-cigarettes as being harmful in 2014 (aOR=2.98; 95% CI 2.87 to 3.09), while those who were younger, had a higher educational level, less financial difficulties and who were former smokers, were more likely to perceive e-cigarettes as harmful (table 3). All two-way interaction terms between survey year and the other variables were not significant, indicating that the increase in the odds of having tried e-cigarettes between 2012 and 2014 did not significantly differ between men and women; between current smokers, former smokers and never smokers; and so on. Among those who had ever tried an e-cigarette, those defining themselves as current e-cigarette users were more likely to be older. Current e-cigarette users were more likely to have started using e-cigarettes because they thought e-cigarettes could help them quit smoking (aOR=2.82; 95% CI 1.99 to 3.99), as well as to circumvent smoking bans (aOR=1.54; 95% CI 1.19 to 2.00). On the contrary, attractiveness did not seem to influence their decision to become regular e-cigarette users (aOR=0.74; 95% CI 0.53 to 1.02) (table 4). DISCUSSION This analysis of the most up-to-date data from the whole of the EU shows that although perceptions that e-cigarettes are harmful are increasing, levels of ever use are also increasing. Those who began using e-cigarettes as a means to quit tobacco smoking or who used them in order to circumvent smoking bans were more likely to be current users of e-cigarettes. Interestingly, the proportion of youth and adults that reported having used e-cigarettes showed wide variation between European countries. These differences may be partly explained by the different prevalence of smoking in EU member states, considering that current and former smokers were much more likely to have tried e-cigarettes. This hypothesis is in line with the moderate correlation found between the prevalence of ever smokers and the prevalence of e-cigarette ever use at a country level. Moreover, use of e-cigarettes is also promoted as a cessation aid and this appears to be an important reason for many users. 15 Thus, availability and access to cessation aids may have influenced the adoption of e-cigarettes. For example, in Greece and Bulgaria, where smoking prevalence is high and use of evidence-based cessation aids low, 16 ever use of e-cigarettes was reported by more than 10% of the respondents in 2012. Similarly, trends between 2012 and 2014 could have been influenced by a number of factors that might differ between member states. Such factors include affordability of cigarettes and e-cigarettes, regulation of advertising and promotional activities, prevalence of use of other alternative tobacco products (eg, smokeless tobacco in Sweden) and enforcement of smoking bans in public places. 
In the majority of member states, the proportion of respondents who had tried e-cigarettes increased during the 2-year period between the surveys; most of the exceptions were countries where adoption of e-cigarettes was already high in 2012. Similar to ever use of e-cigarettes, perception of harm and trends over time varied between countries, even though the overall proportion of the population that considered e-cigarettes as harmful almost doubled in 2 years. As e-cigarettes become more popular, more information becomes available and evidence on potential risks associated with its use is accumulated. 17 Perceptions could also be influenced by public health campaigns, advertising and attitudes of health professionals and public health agencies towards e-cigarettes. For example, Public Health England recently published a report highlighting the potential of e-cigarettes as a harm reduction device, 18 whereas most public health agencies in the EU have not done anything similar. Even though this report was published after the second wave of the survey, it might reflect a more favourable stance of authorities towards e-cigarettes in the UK, which may explain why it had one of the lowest proportions of respondents who perceive e-cigarettes as harmful. Previous research has highlighted that the majority of e-cigarette use is among smokers 7 10 and that dual use is common. 19 20 However, there are concerns that e-cigarettes could become popular among non-smokers and possibly serve as a gateway to cigarette smoking. Our analysis showed that non-smokers were much less likely to have ever tried an e-cigarette, compared to smokers; nevertheless, ever use of e-cigarettes increased among them as much as among smokers, between 2012 and 2014, raising concerns regarding their rising popularity in population groups not addicted to nicotine. Our analysis also found that around one in seven people who had ever tried e-cigarettes defined themselves as current users. Many studies to date have failed to differentiate between experimentation and regular use, with the exception of some studies among young people. 21 22 Nonetheless, this one-in-seven figure is higher than reported in previous studies, which may reflect either differences by age or other factors. Additionally, despite its increasing popularity, those who tried an e-cigarette because they considered it attractive were not more likely to become current users, which may be in contrast to the importance of image and attractiveness for conventional cigarettes. 23 This may change as the market for e-cigarettes grows, and may depend on regulations around the advertising of these products. People who started using e-cigarettes as a cessation aid were much more likely to be current users. The effectiveness of e-cigarettes as a cessation aid is still being researched, 24 25 but it seems that a proportion of smokers who are trying to quit may be using it as such. 15 Dual use may also help smokers circumvent smoking bans by using e-cigarettes in places where tobacco smoking is prohibited, thus attenuating the impact of smoking bans. In the present study, those who thought that this was an important reason to try e-cigarettes were more likely to be current users-a possible indication of regular use. These findings may provide some insight into the motivation of people who become regular e-cigarette users and inform policies related to smoking cessation services and the effectiveness of smoking bans in public places. 
Regarding perceptions of harm caused by e-cigarettes, evidence from the UK on 11-18-year-olds has similarly concluded that perceptions of harm are on the rise. 7 We also found that perception that e-cigarettes are harmful was higher among respondents with higher education and financial status, findings that may indicate socioeconomic inequalities in knowledge about these novel products. However, as the discussion on the risks associated with e-cigarettes is ongoing, 1 it would be interesting to explore how e-cigarette users and non-users perceive these risks in comparison to smoking. 26 27 It must be noted that almost 3 of 10 participants (29.1%) responded that they did not know whether e-cigarettes were harmful, which indicates that there is still a lot of uncertainty regarding the health effects of ecigarettes. However, there is now evidence that e-cigarettes produce potentially harmful emissions, although the potential harms are most likely less than conventional cigarettes. 17 Considering that e-cigarettes are sometimes promoted as 'healthier' alternatives to conventional cigarettes, it would be of more interest to assess whether people consider them equally or less harmful to cigarettes, but, unfortunately, no such data were collected in the Eurobarometer. Hence we decided to focus on people's awareness of potential harmfulness of e-cigarettes and grouped 'no' and 'don't know' reponses together. Strengths and limitations This is the first study to assess the changes in perceptions and use of e-cigarettes in recent years, both nationally and at an EU level. The large sample size and the consistent sampling methodology allowed for reasonable comparisons between countries and years, despite all data being self-reported and no objective assessment of e-cigarette use being carried out. The wording of the questions assessing e-cigarette use was slightly different in 2014, not allowing us to assess changes in current use and potentially introducing misclassification bias. However, our analysis was limited to ever use of e-cigarettes and, despite the different wording between the two waves, there was no ambiguity in which response options reflected at least some use of e-cigarettes; therefore the bias introduced by this is most likely minimal. The question that assessed smoking status was somewhat atypical, but was consistent in both surveys. Moreover, any assumptions of causal relationships should be made with caution, as the data analysed were cross-sectional; longitudinal data would allow for more robust conclusions. Finally, current use was only assessed in 2014, and no data on important issues, such as duration of use and effectiveness as a cessation aid, were collected. 28 Conclusions Levels of ever use of e-cigarettes are increasing, and around one in seven of all people who have ever used e-cigarettes classify themselves as current users. These trends are against a backdrop of increasing perceptions that e-cigarettes are harmful to health, and there are large variations across the EU. Within these two consecutive cross sectional surveys, the majority of e-cigarette use is concentrated among current and former smokers, and people who start using e-cigarettes in order to quit smoking tobacco are more likely to continue to use e-cigarettes. These findings provide novel and extensive information on the prevalence and predictors of current use across the EU, and they highlight differences between member states. 
Further research in order to identify factors at individual and national level that may affect use of e-cigarettes is needed, ideally with prospective studies that could identify potential causal associations.
What this paper adds
▸ Ever use of e-cigarettes in the European Union (EU) increased from 7.2% in 2012 to 11.6% in 2014.
▸ EU residents were more likely to consider e-cigarettes as harmful in 2014 (51.6%) than in 2012 (27.1%).
▸ Those who started using e-cigarettes in order to quit smoking or circumvent smoking bans were more likely to become regular users.
▸ A better understanding of the population-level use and impact of e-cigarettes within the EU is needed, especially of the potential impact on smoke-free laws, smoking initiation and cessation.
2017-08-15T02:59:26.750Z
2016-05-24T00:00:00.000
{ "year": 2016, "sha1": "740d83fa8c54d396cdc37df5a33568a06b8d3061", "oa_license": "CCBYNC", "oa_url": "https://tobaccocontrol.bmj.com/content/tobaccocontrol/26/1/98.full.pdf", "oa_status": "HYBRID", "pdf_src": "PubMedCentral", "pdf_hash": "740d83fa8c54d396cdc37df5a33568a06b8d3061", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
265660727
pes2o/s2orc
v3-fos-license
Decreased NK cell count is a high-risk factor for convulsion in children with COVID-19 Background The neurological symptoms caused by severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2) are of increasing concern. Convulsions are among the main neurological manifestations reported in children with coronavirus disease-2019 (COVID-19), and cause serious harm to physical and mental health. This study aimed to investigate the risk factors for convulsion in children with COVID-19. Methods This prospective study was conducted at the Children’s Hospital of Soochow University. In total, 102 COVID-19 patients with convulsion, 172 COVID-19 patients without convulsion, and 50 healthy controls were enrolled in the study. The children’s clinical and laboratory data were analyzed to assess the risk factors for convulsion in COVID-19 patients. Results Convulsions occurred in 37.2% of children, mostly those aged 1–3 years, who were hospitalized with the Omicron variant. The neutrophil count, neutrophil-to-lymphocyte ratio (NLR), monocyte-to-lymphocyte ratio (MLR), platelet-to-lymphocyte ratio (PLR), and mean platelet volume-to-platelet ratio (MPR) were significantly higher in the convulsion group than those in the non-convulsion and control groups (P < 0.01). However, the counts of lymphocytes, eosinophils, platelets, lymphocyte subsets, CD3+ T cells, CD4+ T cells, CD8+ T cells, and NK cells were lower in the convulsion group than those in the non-convulsion and control groups (P < 0.01). Multivariate regression analysis indicated that NK cell count (OR = 0.081, 95% CI: 0.010–0.652) and a history of febrile seizure (OR = 10.359, 95% CI: 2.115–50.746) were independent risk factors for the appearance of convulsions in COVID-19. Conclusions History of febrile seizure and decreased NK cell count were high-risk factors for convulsions in COVID-19 patients. Supplementary Information The online version contains supplementary material available at 10.1186/s12879-023-08556-7. The subvariants of Omicron are characterized by strong infectivity, short latency, and immune escape.According to the National Center for Disease Control and Prevention (CDC) in China, the prevalent strains during our study were BA.5.2 and BF. 7 [2]. SARS-CoV-2 infection has generated a variety of clinical symptoms.In addition to the ubiquitous respiratory symptoms, some children have also exhibited neurological symptoms during the Omicron wave [3].A multi-center cohort study in the United States demonstrated that 40% of children with COVID-19 exhibited at least one neurological symptom [4].A systematic review by Misra et al. indicated that up to a third of COVID-19 inpatients experienced neurological symptoms [5].LaRovere et al. 
reported that 22% of children with COVID-19 had a neurological involvement [6].The neurological injuries caused by COVID-19 in children were mainly manifested as headache, myalgia, anosmia, febrile seizures (FSs), encephalitis, myelitis, meningitis, encephalopathy, Guillain-Barré syndrome, and acute disseminated encephalomyelitis, among which anosmia and febrile seizures were the most common [6].FSs are increasingly recognized by physicians as a pediatric emergency and early manifestation of acute necrotizing encephalopathy, encephalitis, and meningitis.Serious nervous system involvement, such as severe encephalopathy or brain edema, was observed in 2.5% of children infected with Omicron subvariants [7].The mortality rate among these patients was as high as 25.5% [6].Persistent and recurrent seizures were the main early manifestations of necrotizing encephalopathy and brain edema.If the occurrence of acute neurological damage can be predicted early, this can facilitate timely intervention for children and improve their prognosis.SARS-CoV-2 is transmitted by direct viral spread and through droplets/airborne from infected patients.It invades vascular, airway, and alveolar epithelial cells, endothelial cells, and macrophages by attaching to the ACE2 receptor [8].An uncontrolled innate immune response such as the excessive release of interleukin-6 (IL-6), IFN-γ, and monocyte chemoattractant protein-1, and imbalanced adaptive immunity are two ways in which SARS-CoV-2 induces lung tissue damage [9].In addition, it is currently believed that the pathogenic mechanism of SARS-CoV-2 infection on the central nervous system (CNS) involves direct invasion of the CNS [10] and an excessive release of pro-inflammatory cytokines, such as tumor necrosis factor-α and interleukins (IL) -1β, -6, -8 and -17 [11,12].Although several review articles have described the respiratory and neurological manifestations of COVID-19 [13,14], the demographic characteristics of presentations involving different organ systems after SARS-CoV-2 infection have not been explored.This study intended to compare clinical and experimental indicators of SARS-CoV-2 infection in children presenting with respiratory and neurological involvement.The secondary objective was to investigate high-risk factors for the development of convulsions in children with COVID-19 to provide an early warning and clues to their pathogenesis. Clinical characteristics of the convulsion and non-convulsion groups in inpatients infected by the Omicron variant The study cohort had a total of 274 inpatients infected by the Omicron variant, with 102 patients experiencing convulsion and 172 patients without convulsion.There were 61 males and 41 females, with a median age of 2.1 (1.4-3.4) years in the convulsion group, and 97 males and 75 females, with a median age of 1.1 (0.25-5.1) years in the non-convulsion group (Table 1).There was no significant difference in the sex ratio between the two groups.A greater proportion of children in the convulsion group were aged 1-3 years and were older compared with the children in the non-convulsion group (P < 0.05).In addition, children in the convulsion group were more likely to have a history of febrile seizure (FS) than those in the non-convulsion group.The duration in the convulsion group was shorter than that in the non-convulsion group (P < 0.05).However, patients in the non-convulsion group were more likely to exhibit cough, wheeze, and polypnea than those in the convulsion group (P < 0.05). 
Laboratory parameters of the convulsion, non-convulsion, and control groups As shown in Tables 2 and 3, there were no significant differences in sex ratio and age between the control and case groups (P > 0.05). The neutrophil count, neutrophil-to-lymphocyte ratio (NLR), monocyte-to-lymphocyte ratio (MLR), platelet-to-lymphocyte ratio (PLR), and mean platelet volume-to-platelet ratio (MPR) were significantly higher in the convulsion group than those in the non-convulsion and control groups (P < 0.01). However, the lymphocyte count, eosinophil count, platelet count, lymphocyte subsets, CD3+ T cell count, CD4+ T cell count, CD8+ T cell count, and NK cell count were lower in the convulsion group than those in the non-convulsion and control groups (P < 0.01). In addition, the monocyte count and the globulin, ALT, AST, LDH, and C4 values were higher in the convulsion and non-convulsion groups than those in the control group (P < 0.01). The ALP value and CD3−CD19+ B cell count were lower in the convulsion and non-convulsion groups than those in the control group (P < 0.01). The value of procalcitonin (PCT) was higher in the convulsion group compared to the non-convulsion group (P < 0.01). There were no significant differences in IgA, IgG, or IgM among the three groups (P > 0.05). The majority of children in both the convulsion and non-convulsion groups had decreased serum calcium and increased lactate and D-Dimer. The proportion of children with elevated lactate was higher in the non-convulsion group than in the convulsion group (P < 0.01).
Table 2 The hematological profiles of COVID-19 patients infected by Omicron variant with and without convulsion. The data are presented as median (interquartile range), mean ± standard deviation, [reference value] and n (%). The univariate analyses were performed using Kruskal-Wallis for skewed distribution variables, ANOVA for normal distribution variables and the chi-square test for categorical variables. Abbreviations: WBC, white blood cell; NLR, neutrophil-to-lymphocyte ratio; MLR, monocyte-to-lymphocyte ratio; MPV, mean platelet volume; PLR, platelet-to-lymphocyte ratio; MPR, mean platelet volume-to-platelet ratio; CRP, C-reactive protein; PCT, procalcitonin. P < 0.05 between a, b and c.
The risk factors for convulsion in children with SARS-CoV-2 infection The clinical and laboratory parameters with statistically significant differences between the convulsion and non-convulsion groups were included in the logistic regression analysis. Three models were built for regression analysis (Table 4). In Model 1 (without any correction factors), history of febrile seizure, cough, polypnea, neutrophils, lymphocytes, eosinophils, NLR, MLR, platelets, PLR, albumin, ALP, lymphocyte subsets, CD3+ T cell count, CD3+CD4+ T cell count, CD3+CD8+ T cell count and NK cell count were statistically significant indices for predicting convulsion (P < 0.05). After correcting for gender and age in Model 2, eosinophils and albumin were no longer statistically significant. All indicators were adjusted in Model 3, and history of febrile seizure and NK cell count were independent risk factors for convulsions in children with SARS-CoV-2 infection. In addition, ROC curve analysis indicated that the diagnostic value of history of febrile seizure combined with NK cell count was 0.720 (95% CI: 0.657-0.783, P < 0.01; Fig. 1 and Table 5).
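The final model-evaluation step reported here, combining history of febrile seizure with NK cell count and summarising discrimination with the area under the ROC curve, can be sketched as follows. The data frame and column names are assumptions, and this is an illustration of the approach rather than a reproduction of the SPSS analysis.

```python
# Hedged sketch: logistic model of convulsion on febrile-seizure history and NK cell
# count, with discrimination summarised by ROC AUC (the paper reports AUC = 0.720,
# 95% CI 0.657-0.783). Column names are assumptions.
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

def convulsion_model_auc(patients: pd.DataFrame) -> float:
    X = patients[["febrile_seizure_history", "nk_cell_count"]]  # hypothetical columns
    y = patients["convulsion"]                                   # 1 = convulsion, 0 = none
    clf = LogisticRegression(max_iter=1000).fit(X, y)
    scores = clf.predict_proba(X)[:, 1]
    return roc_auc_score(y, scores)
```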
Clinical characteristics of convulsion groups I and II Convulsions induced by SARS-CoV-2 infection manifested either as a single convulsion, multiple convulsions, or status epileptic.In this study, the children with a single convulsion and convulsion time of < 5 min were classified as group I and those with multiple convulsions or status epileptic were classified as group II. As shown in Table S1 and S2, there was no significant difference between groups I and II in clinical manifestations or blood routine indexes.However, globulin and IgA were lower in group II than in group I, and ALP was higher in group II than in group I (P < 0.05, Table S3).[16].A greater proportion of COVID-19 patients with convulsions were aged 1-3 years and were older than those without convulsions, which was also consistent with results reported in a previous study [4].In this study, 98.0% of COVID-19 patients with convulsions had fever and were diagnosed with febrile seizures (FSs), which illustrated that FS was the most common neurological sign of SARS-CoV-2 infection [17].In addition, our findings indicated that children with COVID-19 who had a history of FS were more likely to have convulsions than those without a history of FS, independent of the febrile peak of the child's fever.Hematological parameters play a significant role in the early diagnosis of multiple inflammatory illnesses [18][19][20].In this study, COVID-19 patients had lower counts of lymphocytes, eosinophils, and platelets and higher monocyte counts compared with those in healthy children, which is consistent with the results of previous studies [21,22].The lymphopenia was associated with viral infection of lymphocytes via angiotensin-converting enzyme 2 (ACE2) receptors, leading to lymphocyte apoptosis [23].Contrary to previous research [24], neutrophils were elevated rather than decreased in COVID-19 patients in this study, particularly in those with convulsions.This may be related to granulocyte irregularities in severe COVID-19 as well as infection with different viral variant strains [25].The neutrophil-to-lymphocyte ratio (NLR), monocyte-tolymphocyte ratio (MLR), and platelet-to-lymphocyte ratio (PLR) can also be indicators of early inflammation and have been associated with the severity of COVID-19 [26,27].However, linkages of these parameters to SARS-CoV-2-related nervous system injury have been limited.COVID-19 patients with elevated NLR, MLR, and PLR were described in this study, and elevations of these parameters were more pronounced in patients with convulsions.This implies that the inflammatory response was stronger in COVID-19 patients with convulsions than in those without convulsions. 
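Since the inflammatory indices discussed above are simple transformations of the routine blood count, they are straightforward to derive. The sketch below assumes a data frame with one row per child and hypothetical column names; it is not the study's actual data dictionary.

```python
# Sketch of the derived blood-count indices used in this study: NLR, MLR, PLR and MPR.
import pandas as pd

def add_inflammatory_ratios(cbc: pd.DataFrame) -> pd.DataFrame:
    out = cbc.copy()
    out["NLR"] = out["neutrophils"] / out["lymphocytes"]          # neutrophil-to-lymphocyte
    out["MLR"] = out["monocytes"] / out["lymphocytes"]            # monocyte-to-lymphocyte
    out["PLR"] = out["platelets"] / out["lymphocytes"]            # platelet-to-lymphocyte
    out["MPR"] = out["mean_platelet_volume"] / out["platelets"]   # MPV-to-platelet
    return out
```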
We also analyzed the peripheral blood lymphocyte subsets of children infected with SARS-CoV-2.In addition to the decreased total lymphocyte count, the subsets of CD4 + T cells, CD8 + T cells, CD19 + B cells, and NK cells were also decreased.SARS-CoV-2 infection is fought by cytotoxic CD8 + T cells, CD4 + T helper cells, NK cells, and B cells [25].CD8 + T cells and NK cells kill virus-infected cells, and B cells emit neutralizing antibodies [28].However, SARS-CoV-2 can disrupt normal immune responses by depleting the immune process and producing an uncontrolled inflammatory response [25,29].Multivariate logistic regression analysis revealed that a history of FS and a decreased NK cell count were independent risk factors for the development of convulsions in children with SARS-CoV-2 infection.Our results illustrate that NK cells played an important role in the occurrence of convulsions in COVID-19 patients.However, IgA, IgM, and IgG were not elevated in the children with SARS-CoV-2 infection in this study, which may be related to the strong immune escape effect of the virus. It is well known that SARS-CoV-2 infection in children causes multiple organ injury.Mild elevations of ALT and AST were present in this study, which was consistent with previous studies [30].Furthermore, SARS-CoV-2 infection-induced liver failure has been previously reported [31].Interestingly, the values of serum ALP and Ca 2+ decreased in COVID-19 patients in this study.A previous study found that vitamin D was closely associated with the severity of COVID-19 [32].Therefore, we speculate that SARS-CoV-2 might interfere with calcium metabolism and that this may be involved in the occurrence of convulsions. The current study concluded that patients with SE and multiple convulsions were more likely to progress to severe encephalitis, meningitis, and encephalopathy [33].In the present study, there was no significant difference in clinical and peripheral blood indices between patients with a single convulsion and those with multiple convulsions.It is noteworthy that IgA was at low levels in children with complex convulsions.The results of a previous study [34] also indicated that plasma IgA level was associated with prognosis and illustrated that IgA played a protective role in controlling SARS-CoV-2 infection. Definitions Status epileptic (SE): either a single unremitting seizure lasting longer than 5 min or frequent clinical episodes without an interictal return to the baseline clinical state [35]. Laboratory tests SARS-CoV-2 DNA-PCR assay Nasopharyngeal swabs were collected from all patients during admission for SARS-COV-2 assay.The detection was performed by RT-PCR with the SARS-CoV-2 nucleic acid detection kit (DaAn Gene Co., Ltd).All steps were performed according to the manufacturer's instructions.A value below 5 × 10 2 copies ml −1 was considered negative. Routine complete blood count, liver function, and immunoglobulin assays Routine blood count was conducted using a type BC-5310 instrument (Shenzhen Mindray Biomedical Electronics Co., Ltd).Serum ALT (alanine transaminase), AST (aspartate transaminase), ALP (alkaline phosphatase), and LDH (lactate dehydrogenase) were measured using a lactate dehydrogenase assay.Albumin and globulin were measured by biuret and salt out assays, respectively.These biochemical indicators were detected using a HITACHI 7180 biomedical analyzer.Complement C3, C4, immunoglobulin G (IgG), M (IgM), and A (IgA) were detected using a turbidimetric inhibition immunoassay (Orion Diagnostica Oy). 
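The negativity cut-off stated for the RT-PCR assay translates into a one-line decision rule. The helper below is a minimal illustration of that rule; the function and constant names are chosen for this example.

```python
# Minimal illustration of the stated RT-PCR decision rule: viral loads below
# 5 x 10**2 copies per mL were considered negative.
NEGATIVE_THRESHOLD_COPIES_PER_ML = 5e2

def classify_sars_cov_2_pcr(copies_per_ml: float) -> str:
    return "negative" if copies_per_ml < NEGATIVE_THRESHOLD_COPIES_PER_ML else "positive"
```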
All steps were performed according to the manufacturer's instructions. Statistical analysis The data were presented as median [interquartile range], mean ± standard deviation, and n (%). The t-test and analysis of variance (ANOVA) were used for normally distributed variables. The Mann-Whitney U and Kruskal-Wallis tests were used for skewed distribution variables. Categorical variables were compared using chi-squared or Fisher's exact tests. Binary logistic regression analysis was used to calculate the odds ratios (ORs) of variables. Receiver-operating characteristic (ROC) curve analysis was used to evaluate the diagnostic accuracy. The statistical analyses were performed using SPSS version 25.0 (IBM Corp., Armonk, NY, USA). Differences were considered statistically significant at P-values < 0.05.
Table 1 General characteristics of inpatients infected by Omicron variant with and without convulsion. The data are reported as median (interquartile range), mean ± standard deviation or n (%). The univariate analyses were performed using the Mann-Whitney U test for skewed distributed data, the t-test for normally distributed data, and the chi-square test or Fisher's exact test for categorical variables. P < 0.05 had statistical significance.
Table 3 The biochemical and lymphocyte subsets examination of COVID-19 patients infected by Omicron variant with and without convulsion. The data are presented as median (interquartile range), mean ± standard deviation, [reference value] and n (%). The univariate analyses were performed using Kruskal-Wallis for skewed distribution variables, ANOVA for normal distribution variables and the chi-square test for categorical variables. Abbreviations: ALT, alanine transaminase; AST, aspartate transaminase; ALP, alkaline phosphatase; LDH, lactate dehydrogenase. P < 0.05 between a, b and c.
Table 4 Factors associated with convulsion in COVID-19 patients infected by Omicron variant (multivariate analysis).
Table 5 AUC of each indicator for predicting convulsions in children with Omicron infection.
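The analysis plan above pairs parametric and non-parametric tests depending on the distribution of each variable. A compact Python/scipy analogue is sketched below; the use of a Shapiro-Wilk screen to decide between the two branches is an assumption, since the paper does not state how normality was assessed.

```python
# Sketch of the two-group comparisons described above: t-test for normally distributed
# variables, Mann-Whitney U for skewed variables, chi-squared for categorical variables.
import numpy as np
from scipy import stats

def compare_continuous(x, y, alpha: float = 0.05):
    x, y = np.asarray(x, float), np.asarray(y, float)
    # Shapiro-Wilk screen for normality (an assumption; not stated in the paper).
    normal = stats.shapiro(x).pvalue > alpha and stats.shapiro(y).pvalue > alpha
    if normal:
        return "t-test", stats.ttest_ind(x, y).pvalue
    return "Mann-Whitney U", stats.mannwhitneyu(x, y, alternative="two-sided").pvalue

def compare_categorical(contingency_table):
    chi2, p, dof, _expected = stats.chi2_contingency(contingency_table)
    return chi2, p
```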
2023-12-06T15:10:13.590Z
2023-12-01T00:00:00.000
{ "year": 2023, "sha1": "6745a28c12b991ecbe4359d168262dee5a99f9a4", "oa_license": "CCBY", "oa_url": "https://bmcinfectdis.biomedcentral.com/counter/pdf/10.1186/s12879-023-08556-7", "oa_status": "GOLD", "pdf_src": "Springer", "pdf_hash": "6745a28c12b991ecbe4359d168262dee5a99f9a4", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
213055278
pes2o/s2orc
v3-fos-license
Comparison of Lipid Profile Among the Different Stages of Chronic Kidney Disease in Children 1. Consultant, Department of Paediatrics, Mahbubur Rahman Memorial Hospital & Nursing Institute, Rupashdi, Banchharampur, Brahmanbaria, Bangladesh. 2. Assistant Professor, Department of Paediatrics, Khwaja Yunus Ali Medical College Hospital, Enayetpur. Sirajganj, Bangladesh. 3. Associate Professor, Khwaja Yunus Ali Medical College Hospital, Enayetpur. Sirajganj, Bangladesh. 4. Professor, Department of Paediatric Nephrology, Bangabandhu Medical University, Dhaka, Bangladesh. 5. Junior Consultant, Paediatrics, Upozila Health Complex, Kachua, Chandpur, Bangladesh. 6. Associate Professor, Department of Paediatrics, Monno Medical College & Hospital, Manikganj, Bangladesh. 7. Assistant Professor, Department of Paediatrics, Northern International Medical College& Hospital, Dhaka, Bangladesh. Introduction Chronic kidney disease (CKD) is a permanent and significant reduction in glomerular filtration rate or chronic irreversible destruction of the kidney tissue. 1 Chronic kidney disease is defined as abnormalities of kidney structure or function, present for more than 3 months, with implication for health. 2 Lipids are essential components of cell membranes, contributing to cell fuel, myelin formation, subcellular organelle function and steroid hormone synthesis. 3 Children with chronic kidney disease (CKD)/End stage renal disease (ESRD) exhibit various co-morbidities, including dyslipidemia. The prevalence of dyslipidemia in children with CKD and end stage renal disease (ESRD) is high (39-65 %). 4 Insulin resistance, increased Apo lipoprotein C-III and impaired lipolysis are involved in the inappropriate clearance of lipoproteins, contributing to lipid abnormalities in children with chronic kidney disease. 5 Dyslipidemia is prevalent in young child on peritoneal dialysis. The high glucose load from the dialysis fluid might contribute to this high dyslipidemia. 6 Many studies document that prevalence of dyslipidemia is an important risk factor for the development of cardiovascular and cerebrovascular disease in general population as well as in children with CKD. 7 Materials and Methods It was a cross sectional analytic study, conducted in department of Pediatric Nephrology, Bangabandhu Sheikh Mujib Medical University, Dhaka, over a period of 01st January 2016 to 30th June 2016. Fifty Children with CKD who were admitted in inpatient and attended in the outpatient department were included and divided into two groups, Group I and Group II. Children with CKD stage III and IV were included in Group I and stage V and VD in Group II. Data analysis and Statistical Analysis After collection, all the data were checked and edited. Then data were entered into computer with the help of SPSS software for windows programmed version 16. Chi-square and independent samples t-test and other appropriate statistical tests were done based on. Results This cross sectional study was conducted to see the comparison of Lipid Profile among the different Stages of Chronic kidney disease in Children. (P value <0.001) Discussion The present study was conducted in the department of pediatric nephrology, BSMMU, Dhaka, to see the status of Lipid Profile among the different stages of chronic kidney disease in Children. This study analyzed fasting lipid profile of 50 children with CKD. The current study has shown male is predominance with a male to female ratio of 1.27:1.This finding is almost similar to Bonthus et al 5 and Dvorakova et al. 
8 The male predominance may reflect the underlying etiology, as obstructive uropathy and glomerulonephropathy are more common in males. It was also observed that most of the patients (60%) were in the 11-15 year age group. Bonthus et al 5 reported similar findings. This similarity may be because both studies were performed in tertiary-level hospitals, where patients usually present late. In our study, the majority of patients (64%) came from rural areas and 36% from urban areas; as Bangladesh is a developing country, most people live in rural areas. This is consistent with the report of the Bangladesh Bureau of Statistics. 9 It was observed that the majority, 26 (52%), had a monthly income of 10000-20000 taka, 13 (26%) had 5000-10000 taka, and 11 (22%) had >20000 taka. This reflects the socioeconomic status of patients in our hospital and is also consistent with the report of the Bangladesh Bureau of Statistics. 9 The present study showed that the etiology of CKD included glomerulonephritis 14 (28%), obstructive uropathy 15 (30%), hypoplasia/dysplasia 9 (18%), polycystic kidney disease 9 (18%) and acute kidney injury 3 (6%). Roy RR et al 10 reported similar findings in their study, and both studies were done in the same hospital. Kanitar
Conclusion It can be concluded from the present study that dyslipidemia was common in CKD, with hypertriglyceridemia being the commonest abnormality, and that it is inversely related to GFR. Children with chronic kidney disease should routinely have their lipid profiles checked, and abnormalities should be addressed or treated. Larger multicenter studies are recommended to validate the findings of the present study.
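The group comparison described in the methods, independent-samples t-tests (and chi-square tests for categorical variables) between CKD stages III-IV and stages V/VD, can be sketched as follows. The lipid column names and grouping labels are assumptions, and this is an illustration of the analysis type rather than the study's SPSS syntax.

```python
# Illustrative comparison of fasting lipid fractions between Group I (CKD stages III-IV)
# and Group II (stages V/VD) with independent-samples t-tests. Column names are assumptions.
import pandas as pd
from scipy import stats

LIPID_COLUMNS = ["total_cholesterol", "triglycerides", "ldl_c", "hdl_c"]

def compare_lipid_profiles(children: pd.DataFrame) -> pd.DataFrame:
    group_1 = children[children["group"] == "I"]
    group_2 = children[children["group"] == "II"]
    rows = []
    for lipid in LIPID_COLUMNS:
        t_stat, p_value = stats.ttest_ind(group_1[lipid], group_2[lipid], nan_policy="omit")
        rows.append({"lipid": lipid, "t": t_stat, "p": p_value})
    return pd.DataFrame(rows)
```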
2020-01-02T21:46:19.622Z
2019-12-23T00:00:00.000
{ "year": 2019, "sha1": "40f30ac17c0cbbd3965bd3186b7ac901af530311", "oa_license": null, "oa_url": "https://www.banglajol.info/index.php/KYAMCJ/article/download/44416/32610", "oa_status": "GOLD", "pdf_src": "Anansi", "pdf_hash": "9dc7bd3efa809e0d9d7b974cd56a8b75882ac4ee", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
245617414
pes2o/s2orc
v3-fos-license
Deforestation scenarios show the importance of secondary forest for meeting Panama’s carbon goals Context Tropical forest loss has a major impact on climate change. Secondary forest growth has potential to mitigate these impacts, but uncertainty regarding future land use, remote sensing limitations, and carbon model accuracy have inhibited understanding the range of potential future carbon dynamics. Objectives We evaluated the effects of four scenarios on carbon stocks and sequestration in a mixed-use landscape based on Recent Trends (RT), Accelerated Deforestation (AD), Grow Only (GO), and Grow Everything (GE) scenarios. Methods Working in central Panama, we coupled a 1-ha resolution LiDAR derived carbon map with a locally derived secondary forest carbon accumulation model. We used Dinamica EGO 4.0.5 to spatially simulate forest loss across the landscape based on recent deforestation rates. We used local studies of belowground, woody debris, and liana carbon to estimate ecosystem scale carbon fluxes. Results Accounting for 58.6 percent of the forest in 2020, secondary forests ( \ 50 years) accrue 88.9 percent of carbon in the GO scenario by 2050. RT Introduction Conversion of tropical forests to other land uses is a major contributor to the rise in atmospheric carbon dioxide (CO 2 ; e.g., Baccini et al. 2012, Liu et al. 2015. Governments around the world are seeking creative and ambitious approaches to mitigate landbased CO 2 pollution and meet the recommendations of the Intergovernmental Panel on Climate Change to keep climate change to under 1.5 degrees Celsius (IPCC 2018). For example, the Amazon Fund, a trust instrument managed by the Brazilian National Development Bank, finances a diverse set of activities aimed at reducing deforestation in the Brazilian Amazon (Amazon Fund Activity Report(2017)http://www. amazonfund.gov.br/export/sites/default/en/.galleries/ documentos/rafa/RAFA_2019_en.pdf (2017)). Similar efforts to reduce deforestation in neighboring Amazonian countries, as well as in the Congo Basin, and Indonesia are underway (FAO 2011). In 2014, the government of Panama created the ''Alianza por el Millón'' (Alliance for a Million), a government partnership bringing together business, non-governmental organizations, educational institutions and other organizations to reforest one million hectares (Ministerio de Ambiente 2019a). These efforts are advancing in recognition that vast areas of the world could be reforested, thereby making significant inroads to combatting climate change (Griscom et al. 2017;Bastin et al. 2019). Yet simply because an area could support forest does not mean the social and political requirements can be easily met (Holl and Brancalion 2020). For areas that are targeted for reforestation and forest restoration, initiatives will ultimately require a combination of both active reforestation, defined by tree planting on land that was previously forested (Cunningham et al. 2015), as well as passive, natural recovery where land is left fallow and natural processes of forest succession are permitted to take place (Holl and Aide 2011). The ability of tropical secondary forest to rapidly accumulate biomass has been recognized for several decades (e.g., Flint et al. 1999;Silver et al. 2000). Recently, (Poorter et al. 2016) reported on biomass recovery for 45 sites across the Neotropics, finding that, on average, these forests accrue 120 Mg ha -1 in aboveground biomass (AGB) in the first 20 years of recovery. Using the same dataset, (Chazdon et al. 
2016) suggested that Latin American second-growth forests could gain as much as 8.48 Pg Carbon over 40 years. In Panama, secondary forests in moist areas on very low phosphorus soils can accrue 40% of the AGB and carbon of mature forest during the first 20 years of forest regrowth (Battermann et al. 2013). Given that these forests draw on the local species pool that is adapted to local biotic and abiotic conditions (Breugel et al. , 2019 and that a major obstacle to active reforestation is lack of knowledge on survivorship and the basic growth characteristics of selected trees Ashton 2016), reliance on natural forest recovery for carbon sequestration presents an attractive option for carbon sequestration initiatives. Forest carbon-based climate change mitigation programs are occurring in a complex socio-ecological context of changing land-use patterns, which makes predicting the future landscape condition largely untenable. Scenario planning is a powerful tool for the exploration of alternative futures in land systems characterized by irreducible uncertainty, while explicitly incorporating relevant science and internally consistent assumptions about drivers, relationships, and constraints (Peterson et al. 2003; Thompson et al. 2012). Scenarios are a rigorous way of asking ''What if?'' and they can help decision makers better anticipate the potential consequences of alternative sets of decisions, so they can either adapt as changes occur or actively take steps to avoid undesirable futures (Chermack et al. 2001). Scenarios can be derived by researchers or planners who have a technical understanding of the system (e.g., Dale et al. 2003;Hall et al. 2012;Roriz et al. 2017). Alternatively, scenarios can be codesigned with groups of stakeholders to increase the salience of the process (McBride et al. 2017). In either case, the development of alternative pathways and the subsequent simulation and visualization of alternative scenarios is a powerful tool for decision makers tasked with planning for the short-and longterm sustainability of land systems (Mallampalli et al. 2016). Many authors have projected changes in forest cover in view of evaluating carbon emissions and/or establishing baselines for Reduced Emissions due to Deforestation and forest Degradation (REDD ?) projects (e.g., Pelletier et al. 2011;Sloan and Pelletier 2012;Stibig et al. 2014). These studies often use remote sensing-based products to estimate deforestation rates and derive empirical relationships between the location of deforestation events and spatial driver variables (Tokola 2015;Thayamkottu and Joseph 2018;Sy et al. 2019). Carbon is often assigned to classified Landsat or MODIS data from limited or localized forest inventories, such that it is difficult or impossible to disentangle local, landscape, and regional variations (Houghton et al. 2001;Tyukavina et al. 2013). Further, while obtaining deforestation rates can be challenging, quantifying forest gain has proven even more difficult, because in contrast to most deforestation events, reforestation is a gradual process and can be confounded with fields in temporary states of fallow. The challenge of accurately following reforestation signals is elevated when clear imagery is limited, as is the case in the tropics (Walker 2016). Indeed, the (Hansen et al. 2013) Global database on forest change no longer includes an assessment of forest recovery . 
Most efforts to model forest carbon dynamics rely on allometric equations derived from estimates of aboveground biomass in trees (e.g., Duncanson et al. 2019;Kellner et al. 2019) where plot-based estimates are applied to broad forest categories that do not capture the compositional or successional variation present within the landscape Graves et al. 2018). Aboveground tree carbon is readily derived but does not account for all the carbon in the forest ecosystem. Further, lack of consideration of changes in carbon stocks associated with forest loss and recovery masks the magnitude and ultimate consequences of changes resulting from policy decisions. The objective of this study was to evaluate the effects of four deforestation scenarios, where persisting young forest grows over time, on carbon sequestration in central Panama, an area covering 23 percent of the country. Our approach is similar to that of (Sangermano et al. 2012) who used historic rates of deforestation to develop scenarios assessing win-win possibilities for carbon and biodiversity under REDD ? in Bolivia. Our study improves upon recent studies evaluating forest carbon, the possible consequences of REDD ? policies and conservation scenarios in Panama (Pelletier et al. 2011;Sloan et al. 2018) and beyond (e.g., Venter et al. 2009;Avitabile et al. 2016) by including widely recognized but often ignored aspects of carbon accounting. First, we go beyond studies of aboveground tree biomass by using locally derived data on both above-and below-ground tree carbon, soils, woody debris, and lianas to estimate ecosystem carbon stocks. Second, we leverage a local landscape-scale study of secondary forest dynamics to develop a locally adapted model of forest growth allowing an accurate estimate of forest carbon recovery. Tang et al. (2020) followed a similar approach in their work using remote sensing to estimate change in forest carbon in Colombia. Whereas they extracted static data from a chronosequence study, we use dynamic data to model forest growth (see below). We follow Erb et al. (2017) to calculate the hypothetical forest potential in the absence of deforestation going an additional step to also reforest all available vegetated land in our study area. Our scenarios allow us to evaluate the carbon consequences of divergent futures for central Panama and, in doing so, provide a lens through which land use decisions in other regions may be viewed to inform policy. We evaluate potential differences in direction and magnitude of ecosystem carbon dynamics by comparing them to the more commonly employed approach of relying on aboveground tree carbon or biomass in making policy decisions. Finally, we briefly discuss the implications of divergent scenarios on water provisioning ecosystem services. Study region The study region encompasses the mainland area of central Panama (Fig. 1). The region includes 23 percent of the country's land area. Just over half of the nation's population (Instituto Nacional de Estadistica y Censo de Panama 2019) lives within the study area and it includes the Panama Canal Watershed (PCW), an area of critical economic importance where earlier projections of land cover and carbon storage have been conducted (Dale et al. 2003). The region is also highlighted for its provision and dependence on ecosystem services (Heckadon-Moreno et al. 1999;Condit et al. 2001;Ibanez et al. 2002;Hall et al. 2015). 
We included the area outside of the PCW owing to discussions concerning the possibility of diverting an adjacent river into the PCW in order to overcome projected water shortages (Adamowicz et al. 2019) and because of its potential role in the forest recovery in Panama described by (Wright and Samaniego 2008). Initial forest conditions We combined two datasets to define the initial forest conditions for the scenarios. We utilized the Panama Vegetation-Cover Time-Series (PVCTS) map for 2020 to define the extent of our initial forest area (Walker 2020;Walker 2021a). The PVCTS maps depict forest-cover and forest-cover change in Panama from 1990 to 2020 at 30 m resolution. The PVCTS maps were locally calibrated and validated, shown to be highly accurate (Walker 2020), and are the best available data for this project. We assigned aboveground carbon density values to each PVCTS 2020 forest pixel using a LiDAR derived map from (Asner et al. 2013). Asner et al. (2013) developed their carbon density map for Panama using airborne LiDAR technology combined with satellite imagery from 2012. They estimated the carbon stock within 10% deviation from the plot-level, field-estimated values of carbon density, an unprecedented precision that affords the ability to measure real change in carbon stocks over time (Mascaro et al. 2011;Asner et al. 2013). We reclassified ten of the original 24 land cover classes in the 2020 PVCTS map into a single composite Forest class consisting of High Veg, High Gallery Veg, Deciduous Plantation, Disturbed Forest, Forest Gallery, Evergreen Plantation, Wetland Forest, Water and Wetland Forest Mixed, Mature Forest, and Gallery in Mature Forest (Table SI 1). Wetland Forest and Wetland Forest Mixed together represent 0.68 percent of the study area. We acknowledge potential inaccuracies in applying a broadleaf forest growth model to wetland forest types but, given its limited extent relative to the study area and the goals of scenario projections, we feel it is acceptable. Deciduous Plantations (0.26 percent of the study area) are grown according to rules described below. Percent coverage of land use classes in the 2020 PVCTS map are found in Table SI 1. Finally, we resampled and coregistered the forest mask to 1 ha resolution using a nearest neighbor approach to match the resolution of the carbon density map produced by (Asner et al. 2013). We assigned Asner's 2012 carbon densities values to each 1 ha forest pixel in the 2020 PVCTS forest extent. To temporally align Asner's 2012 carbon densities with the 2020 PVCTS map, we implemented an initial 8-year spin-up whereby each forest pixel was aged ? 8 years according to the carbon accumulation model described below. The combined forest extent and carbon density map was used as the initial starting conditions for our scenarios. Non-forest areas were excluded from the analysis except in the Grow Everything scenario (see below). Deforestation rates and spatial patterns We used (Walker 2020) PVCTS 2016-2020 deforestation maps to define the baseline rate and spatial pattern of deforestation. Because we could not simulate forest gain, we did not estimate the net change . Deforestation in Panama is influenced by a suite of socio-economic factors (e.g., Sloan and Pelletier 2012;Walker 2021b) that may be correlated to landscape features and human settlements. As of 2011, a large discrete deforestation event began in our study area with the clearing of forest for the Petaquilla and Cobre Panama Gold and Copper mines (Fig. 1). 
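The initial-conditions step combines a reclassified forest mask with the 2012 carbon densities and then ages every forest pixel forward eight years. The numpy sketch below is a schematic of that workflow; the class codes are placeholders, and the growth model is passed in as a pair of functions because the fitted AGB-age relationship is described later in the methods.

```python
# Schematic sketch of the initial conditions: collapse the 2020 land-cover classes into
# a binary forest mask, then run an 8-year "spin-up" in which each forest pixel's 2012
# carbon density is converted to an implied forest age and grown forward to 2020.
# Class codes and the age/carbon conversion functions are assumptions.
import numpy as np

FOREST_CLASS_CODES = [1, 2, 3, 4, 5, 6, 7, 8, 9, 10]  # ten classes merged into "Forest"

def spin_up_carbon(landcover_2020, carbon_2012, age_from_carbon, carbon_from_age,
                   spin_up_years=8):
    """Return the 2020 starting carbon map, restricted to the 2020 forest extent."""
    forest_mask = np.isin(landcover_2020, FOREST_CLASS_CODES)
    implied_age_2012 = age_from_carbon(carbon_2012)            # invert the growth model
    carbon_2020 = carbon_from_age(implied_age_2012 + spin_up_years)
    return np.where(forest_mask, carbon_2020, np.nan)
```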
As these gold and copper deposits are related to belowground geology and deforestation to high-level political decisions not detectable by geographical features, we excluded this area from our analysis of the spatial allocation of deforestation. We do, however, include this in our analysis of the rate of deforestation. To replicate the historical pattern of deforestation, we quantified the empirical relationships between the location of deforestation events and a suite of spatial driver variables and used these relationships with the Dinamica EGO cellular land-cover change model to project deforestation patterns (see model details below). Distance from Cities, Towns, Rivers, Roads, Slope, Elevation, and Conservation areas were based on spatial data from Smithsonian Tropical Research Institute (STRI). All pre-processing was performed using ESRI's ArcGIS software and co-registered to match the spatial resolution and extent of the initial forest conditions map described above. We evaluated the potential to stratify the study area into sub-regions to account for and regional variability in the predictive power of different spatial correlates of change within the land cover model. However, we did not identify any strata that would improve the relationships and the Weights of Evidence (see model description below) for the driver variables were linear through the full study area, suggesting that a single simulation region was best. Furthermore, constraining the rates of change to sub-regions would create artificial breaks to the patterns of sprawling development surrounding developing cities such as Panama City. Running the model with one region allowed for a natural progression of deforestation from areas that historically experienced high rates into areas of the study area that historically had little development. Secondary growth forest data We estimated the carbon density of young secondary forests using data from the Agua Salud Project, which is located within our central Panama study region (Stallard et al. 2010) and maintains a network of 108 secondary forest dynamics plots (SFD) distributed across a 3000 ha landscape ). The Agua Salud SFD is a highly replicated chronosequence with a nested design where 2 plots per site were placed up slope and down slope to capture within site variability. The SFD is dominated by plots in forest from the first 40 years of forest recovery from pasture. Except for 2014, plots have been measured annually from 2009 to 2018 such that [ 80,000 stems have been measured each year resulting in 1,100,000 measurements of 120,000 independent stems. The high plot replication within age class coupled with repeated plot measurements over time were used to develop our model estimating changes in forest biomass and carbon with time (see below). Based on extensive field visits and inventories throughout central Panama we determined that the Agua Salud SFD reasonably represents forest recovery for forests in central Panama. Ecosystem pools: roots, soil, woody debris, and lianas We used data from studies completed within our study area to estimate carbon stocks in tree roots, soil, woody debris, and lianas to develop ecosystem scale carbon estimates. Sinacore et al. 2017) excavated trees up to 35 cm dbh and determined belowground biomass and carbon for roots down to 2 mm diameter (coarse roots). Average belowground allocation was found to be 27.6 percent of total tree carbon. We thus considered per ha tree AGB found by (Asner et al. 
(2013) to account for 72.4 percent of total tree biomass and adjusted accordingly. We did not specifically account for fine roots, some of which would have been measured in the soil carbon pool. Neumann-Cosel et al. (2011) evaluated soil carbon at Agua Salud in pastures, young forests, and forests up to 100 years old. Total soil carbon for pastures was found to be 46.96 Mg C per ha for 0-20 cm depth and 58.43 Mg C per ha for 100-year-old forest which, compared with mature forest on nearby Barro Colorado Island, appears to be in the final stages of carbon accumulation (Neumann-Cosel et al. 2011). We considered the pasture carbon as a baseline with an equivalent annual increase, such that our forests increase by 0.11 Mg C per ha per year up to 100 years and thereafter are in a steady state; a linear trend is supported by work completed at Agua Salud by Püspök (2019). As Neumann-Cosel et al. (2011) found no significant difference between carbon pools at 10-20 cm depth, for this study we consider anything below 20 cm as immobile; our ecosystem carbon therefore only includes the mobile fraction of soil carbon, i.e., the carbon accrued above the pasture baseline. Gora et al. (2019) measured woody debris in mature forest of Barro Colorado Island, approximately 8 km from Agua Salud. They estimated woody debris as 20.63 Mg per ha, equivalent to 9.78 Mg C per ha. In contrast, our young forests recruiting from pastures have little observable woody debris (JS Hall personal observation, M. Larjavaara unpublished data). For the purposes of this study, we considered the Gora et al. (2019) value to be achieved and in steady state by 100 years. Assuming an equivalent annual increment in woody debris carbon with forest growth, we estimated woody debris carbon to increase at a rate of 0.10 Mg C per ha per year. Lai et al. (2017) measured and estimated liana carbon in our Agua Salud SFD network and found a near linear increase in this pool. We used an annual increment derived from their data of 0.15 Mg C per ha per year. This is slightly higher than the rate reported by Heijden et al. (2015) for old secondary forest. We made the conservative assumption, with respect to liana biomass accumulation, that it would be in a steady state beyond 100 years.

Modeling changes in carbon density with forest growth

Secondary forests

We used locally derived allometric equations to determine the carbon density of each plot in the SFD network. Asner et al. (2012), in turn, used these data to develop the LiDAR model that estimated carbon throughout Panama (Asner et al. 2013). Lai et al. (2017) fit a model of changes in aboveground biomass (AGB) to the dynamic data from the Agua Salud SFD. Building on Lai et al. (2017), we used data through 2018 to fit five models and select the best model with the highest in-sample and out-of-sample accuracy (see below). We modelled aboveground biomass (AGB, Mg/ha) as a function of forest age (yr) using five candidate mixed-effect model forms, the simplest being (1) Linear: AGB = b_1 Age, with the remaining candidates being nonlinear forms including Michaelis-Menten and two-parameter asymptotic exponential models (see below). In all models, the regression was forced through the origin such that a forest site has zero AGB at age zero. To account for the nested sampling design, we also included a random plot-within-site effect that accounts for variations at the site level. We included the random effect only in the linear coefficient, b_1, because we assumed that the short-term (ten-year) AGB trajectory at the plot level is near-linear within each plot's shorter forest-age range, compared to the longer-term AGB-forest age relationship across the chronological landscape.
Moreover, models with random effects beyond the linear coefficient failed to converge, yielding unreliable parameter estimates. Prior to analysis, we scaled the forest age variable by dividing it by its standard deviation. Doing so greatly assisted model convergence, especially for the more complex nonlinear Michaelis-Menten and 2-parameter asymptotic exponential models. Next, we quantified the models' in-sample accuracy using AICc (Burnham and Anderson 2002). In addition to AICc, we also compared models on their ability to accurately predict new, out-of-sample data. To do so, we used a tenfold cross validation that splits the data into ten exclusive partitions, and for each partition used 90% of the data to train the model and the remaining 10% to test model predictions. The prediction accuracy of each trained model on test data was measured as the root mean square error (RMSE) and the coefficient of determination (R^2). When the best model with the lowest RMSE and/or highest R^2 was selected, we refit the model with the whole dataset for a more precise parameter estimation. The best model with the lowest AICc, lowest RMSE, and highest R^2 was then used as the basis to grow forest (see below). The mixed-effect models were fit using the nlme package (Pinheiro et al. 2018) in R v3.6.3 (R Core Team 2019). Using a conversion factor of 0.474 (Martin and Thomas 2011), we converted aboveground biomass to carbon for the equation from the best model above. Carbon densities in the Asner et al. (2013) data were converted to age using the best AGB-forest age regression model to allow their use in the growth model.

Plantations

Deciduous Plantations cover 0.26 percent of our study area (see above). Many of these plantations exhibit extremely poor growth (Stefanski et al. 2015) as the species planted commercially, teak (Tectona grandis), grows poorly on the infertile, clay, low pH soils (Lugo et al. 1997) that dominate central Panama. For existing plantations, we assumed growth of one-half the rate of secondary forest in aboveground tree C based on observed growth and carbon accumulation in teak plantations on the dominant soils in the PCW (Hall 2013; Stefanski et al. 2015; Sinacore et al., in review), and published site index curves for the region (Keogh 1982).

Increases in ecosystem carbon

Increases in AGB and C for secondary forests and plantations were completed as described above. To estimate other ecosystem carbon pools, we applied the percent root contribution of Sinacore et al. (2017) to the pixel-level aboveground carbon data of Asner et al. (2013) and subsequent modeled growth. As we determined soil, woody debris, and liana carbon to all increase at a uniform rate over 100 years, we then applied an annual increase of 0.36 Mg C per ha per year (0.11 Mg C per ha soil carbon + 0.10 Mg C per ha woody debris + 0.15 Mg C per ha liana carbon = 0.36 Mg C per ha) up to year 100, after having determined forest age from the AGB-forest age relationship (see above). When converting to carbon dioxide equivalents, we divided the Mg C by the atomic mass of carbon (12.01) and then multiplied by the combined atomic mass of one carbon and two oxygen atoms (16.00 each), or 44.01.

Scenarios

The Recent Trends (RT) scenario projects a linear continuation of the observed rates and spatial allocation of deforestation during the reference period, with remaining forest continuing to grow following our growth model. Rates of deforestation are based on the rate of change in the 2016-2020 deforestation maps.
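As a concrete illustration of this model-selection step, the sketch below fits two of the candidate growth forms (linear and Michaelis-Menten) to hypothetical plot data and compares them with AICc and tenfold cross-validated RMSE. The random plot-within-site effect and the actual SFD dataset are omitted for brevity, and all function names, data, and starting values are placeholders rather than the code or values used in this study.

```python
import numpy as np
from scipy.optimize import curve_fit

# Candidate AGB-age forms (regressions forced through the origin, as in the text).
def linear(age, b1):
    return b1 * age

def michaelis_menten(age, a, k):
    return a * age / (k + age)

def aicc(rss, n, p):
    # AICc for least-squares fits (Burnham and Anderson 2002).
    aic = n * np.log(rss / n) + 2 * p
    return aic + 2 * p * (p + 1) / (n - p - 1)

def cv_rmse(model, p0, age, agb, folds=10, seed=0):
    # Tenfold cross-validation: train on 90% of the plots, test on the held-out 10%.
    rng = np.random.default_rng(seed)
    idx = rng.permutation(len(age))
    errs = []
    for part in np.array_split(idx, folds):
        train = np.setdiff1d(idx, part)
        popt, _ = curve_fit(model, age[train], agb[train], p0=p0, maxfev=10000)
        errs.append(np.sqrt(np.mean((agb[part] - model(age[part], *popt)) ** 2)))
    return np.mean(errs)

# Hypothetical plot data: forest age (yr) and aboveground biomass (Mg/ha).
age = np.linspace(1, 40, 200)
agb = 250 * age / (15 + age) + np.random.default_rng(1).normal(0, 15, age.size)

for name, model, p0 in [("linear", linear, [5.0]),
                        ("Michaelis-Menten", michaelis_menten, [250.0, 15.0])]:
    popt, _ = curve_fit(model, age, agb, p0=p0, maxfev=10000)
    rss = np.sum((agb - model(age, *popt)) ** 2)
    print(name, "AICc:", round(aicc(rss, age.size, len(p0)), 1),
          "CV RMSE:", round(cv_rmse(model, p0, age, agb), 2))
```

Aboveground biomass from the selected model can then be converted to carbon with the 0.474 factor and to CO2 equivalents by multiplying by 44.01/12.01, as described above.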
This equates to an annual deforestation rate of 0.43 percent. The Accelerated Deforestation (AD) scenario envisions a significant increase in the rate of deforestation. To simulate the AD scenario, growth occurs as described above. We followed the approach of Sangermano et al. (2012) of incorporating longer-term forest change data and, more specifically, of Dale et al. (2003), who used the annual deforestation rate of 2.25 percent derived by Heckadon-Moreno et al. (1999) for the period 1976-1998. To show the potential of forests in contributing to land-based carbon sequestration, we include two additional scenarios. The Grow Only (GO) scenario projects the hypothetical carbon potential of forest within our forest cover mask with no change in the spatial extent of forest in the study region and no deforestation. Our Grow Everything (GE) scenario goes beyond the GO scenario by further permitting forest to recruit, establish, and grow on all vegetated land within the study area. Thus, the GE scenario adds an additional 530,483 ha of forest area to the other three scenarios, illustrating the ceiling up to which forests could theoretically contribute to carbon sequestration until 2050.

Simulating forest cover loss

We use the Dinamica Environment for Geoprocessing Objects 4.0.5 (Soares-Filho et al. 2002) to simulate thirty years (2020 to 2050) of land-cover change under the RT and AD scenarios using annual time steps. Because the Grow Only and Grow Everything scenarios include no forest-cover loss, they are not incorporated into the Dinamica model. We examined historic deforestation patterns from 2001 to 2011 in relation to a suite of spatial predictor variables (Table 1). We selected these variables based on previous experience modeling land-cover change (Thorn et al. 2016; Thompson et al. 2017), the availability of detailed GIS data for the study region, and our personal knowledge of the region. Dinamica EGO is a spatially explicit cellular automata model of landscape dynamics capable of multi-scale stochastic simulations that incorporate spatial feedback. Dinamica has been used to simulate land-cover change globally, including several applications in Central and South America (Gago-Silva et al. 2017; Kolb and Galicia 2017; Ramírez-Mejía et al. 2017; Roriz et al. 2017; Lima et al. 2018). Dinamica EGO uses a weights-of-evidence (WoE) method to set the transition probability for any given land-cover pixel. The WoE method employs a modified form of Bayes' theorem of conditional probability (Goodacre et al. 1993; Bonham-Carter 1994) to derive weights where the effect of each spatial variable on a transition is calculated independently of a combined solution (Soares-Filho et al. 2009). Continuous variables are discretized through an iterative binning process so that individual weights can be calculated for each bin. Dinamica EGO calculates weights (W+) for each driver variable independently and then sums the W+ values to create a composite transition potential map. For each driver variable, positive W+ values predict the future occurrence of new deforestation patches, while negative W+ values predict the future absence of new deforestation patches. The Recent Trends and the Accelerated Deforestation scenarios use the same weights and resulting probability maps but differ in their rates of change. As previously noted, the Grow Only scenario contains no forest cover loss while the Grow Everything scenario expands the forest estate to its maximum potential.
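To make the scenario rates concrete, the short sketch below projects remaining forest area under the RT (0.43 percent per year) and AD (2.25 percent per year) deforestation rates. The starting forest area is a placeholder value, and the calculation simply compounds the annual rates; it does not reproduce the spatially explicit Dinamica simulation described below.

```python
def project_forest_area(initial_ha, annual_rate, years):
    """Compound an annual fractional deforestation rate over a number of years."""
    return initial_ha * (1.0 - annual_rate) ** years

initial_forest_ha = 900_000  # placeholder initial forest area, not the study's value
for label, rate in [("Recent Trends (0.43%/yr)", 0.0043),
                    ("Accelerated Deforestation (2.25%/yr)", 0.0225)]:
    remaining_2050 = project_forest_area(initial_forest_ha, rate, 30)
    print(label, "| forest remaining in 2050:", round(remaining_2050), "ha",
          "| loss:", round(initial_forest_ha - remaining_2050), "ha")
```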
For a more detailed discussion of Dinamica EGO and the Weights of Evidence method, see Thompson et al. (2020). Within Dinamica EGO, six parameters control the patterns of deforestation: the land cover transition rates, the pixel-level transition probabilities (W+), the ratio of "new" vs. "expansion" patches, the mean and variance of "new" and "expansion" patch sizes, and the patch shape complexity (i.e., patch aggregation). "New" deforestation patches are defined as those occurring in interior forests (i.e., at least one 1 ha pixel away from a forest edge). "Expansion" patches are defined as deforestation that occurs in forest edge pixels. Our procedure for estimating these parameters was as follows: (1) The land-cover transition rates for the Recent Trends simulation are based on observed changes in the historical reference period (2016-2020). Transition rates for the AD scenario come from Heckadon-Moreno et al. (1999). All other parameters are identical for the RT and AD scenarios. (2) The ratio of new to expansion patches was set to 0.93 new and 0.07 expansion. This ratio was calculated based on the observed ratio in the historical training data. (3) The quantity and size of deforestation patches were based on a normal distribution with a mean of 1.00 ha and variance of 23.95 ha. The true mean patch size in the reference period was 1.59 ha; however, the median was 1.00 ha, so to account for the heavy right skew in the patch size histogram, we set the mean patch size to 1.00 ha. (4) Patch shape complexity is controlled by an isometry parameter, which is a multiplier that increases or decreases the underlying transition probability values of neighboring cells around a seed cell. Values greater than 1 result in simpler, more aggregated shapes; values less than 1 result in more complex (i.e., less aggregated) shapes. We found that an isometry value of 1.0 (no modification) best matched the patch shape complexity observed in the training data.

Model accuracy assessment

We assessed the performance of our land-cover change model in terms of its robustness to two key assumptions: 1) that the empirical WoE relationship derived during calibration between the spatial driver variables and disturbance events was stationary through subsequent time periods, and 2) that the patch seeding algorithm could replicate the spatial pattern of deforestation observed within the calibration period. For these assessments, we calibrated deforestation in Dinamica using the PVCTS 2011-2016 map and then simulated four years of deforestation spanning 2016-2020. We used the Figure of Merit (FOM) metric to quantify the similarity between the simulated and observed PVCTS 2016-2020 maps. The FOM is a ratio comprised of three components: Hits, Misses, and False Alarms. The numerator (Hits) represents the intersection of True Deforestation and Simulated Deforestation (i.e., change pixels in the calibration map that have been correctly simulated as change in the confirmation map). The denominator represents the union of True Deforestation and Simulated Deforestation (Misses + Hits + False Alarms), where Misses are the area of error where confirmation change is simulated as persistence, and False Alarms are the area of error where confirmation persistence is incorrectly simulated as change. In addition, using the WoE method applied in Dinamica, we mapped the probability of deforestation based on patterns observed in the 2011-2016 period in relation to the suite of spatial predictor variables (Table 1).
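A minimal sketch of the Figure of Merit calculation described above follows; it assumes the observed and simulated change maps are available as boolean arrays over the same pixel grid, which is a simplification of the actual map comparison, and the arrays shown are toy placeholders.

```python
import numpy as np

def figure_of_merit(observed_change, simulated_change):
    """FOM = Hits / (Misses + Hits + False Alarms), computed over boolean change maps."""
    observed = np.asarray(observed_change, dtype=bool)
    simulated = np.asarray(simulated_change, dtype=bool)
    hits = np.sum(observed & simulated)            # change correctly simulated as change
    misses = np.sum(observed & ~simulated)         # change simulated as persistence
    false_alarms = np.sum(~observed & simulated)   # persistence simulated as change
    return hits / (misses + hits + false_alarms)

# Toy example: 4 observed change pixels, 3 simulated, 2 in common -> FOM = 2 / 5 = 0.4
obs = np.array([1, 1, 1, 1, 0, 0, 0, 0], dtype=bool)
sim = np.array([1, 1, 0, 0, 1, 0, 0, 0], dtype=bool)
print(round(figure_of_merit(obs, sim), 2))
```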
Using histogram equalization, we rescaled the probabilities such that the lowest-probability pixel was set to zero and the highest set to 100. We then overlaid observed forest loss from the 2016 to 2020 period onto the rescaled probability map to assess whether the relationship to the spatial variables was stationary through time.

The special case of protected and restricted areas

Simulated deforestation was limited within several protected areas (Fig. 1). Land cover change maps generated by the Panama Canal Authority and the Ministry of the Environment show limited forest loss between 2000 and 2007 (Martinez 2011, also see Fig. 1) in several protected areas. A similar result was found by Walker (2020) in her analysis of deforestation between 1990 and 2015, although landslides resulting from the extreme rainfall event of La Purisima in 2010 are visible and considered deforestation. Thus, we set deforestation rates to zero for the protected areas where very little deforestation was observed (Soberania National Park, Camino de Cruces National Park, Metropolitano Park in Panama City, and restricted areas to the west of the canal with unexploded ordnance). We mapped protected and restricted areas using spatial data supplied by MiAmbiente. All other protected areas were included as a categorical driver variable. Protected areas with historically high rates of forest loss receive higher weights of evidence (W+) values than those with historically low rates of forest loss. This ensured that simulated transitions within these areas mimicked historic rates and patterns depending on the past effectiveness of their protection status.

Final carbon estimates

Final carbon estimates at each time step represent the net of forest ecosystem growth and loss. Ecosystem carbon gain is described above. With the exception of our GE scenario, no new forest stands were initiated in any of our scenarios. Existing forests all grew in our GO scenario, while the forest area was expanded to its maximum potential and grown under GE. Our RT and AD scenarios grew in the same manner but also included carbon loss. Here we used the deforestation model transitions from the RT and AD scenarios to project rates of carbon loss over time. Ecosystem carbon density for a given pixel at a given time step results from ecosystem carbon accumulation based on the increment estimated in growth from initial conditions.

The role of secondary forests

To illustrate the contribution of secondary forests to carbon accumulation in our scenarios, we identified forest areas with carbon values corresponding to forest younger than 10 years, 10 years ≤ age < 30 years, 30 years ≤ age < 50 years, and age ≥ 50 years. Pixel or stand age was determined from our forest growth equation (see above).

Accuracy assessment

The model's performance, as determined by the FOM statistic, exceeds the conventional threshold of acceptability set by Pontius (2008), who states that the "FOM's minimum percentage must be larger than the deforestation area in the reference region during the confirmation period expressed as a percentage of the forest area in the reference region at the start of the confirmation period". Our deforestation model FOM is 1.29%, while the net observed change in the confirmation period 2016-2020 was 1.24% (see Table SI 2).
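The stationarity check above can be sketched as follows: calibration-period probabilities are rank-rescaled to a 0-100 scale (a simple rank-based stand-in for the histogram equalization used here), and the median rescaled value of the pixels actually deforested in the confirmation period is reported. The probability map and observed-loss mask below are random placeholders, not the study's data.

```python
import numpy as np

def rescale_0_100(prob_map):
    """Rank-based rescaling so the lowest-probability pixel maps to 0 and the highest to 100."""
    flat = prob_map.ravel()
    ranks = flat.argsort().argsort()                   # 0 .. n-1 rank of each pixel
    return (100.0 * ranks / (ranks.size - 1)).reshape(prob_map.shape)

rng = np.random.default_rng(0)
prob_2011_2016 = rng.random((200, 200))                # placeholder deforestation probabilities
scaled = rescale_0_100(prob_2011_2016)

# Placeholder mask of pixels observed as deforested in 2016-2020; in practice this comes
# from the PVCTS change maps. Here it is biased toward high-probability pixels for illustration.
observed_loss = rng.random((200, 200)) < (prob_2011_2016 ** 3) * 0.05
print("median rescaled probability of deforested pixels:",
      round(float(np.median(scaled[observed_loss])), 1))
```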
The median scaled probability of the true deforested pixels was 76.95, which means that half of the observed deforested pixels occurred in the 23.05% of the landscape ranked with the highest deforestation probability values. Given that our analyses are not intended to reproduce the precise location of deforestation events, but rather to emulate the overall landscape patterns, we feel the model performance is more than adequate for exploring plausible future land-use scenarios.

Deforestation

Rates of deforestation declined dramatically within our study area, including the PCW, from the period before 2000 to the present study (2016 to 2020). Whereas Dale et al. (2003) reported a forest loss rate of between 1.7 and 3.0 percent per year within the PCW between the 1980s and 1998, Walker's PVCTS reveals a deforestation rate between 2016 and 2020 of 0.20 percent per year within this same area. Taking 2.35 percent as the historical deforestation midpoint in the PCW, this represents a 91 percent reduction in the deforestation rate. Forest loss rates outside of the PCW but inside our study area for the period of 2016-2020 averaged 0.56 percent. Interestingly, the watersheds with the highest annual rates of deforestation were immediately south (Caimito 0.88 percent) and west of the PCW (Miguel de la Borda 1.56 percent, Indio 1.12 percent, and Platanal 0.91 percent; see Fig. 1 for watershed locations). Projected future deforestation is shown in Table 2.

Correlates of change

The Weights of Evidence calculated from the historic training data show the relative probability of forest loss in relation to a suite of spatial driver variables. The probability of forest loss was highest within 10 km of large city centers and within 1.5 km of smaller towns. Distance to roads was positively correlated with forest loss up to 1 km. Distance to rivers showed only a weak association with deforestation at close distances; however, it had a negative correlation with forest loss past 500 m. All flat and moderate slopes showed a weak positive correlation that quickly became negative past 14 degrees. Lower-elevation areas below 200 m were positively correlated with forest loss, while areas above 200 m were negatively correlated (Fig. 2). While regional stratification often results in an improved parameterization of locally distinct drivers of change, this would also have required confining the rates of change to specific sub-regions. Given the high rates of change in our AD scenario, regional stratification was not an option, as it would have produced unrealistic spatial patterns at the boundaries between sub-regions.

Ecosystem and secondary forest carbon

Accounting more fully for ecosystem carbon added more than 60 percent to aboveground tree carbon in our central Panamanian forest system, whether it is viewed as carbon density or total carbon across the landscape (Tables 4 and 5). The initial study area for the RT, AD, and GO scenarios included 5,102,266 ha of forest estimated to be younger than 50 years old (Table 6). In the GO scenario these forests accrue 21.8 million Mg C by 2050, accounting for 88.9 percent of the carbon accrued across the region.

Changes in forest cover and carbon

By design, forest area was unchanged within the "Grow Only" scenario, and forest growth resulted in forests gaining 10.2 million Mg C by 2030 and 24.5 million Mg C by 2050 in forest ecosystem carbon.
We focus on 2030 as the Intergovernmental Panel on Climate Change has highlighted the next 10 years as a time by which significant efforts need to be undertaken if we are to keep temperature increases at or below 1.5 degrees Celsius (IPCC 2019); 2050 is the midpoint of the century and a commonly used milestone for projecting changes, including targets set by the Panamanian government (see below). Under our Recent Trends and Accelerated Deforestation scenarios we estimated a forest loss of 36,707 ha (RT) and 177,035 ha (AD) by 2030 (Table 3). Owing to the capacity of secondary forest to rapidly accumulate carbon, we nevertheless projected a carbon gain of 7.6 million Mg C by 2030 (Table 4; Fig. 5) in our RT scenario. It is noteworthy that our AD scenario resulted in relatively little carbon loss (2.9 million Mg C) by 2030 but is nevertheless 10.4 million Mg C lower than the RT scenario ten years into the future. The Grow Only scenario accrues approximately 2.5 million and 10.4 million Mg C more than the RT scenario by 2030 and 2050, respectively. Were policy makers able to flip land use to only allow for forest recovery across the entire region, there is a theoretical possibility of accruing 36.1 million Mg C and 59.1 million Mg C by 2030 and 2050, respectively, above the business-as-usual RT scenario (Table 4).

Discussion and Conclusions

The land-use scenarios examined here demonstrate a wide range of potential trajectories for terrestrial carbon pools in central Panama, with total carbon pools in the year 2050 ranging from 60.9 to 154.1 million Mg. Panama is transitioning to a low carbon economy (Ministerio de Ambiente 2019b), with carbon sequestration through active and passive reforestation planned to account for approximately 337 million Mg CO2e taken out of the atmosphere by 2050 (Ministerio de Ambiente 2019a). Our findings suggest that policy makers' decisions regarding land use can be a significant part of Panama's climate mitigation strategy. Maintaining recent trends in deforestation would sequester an additional 51.9 million Mg CO2e by 2050, or 15.4% of the national goal, in an area representing 23% of the nation's land area. Yet should deforestation be halted and forests allowed to recover (GO), or all available vegetated land within the study region be protected and permitted to grow as forests (GE), central Panama could sequester between 89.7 and 188.6 million Mg CO2e, or between 26.6% and 56.0% of its national goal. In contrast, the AD scenario would have devastating consequences for Panama's land-based carbon sequestration, releasing an additional 73 million Mg CO2e by 2050 and making it extremely difficult for Panama to achieve its planned land-based carbon sequestration objective. Decisions made now regarding land use in Panama will have a profound impact on Panama's ability to meet these targets, a fact not lost on the government.

[Fig. 2 caption: Weights of evidence linking deforestation with physical and social variables often correlated with deforestation. Weights from 0 to 0.5 are described as "weak", 0.5-1.0 as "moderate", and greater than 1.0 as "strong". Positive weights indicate a higher than random chance for the presence of a deforestation transition. Negative weights indicate a higher than random chance for the absence of a deforestation transition. Weights at or near zero indicate no significant association between the driver variable at that bin range and the presence/absence of a transition.]
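The CO2-equivalent figures quoted above follow directly from the carbon totals and the molecular-mass conversion described in the Methods; the short sketch below reproduces that arithmetic for the Recent Trends scenario, using the 2020 and 2050 ecosystem carbon totals from Table 4 and the 337 million Mg CO2e national target.

```python
CO2_PER_C = 44.01 / 12.01            # Mg CO2e per Mg C (molecular vs. atomic mass)

carbon_2020_rt = 80_790_422          # Mg C, Recent Trends scenario, 2020 (Table 4)
carbon_2050_rt = 94_946_861          # Mg C, Recent Trends scenario, 2050 (Table 4)
national_goal_co2e = 337_000_000     # Mg CO2e of planned land-based sequestration by 2050

gain_c = carbon_2050_rt - carbon_2020_rt
gain_co2e = gain_c * CO2_PER_C
print("RT carbon gain 2020-2050:", round(gain_c / 1e6, 1), "million Mg C")
print("RT CO2e gain:", round(gain_co2e / 1e6, 1), "million Mg CO2e")                     # ~51.9
print("share of national goal:", round(100 * gain_co2e / national_goal_co2e, 1), "%")    # ~15.4
```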
In October of 2020, the President of the Republic, Laurentino Cortizo, signed a decree outlining and empowering the Ministry of the Environment to begin the steps necessary to create a country-wide program for monitoring greenhouse gas emissions and reducing the national carbon footprint (Gaceta Oficial Digital de Panama, 2020). Further, the public comment period for a draft decree to establish a nationwide carbon trading program closed on 2 September 2021 (Ministerio de Ambiente 2021). Beyond these legal instruments, the Government of Panama has several large-scale reforestation projects in the pipeline covering tens of thousands of hectares (Ministerio de Ambiente 2020; Environmental and Facility 2021). Time will tell whether these efforts will indeed result in the hoped-for transition to a low carbon economy. The reason that central Panama's forests can sequester copious amounts of carbon has to do with the very large area of naturally regenerating, low carbon forests (Table 6 and Fig. 5). In 2020, 5.1 million ha, or 58.6 percent of the forest in our RT, AD, and GO forest mask, is younger than 50 years old (Table 6). Yet these aggrading forests account for 88.9 percent of the carbon in the GO scenario in 2050, an astonishing number given that these forests held only 34.6 percent of the carbon in 2020. Over the next 30 years, the additional 530,482 ha of forest in the GE vs GO scenario would gain 48.8 million more Mg C, or almost 2 times more than the total gain of the GO scenario alone. Thus, these young secondary forests are the engine that drives the potential carbon benefits in the region. One reason that our carbon scenarios differ from past projections relates to our improved ability to determine carbon stocks in relatively young stands within the forest.

[Fig. 4 caption: The black dot and error bars are the mean and standard deviation of forest on Barro Colorado Island (BCI), Panama. Mascaro et al. (2011) report that half of BCI forests are 80-130 years old and that these forests maintain 15% less carbon than forest greater than 400 years of age, but that slope was the main driver of differences in aboveground carbon density.]

[Table 4. Forest ecosystem carbon (Mg C) by scenario (RT, AD, GO, GE) and year:
Year   RT           AD           GO            GE
2020   80,790,422   80,790,422   80,790,422    102,611,290
2030   88,353,270   77,934,596   90,947,650    124,461,453
2040   92,663,906   70,245,455   98,831,304    140,913,255
2050   94,946,861   60,859,277   105,265,443   154,078,758
Numbers in columns represent Mg C.]

Early studies classified areas as either with or without forest (e.g., Aide et al. 2013; Hansen et al. 2013), unable to account for fine-scale differences among forest stands within a heterogeneous landscape (Tarbox et al. 2018; Walker 2020). Sloan and Pelletier (2012) wondered, given the state of the art at the time, whether carbon baselines determined with remote sensing were worthwhile. Sloan et al. (2018) subsequently used the same LiDAR-derived carbon baseline (Asner et al. 2013) as used herein to provide an accurate and precise estimate of forest carbon heterogeneity at the 1-hectare scale (Asner et al. 2012). Further advances have been made in assessing carbon heterogeneity using high resolution Landsat imagery, both in terms of detection (e.g., Baccini et al. 2016; Zarin et al. 2016) and in terms of the allometric equations used to estimate biomass and carbon across the landscape (e.g., Chave et al. 2015). Yet these advances still suffer from a lack of incorporation of other carbon stocks into forest carbon estimates (see, e.g., Houghton et al. 2000).
By leveraging local studies of tree roots, soil, lianas, and coarse woody debris, our carbon assessment provides an estimate of forest ecosystem carbon and changes therein, consistently estimating over 60 percent more carbon in the ecosystem than that estimated by tree AGB models (Tables 4 and 5). Until recently, estimates of changes in forest carbon completed with remote sensing have been imprecise, not accounting for heterogeneity in forest development and composition. While repeated LiDAR overflights can determine changes in carbon uptake by regrowing forest trees and loss due to degradation, in practice such studies are rare. Tang et al. (2020) extracted data from Colombian forests contained in Poorter et al. (2016) to estimate carbon gain through variability in carbon accumulation rates as related to environmental variables at a global scale. We overcame these constraints by using dynamic data from a landscape-scale study of secondary forest dynamics (Fig. 4). Yet the picture that emerged from our AD model is grim. By 2050 the model predicted a forest loss of over 430,000 ha, or almost half of all forest found in 2020. Forest loss will be concentrated west of the Panama Canal and watershed and will likely result in an irreversible severance of forest connectivity across our region (Fig. 5). An estimated 2.9 million Mg C will be lost by 2030 under the AD scenario, or 3.5 percent of forest carbon from our 2020 baseline. The apparent disconnect between results from forest and carbon loss is the result of the loss of low carbon or young secondary forest (Fig. 5). Yet by 2050, 19.9 million Mg C, or 25% of the carbon baseline, will be lost under this scenario. While this scenario may appear to be an unrealistic extreme, it is grounded in historical data and we believe is plausible due to recent developments. The government of Panama has completed a bridge over the Panama Canal near Colon, with plans of opening a new highway in the north along the Caribbean coast linking Colon and Bocas del Toro to the west (URS Holdings, Inc. 2011). In addition to this, the government has negotiated with the government of China to secure contracts for Panamanian beef exports (CentralAmericaData.com 2019; Gobierno de la Republica de Panama 2019; also see Huang 2016). Secure markets for beef could help foster conditions that would encourage cattle production and deforestation. Indeed, the government of Panama has long recognized the need to improve agricultural production to achieve food security (Oxford Business Group 2019). The government's past willingness to manipulate protected area boundaries in favor of mining could also lead to large, discrete areas of mature forest being deforested (see Fig. 1 and above). Finally, as suggested for the forest transition in Puerto Rico (Yackulic et al. 2011) and as projected along the Pacific coast around and north of Panama City (Fig. 5), suburbanization may also reverse any trend in forest regrowth. These types of impactful yet unpredictable events underscore the value of scenario planning: it gives us the opportunity to learn about the consequences of multiple alternative pathways, which will help land managers and decision makers prepare for uncertain futures. It is worth reiterating that none of the scenarios examined here are intended to serve as predictions; rather, they bound a range of potential outcomes and spur "out of the box" thinking. Recent and very dramatic changes in deforestation rates in the Brazilian Amazon (Ferrante and Fearnside 2020; Oliveira et al.
2020) illustrate how quickly forest policy changes can impact forests. While our AD scenario might seem extreme, it is not as extreme as actual socio-ecological changes occurring throughout the world.

Scenario implications for water provisioning ecosystem services

The Panama Canal Watershed is managed first and foremost for abundant fresh water. Ogden et al. (2013) showed the role of mature and old secondary forests in regulating stream flow. While forests lose more water than pastures through the process of evapotranspiration (Zhang et al. 2001), the soils of these forests can absorb water during the wet season and release it as stream flow during the dry season, defined as the sponge effect (Ogden et al. 2013; Adamowicz et al. 2019). Ogden et al. (2013) have also shown how forests can dramatically reduce the risk and impacts of flooding, and Birch et al. (2021) show that ten-year-old regenerating forests in their study site had water flow paths down to 30 cm depth that were similar to those of mature forests. As the government of Panama contemplates where to find new sources of water (e.g., Rio Indio, Fig. 1) for the Panama Canal and the watershed's other users, they would do well to consider the full potential of the regenerating secondary and mature forests of the region to help regulate stream flow.

Declarations

Ethical Approval: This study did not collect human subject data, nor did it include the study of animals. No authors have any conflict of interest associated with funders or any aspect of data collection.

Open Access: This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.
2022-01-02T14:29:26.121Z
2022-01-02T00:00:00.000
{ "year": 2022, "sha1": "e21c2cf62b51467c84d1c57fce650c14be7e963b", "oa_license": "CCBY", "oa_url": "https://link.springer.com/content/pdf/10.1007/s10980-021-01379-4.pdf", "oa_status": "HYBRID", "pdf_src": "MergedPDFExtraction", "pdf_hash": "95f00fda2023dd9e5438e08ff25942348448f660", "s2fieldsofstudy": [ "Environmental Science" ], "extfieldsofstudy": [] }
269787259
pes2o/s2orc
v3-fos-license
In silico model development and optimization of in vitro lung cell population growth

Tissue engineering predominantly relies on trial and error in vitro and ex vivo experiments to develop protocols and bioreactors to generate functional tissues. As an alternative, in silico methods have the potential to significantly reduce the timelines and costs of experimental programs for tissue engineering. In this paper, we propose a methodology to formulate, select, calibrate, and test mathematical models to predict cell population growth as a function of the biochemical environment and to design optimal experimental protocols for inference of in silico model parameters. We systematically combine methods from the experimental design, mathematical statistics, and optimization literature to develop unique and explainable mathematical models for cell population dynamics. The proposed methodology is applied to the development of the first published model for a population of the airway-relevant bronchio-alveolar epithelial (BEAS-2B) cell line as a function of the concentration of metabolic-related biochemical substrates. The resulting model is a system of ordinary differential equations that predicts the temporal dynamics of BEAS-2B cell populations as a function of the initial seeded cell population and the glucose, oxygen, and lactate concentrations in the growth media, using seven parameters rigorously inferred from optimally designed in vitro experiments.

Introduction

Tissue engineering is a subfield of biomedical engineering that aims to construct functional tissues and organs using engineering principles applied to biological systems. This field has long relied on rigorous in vitro, ex vivo, and in vivo empirical studies to identify culture conditions that achieve experimental objectives, such as maximizing cell yield/function or minimizing variability, among others [1]. Once optimal culture conditions have been identified, bioreactor devices and associated protocols can be designed to consistently and economically achieve them. In silico approaches, i.e., the use of mathematical models and relevant computational tools, have become a new paradigm in bioengineering and biomedical studies over the last few years [2]. In tissue engineering applications, in silico approaches help researchers to formulate experimentally testable hypotheses for how the cells behave, with quantitative and qualitative predictions about cell populations and population ratios [3]. In addition, mathematical models can improve our understanding of complex biological phenomena, guide experimental studies, and support the design and optimization of bioreactors and associated experimental protocols [4] for scaled-up production of functional biological tissues. Notably, using mathematical models for tissue engineering applications allows researchers to leverage advances in Simulation-based Design & Optimization (SBDO), a diverse collection of methods and practices for the efficient, systematic use of computer and physical models to support the design and optimization of engineering systems. SBDO has the potential to significantly advance tissue engineering by accelerating discovery and by reducing the time and cost required for the development of next-generation bioreactor devices and experimental protocols [5,6].
Several mathematical models have been proposed for neotissue growth focusing on bone [7,8], cartilage [9], and neural tissues [10,11].Most of these models consist of a system of ordinary and partial differential equations for the cell population and biochemical concentrations [4,8,[10][11][12], while others also consider mechanical cues such as shear stress [7,9,13].These differential equations include coefficients and mathematical expressions representing different aspects of system dynamics, such as diffusion rates, chemical reactions, and modulating effects of biochemical and mechanical cues, all of which must be calibrated and validated. Due to data availability, cost, and timeline constraints, not all the mathematical models found in the literature have been calibrated and/or validated based on specially designed in vitro or ex vivo experiments [9].Instead, researchers may rely on model parameters estimated under different experimental contexts [7], or calibrate their models to reproduce qualitative behavior only [14].Additionally, in most cases, the proposed models have not been analyzed for uniqueness or identifiability [7,8,10,11,13].Without this, the estimated subset of model parameters is not guaranteed to be physically consistent [12,15] and generalizable. One of the potential application areas of the SBDO paradigm is the engineering of functional lung and airway tissues, organoids, and even whole organs, with potential applications in screening and development of drug therapies [16], disease modeling [17] and, eventually, to create de novo tissues and organs for human transplantation [18].A large body of work has focused on engineering airway tissues using synthetic [19][20][21] and donor-derived scaffolds [22,23].Among the many scientific and translational challenges of this line of research, one of the most significant is the design and optimization of protocols and devices to sustain the relevant cells as they deposit and attach to suitable scaffolds, proliferate, migrate, and differentiate into the targeted cell types required for functional tissues.A promising approach focuses on creating scaffolds through partial or total decellularization of donor organs.This strategy can result in 3D scaffolds that already have the necessary biochemical and mechanical cues needed for subsequent recellularization [22,24] with recipient-derived adult cells [24], embryonic stem cells [25], or induced pluripotent stem cells (iPSC) [26,27].Still heavily under research, several airway-relevant cell lines are used instead of donor cells for both accelerating discovery and proof-of-concept studies, including BEAS-2Bs [28][29][30], A549s [31], Calu-3s [32], and human tracheal epithelial cells [33].Despite the utility of these cell types, no mathematical models currently exist to describe their population dynamics under targeted culture conditions in vitro and/or ex vivo.The availability of such mathematical models can help accelerate discovery and translation in tissue engineering. 
In this work, we propose a detailed, rigorous methodology for developing in silico models for neotissue growth in vitro and ex vivo. We propose a model-based design of experimental protocols (MBDEP) approach that, leveraging these in silico models, defines the spatio-temporal sampling frequency required to optimally infer the model parameters based on experimental data. Starting from a set of biologically-informed model proposals, the proposed model development methodology uniquely combines methods from model inference (e.g., non-linear regression), model selection (data splitting), design of experiments, mathematical statistics (e.g., identifiability analysis [15,34], sensitivity analysis [35]), and optimization. As a case study to illustrate our model development approach, this paper describes the development of the first mathematical model capable of predicting the population dynamics of bronchio-alveolar epithelial cells (BEAS-2Bs). BEAS-2Bs are non-cancer, immortalized cells that grow in monolayers [36] and replace normal human bronchial epithelial cells as a model in various toxicology studies [37]. BEAS-2Bs were chosen to showcase the proposed methodology because they are often used in tissue engineering, organ regeneration, and transplantation studies, including decellularization and recellularization of airway tissue scaffolds [24,38,39], due to their ability to grow in a lab setting and mimic the function of the human airway epithelium [29,37]. Thus, the mathematical model developed in this paper will be directly applicable to these contexts. Importantly, the proposed model development methodology is directly applicable to the formulation, calibration, and validation of mathematical models for any other single cell line population and, with a suitable family of model proposals, to multicellular population dynamics.

Experimental setup

Cell culture. In vitro experiments are run for five days with four initial cell populations of 25,000, 50,000, 100,000, and 200,000 cells/well in 6-well plates. Each replicate has three wells seeded with the specified population of BEAS-2Bs [28,37]. Each experiment has three replicates (3 × 3 = 9 total wells per initial cell density). The experiments did not involve any media change, to observe the effect of more extreme concentrations. We ran four experiments at different glucose and oxygen levels to see their effects on cell population dynamics. For the first experiment, the media was 3 mL Dulbecco's Modified Eagle Medium (DMEM, Gibco, USA) with high glucose and pyruvate, and the cells were cultured at 37˚C in a normoxic incubator [40] (18.6% oxygen) with 5% carbon dioxide. The second experiment used the same configuration as experiment one, with DMEM with low glucose and pyruvate as the culture medium. For the third and fourth experiments, we altered experiments one and two by culturing the cells at 37˚C in a tri-gas incubator [41], CellXpert C170i (Eppendorf, Germany), with 5% oxygen and 5% carbon dioxide.

Measurements. In each experiment, we removed the plates from the respective incubator, took 200 μL media samples, and measured glucose, lactate, oxygen, potassium, sodium, and calcium concentrations using RAPIDPoint 500 Blood Gas Systems (Siemens Healthcare Limited, Canada) six hours after seeding and then every 12 hours (Experiment 1) or 24 hours (Experiments 2, 3, and 4), with the last measurement taken at 114 hours (5 days). At the same intervals, we took five images of the wells (EVOS FL Cell Imaging System) to estimate the total cell count.
Model development methodology This paper proposes a methodology for the development of mathematical models for neotissue growth dynamics.The overarching problem here is to identify, calibrate and validate a mathematical (in silico) model for the population dynamics of a given cell type as influenced by a pre-defined set of biochemical stimuli.Sections 2.2.2 to 2.2.10 below discuss in more detail each of the corresponding steps of the proposed methodology, shown in Fig 1 .Results of applying this methodology for in silico modeling of cell population dynamics during in vitro culture of BEAS-2Bs cells are presented in Sec. 3. First, we start with designing a model for cell culture in well plates under static (no flow) conditions with different biochemical environments.This model will focus on the effect of chemical substrates (external stimuli) on proliferation and apoptosis rates (cellular responses).The model proposals are informed by the known biology and physics of the system.After creating a library of candidate models that govern the dynamics, the models are studied through structural identifiability analysis (section 2.2.2).The structurally identifiable models are considered for the next steps of model development.The next step is to define the objective function that encodes the model inference goal, typically minimizing the prediction error of the model with respect to the experimental data.Alternative goals may be formulated as part of the objective function or as optimization constraints, e.g., minimizing the variance of the parameter estimates, minimizing the number of parameters to be estimated, or minimizing residuals with respect to physical or empirical laws, among others.Then, an experimental protocol for data collection (e.g., sampling frequency) is designed so that it is optimal for the family of model proposals over a range of assumed noise levels.Once the experiments are conducted and the data collected and post-processed, the methodology focuses on model inference, i.e., model calibration (fitting) and selection.The result of this process is a single model selected from the set of model proposals that best fits the data according to the previously defined objective function.Next, practical identifiability analysis confirms that the inferred model parameters are unique and finite, i.e., that the objective function has a single global optimum.Then, the goodness of fit of the model is quantified (model validation) to provide an estimate of the expected predictive performance of the model under experimental conditions that are different from those used during model calibration and selection.We conclude the model development procedure using global sensitivity analysis to rank the controllable experimental parameters according to their predicted effect on the cell population.Sensitivity analysis is also used as a diagnostic tool for model calibration by identifying the subset of model parameters that have the greatest influence on model fit. Model proposals. 
In this work, we will discuss the modeling methodology focusing only on the dynamics of the cell population, including proliferation and apoptosis of a single cell line in a biochemical environment that is governed by advection, diffusion, and reaction phenomena. However, the methodology can be applied for inference of mathematical models including other aspects of neotissue growth, such as scaffold biomechanics, cell-cell and cell-scaffold interactions, cell migration via chemotaxis and haptotaxis, and cell differentiation, by including additional equations codifying the underlying physics [42]. Using the mass balance equation for cell density and for the chemical species concentrations (also referred to as "substrates" in the literature) leads to a set of coupled advection-diffusion-reaction equations that describe the spatiotemporal cell density and substrate concentration fields [43]. These spatiotemporal equations relate the rate of change of specific concentrations and densities to the diffusion, advection, and reaction rates. The equations are defined as

∂c_i/∂t + ∇·(u c_i) = ∇·(D_{c_i} ∇c_i) + M_{c_i}(C, N, X, t),   (1)

∂n_j/∂t + ∇·(u n_j) = ∇·(D_{n_j} ∇n_j) + M_{n_j}(C, N, X, t).   (2)

Here, c_i is the concentration and n_j is the density for each chemical species, i, or cell line, j, respectively. D_{c_i} and D_{n_j} are the diffusion coefficients, u is the fluid velocity, M_{c_i} is the substrate reaction rate, M_{n_j} is the cellular response (proliferation, death, and differentiation), X is the position vector, t is time, C and N are vectors containing all concentrations and densities, respectively, and ∇· is the divergence operator. The general governing Eqs 1 and 2 have been applied to a broad spectrum of growth and transport biological processes [44][45][46]. Simplified versions of Eqs 1 and 2 can be obtained when the concentration and cell density fields are spatially homogeneous, such as in the case of in vitro submerged static cultures on well plates. Since there is no spatial variability, the diffusion and advection terms in Eqs 1 and 2 can be disregarded; thus Eqs 3 and 4 show that the rate of change for concentrations and cell densities is defined by the reaction and response rates in a homogeneous domain:

dc_i/dt = M_{c_i}(C, N, t),   (3)

dn_j/dt = M_{n_j}(C, N, t).   (4)

These rates are the summation of the production and consumption rates in the case of chemical species concentrations, and the summation of proliferation, death, and differentiation rates in the case of cell line densities [8,10,47]. Let us now look at specific, biologically informed functional forms for the right hand side of Eqs 3 and 4. For neotissues consisting of a single cell line that does not differentiate, we propose a model for the growth of the cell population incorporating proliferation and apoptosis rates [42]:

dn/dt = β f(c_g) f(c_o) f(c_l) n (1 − n/n_max) − δ n.   (5)

Here, n, c_o, c_g, c_l, n_max, δ, and β are the cell density, concentrations of oxygen, glucose, and lactate, maximum cell density, apoptosis rate constant, and coefficient of proliferation, respectively. As seen in the equation, it consists of two terms, one for the proliferation rate and the other for the death rate. The proliferation rate term is proportionally modulated by two effects [48]: the first is the biochemical stimuli effect, and the other is the proliferation rate limited by the physical space. A logistic function for the latter term shows that the growth rate per capita linearly drops as the population increases until cell population saturation [49,50].
In Eq 5, the potential modulating effect of the biochemical substrates is usually defined in the literature as

f(c_i) = c_i / (K_i + c_i),   (6)

where K_i is the Michaelis-Menten Kinetics (MMK) constant. Each of the three substrates can either have a zero-order [8], linear [10], or Michaelis-Menten [7,47] effect on cell growth rate. The modulating effects depend on the cell type and the chemical substrate under consideration and, when unknown, can be the subject of data-driven model selection methods. It is important to note that, depending on the specific cell type, additional chemical substrates may have significant modulating effects on the cell population, e.g., growth factors, blockers, and inhibitors, among others. These can be trivially included in the model through additional modulating terms in Eq 4 and additional advection-diffusion-reaction equations (Eq 1). For the purpose of describing the methodology and illustrating it with a specific cell line, we will limit our discussion to the modulating effects of glucose, oxygen, and lactate. These nutrients affect growth the most and play a crucial role in tissue viability [7,47,51]. The reaction rates in Eqs 3 and 4 are dependent on the concentrations and the neotissue density n as

M_{c_i} = R_i(c_i, c_{i0}) n,   (7)

where i0 denotes the inhibiting species and R_i(c_i, c_{i0}) is referred to as the reaction term and can take one of several mathematical forms, each representing different kinetics [10,42,52], for example

R_i = V_i,   R_i = V_i c_i / (ĉ_i + c_i),   or   R_i = [V_i c_i / (ĉ_i + c_i)] · [1 / (1 + c_{i0}/ĉ_{ii0})].   (8)

Here, V_i, ĉ_i, and ĉ_{ii0} are the reaction constant, the MMK constant, and the inhibition constant, respectively. With a set of biologically-informed mathematical models, model calibration and selection methods utilizing the experimental observations will find the best kinetics for each substrate and its corresponding parameters (Eqs 6 and 8). The proposed mass-conservation-based neotissue growth model treats the entire population of cells as homogeneous in their density and type, without considering multiple cell types, their interactions, and cell transitions from one type to another (i.e., cell differentiation). However, the framework we propose can be easily applied to multiple cell types by adding additional equations similar to Eq 2 for each type, and including transitions between cell types through the response terms. Future spatiotemporal (nonhomogeneous) models will include diffusion and advection terms for the cell population similar to Eqs 1 and 2 [53].
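To make a model of this class concrete, the sketch below integrates a coupled system of the general form above: cell density with logistic space limitation and Michaelis-Menten modulation by glucose and oxygen, lactate acting as an inhibitor, and substrate consumption/production proportional to cell density. All parameter values, the choice of kinetics for each substrate, and the lactate-inhibition form are illustrative placeholders rather than the calibrated values or the forms ultimately selected in this study.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Placeholder parameters (illustrative only, not the calibrated values).
beta, delta, n_max = 0.05, 0.005, 1.0e6          # 1/h, 1/h, cells/well
K_g, K_o, K_l = 1.0, 0.05, 8.0                   # modulation constants (placeholder units)
V_g, V_o, Y_l = 1.0e-7, 5.0e-10, 1.5             # uptake rates per cell and lactate yield

def mmk(c, K):
    return c / (K + c)

def rhs(t, y):
    n, c_g, c_o, c_l = y
    prolif = beta * mmk(c_g, K_g) * mmk(c_o, K_o) * (1.0 / (1.0 + c_l / K_l)) \
             * n * (1.0 - n / n_max)
    dn = prolif - delta * n
    dc_g = -V_g * mmk(c_g, K_g) * n              # glucose consumption
    dc_o = -V_o * mmk(c_o, K_o) * n              # oxygen consumption
    dc_l = Y_l * V_g * mmk(c_g, K_g) * n         # lactate production from glycolysis
    return [dn, dc_g, dc_o, dc_l]

y0 = [25_000, 25.0, 0.18, 0.0]                   # seeded cells, glucose, oxygen, lactate
sol = solve_ivp(rhs, (0.0, 114.0), y0, max_step=1.0)  # 114 h culture, as in the experiments
print("final cell count:", int(sol.y[0, -1]))
```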
Structural identifiability analysis

Mathematical models in biology are usually defined with differential equations as [15]

dY/dt = g(Y, Θ, u).   (9)

In this equation, Y is a vector containing a set of state variables, e.g., substrate concentrations and cell densities, Θ is a vector of model parameters, u is a vector of the external stimuli, and dY/dt is the rate of change of the state variables over time. Usually, not all the state variables can be measured directly during the experiments; thus, the observables, z, are denoted as

z = h(Y, Θ).   (10)

A mathematical model is identifiable whenever a unique set of observations or measurements would result in one and only one set of model parameters [50]. Mathematically, if Θ and Φ are two valid sets of model parameters, then g(Y, Θ, u) = g(Y, Φ, u) must imply Θ = Φ. Structural identifiability analysis is performed on the mathematical model before model calibration, and it focuses on the relation between the state variables and observables. The model is discarded or modified if it is deemed that data collected about the observables, regardless of the amount of data, will not result in a unique set of model parameters. If a model is structurally identifiable, all the model parameters can be estimated from a sufficiently large number of observable measurements [4,54]. To make this determination, the analysis consists of creating a modified differential algebraic equation (DAE) form from the model equations that meets a certain rank criterion. The observability matrix is then obtained via symbolic techniques. The model is considered structurally identifiable if the observability matrix has full rank [55].

Objective function definition

Generally, inferring a parametric model from data is a mathematical optimization problem, in which a set of model parameters is estimated that maximizes a user-defined measure of 'goodness of fit'. Two standard methods for parameter estimation of mathematical models are maximum likelihood estimation (MLE) and nonlinear least squares (NLS) [56], which use the likelihood function and the sum of squared errors as objective functions, respectively. The choice of objective function consolidates the assumptions about the data into the calibration process. In MLE, the probability of the observed data is maximized based on the known or assumed statistical distribution of the data. In contrast, NLS minimizes the sum of squared residuals between observations and predictions without any distributional assumption about the data. In this work, we employ MLE because it produces a suitable frequentist formulation of the calibration process without imposing unwarranted assumptions about the data and noise distribution. The likelihood function for independent and identically distributed (i.i.d.) observations, Z_i, is defined as the multiplication of their probability functions (p) as

L(Θ; Z) = ∏_i p(Z_i | Θ).   (11)

Let us assume that the observations are normally distributed, with a time-dependent but unknown mean as predicted by the model with unknown parameters Θ and unknown standard deviation. Then, minimizing the negative log-likelihood objective function, −ℓ(Θ; Z), is mathematically equivalent to the weighted NLS problem defined as [4]

min_Θ J(Θ),   (12)

where

J(Θ) = Σ_{j=1}^{n_z} Σ_{k=1}^{n_k} (z_j(t_k) − Z_j(t_k))² / σ_j(t_k)².   (13)

Here, n_k, n_z, z_j, Z_j, and σ_j(t_k) are, respectively, the number of time points, the number of observables, the model predictions, and the mean and standard deviation of each observable at each time point across experimental replicates.
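A minimal sketch of this objective is shown below, assuming the replicate means and standard deviations have already been tabulated per observable and time point; the array shapes, values, and variable names are placeholders.

```python
import numpy as np

def weighted_nll(model_pred, rep_mean, rep_std):
    """Weighted sum-of-squares objective equivalent (up to constants) to the Gaussian
    negative log-likelihood: sum over observables j and time points k of
    ((z_j(t_k) - Z_j(t_k)) / sigma_j(t_k))**2."""
    resid = (model_pred - rep_mean) / rep_std
    return float(np.sum(resid ** 2))

# Placeholder arrays with shape (n_observables, n_time_points): cell counts and glucose.
rep_mean = np.array([[25e3, 60e3, 180e3], [25.0, 20.0, 12.0]])
rep_std = np.array([[2e3, 5e3, 15e3], [0.5, 0.8, 1.0]])
model_pred = np.array([[24e3, 65e3, 170e3], [25.0, 19.0, 13.0]])
print(round(weighted_nll(model_pred, rep_mean, rep_std), 2))
```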
Model-based design of experimental protocols (MBDEP)

We propose an approach to design the data collection protocol so that it minimizes the error of the estimated model parameters for a given set of model proposals under expected experimental noise levels. Given the non-linearity of the models we use here, our approach is based on statistical simulation, as follows. First, we identify a set of plausible model parameters, e.g., taking parameter values from the relevant literature, from similar experiments with other cell types, or from other experiments done with the cell type of interest. Alternatively, some model parameters may be roughly estimated based on the known biology and physics of the process. Next, we do a forward modeling step, in which we use the models with these assumed parameter values to generate simulated noiseless experimental data about the observables. Gaussian noise is then added to this data, i.e.,

Z_j(t_k) = z_j(t_k) [1 + ε N(0, 1)],   (14)

where ε is the noise level, z_j(t_k) represents the simulated value of observable j at time t_k, and N is the Gaussian probability distribution function. Different noise levels, ε ∈ {0, 0.05, 0.1, 0.2, 0.3, 0.4, 0.5}, can be used for this analysis to observe how different experimental noise levels would affect model inference. Note that here we assume that all observables can be measured with a similar level of experimental error (noise). However, the same approach can be used with different noise levels for each observable, e.g., representing the availability of different measurement techniques or equipment with different levels of accuracy. The simulated data with added noise is then used for MLE or NLS estimation of model parameters (Sec. 2.2.7), and the difference between the assumed and the estimated values of the model parameters is calculated. These steps are repeated with multiple sets of simulated data and different spatio-temporal sampling frequencies (i.e., with different amounts of data). The results of this effort allow us to identify the sampling frequencies that are required to ensure that the estimation of model parameters is robust to the expected levels of experimental noise. Furthermore, if there are external constraints on the experimental procedure (e.g., equipment/personnel availability, cost, timelines, etc.), the proposed MBDEP can be used to check the feasibility of model inference and thus signal the need for reformulating experimental goals.

2.2.7 Model inference. Once the experimental data has been collected, the data is split into three subsets for calibration (60% of the data), selection (20%), and validation (20%), following best practices. The calibration data set is then used to formulate, for each candidate model, a nonlinear optimization problem to determine the set of model parameters that best fit the data, i.e.,

Θ̂ = arg min_Θ −ℓ(Θ; Z),   (15)

where Z is the calibration dataset. The optimization problem posed in Eq 15 is solved through a multi-start strategy, which helps with detecting and averting local minima [57]. A maxi-min Latin hypercube method is utilized to generate a set of initial guesses that cover the search space with guaranteed lower-dimension projection properties [58,59]. Starting from each initial parameter guess, the optimization problem is solved using the Adam stochastic gradient descent-based method [60]. Then, the best-performing solutions are used as starting points for a further optimization stage using the BFGS method [61] to ensure final convergence.
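The sketch below illustrates this multi-start strategy with Latin hypercube starting points and a gradient-based local optimizer. It uses a plain (not maxi-min) Latin hypercube, a generic least-squares objective for a toy logistic growth curve, and scipy's L-BFGS-B in place of the Adam-then-BFGS sequence described above; all bounds, data, and names are placeholders.

```python
import numpy as np
from scipy.stats import qmc
from scipy.optimize import minimize

def objective(theta, t, data):
    # Placeholder weighted-residual objective for a toy two-parameter logistic growth curve.
    beta, n_max = theta
    pred = n_max / (1.0 + (n_max / data[0] - 1.0) * np.exp(-beta * t))
    return np.sum(((pred - data) / (0.1 * data + 1.0)) ** 2)

t = np.linspace(6, 114, 10)
data = 1e6 / (1 + (1e6 / 25e3 - 1) * np.exp(-0.05 * t))      # synthetic "observations"

lower, upper = np.array([1e-3, 1e5]), np.array([0.5, 5e6])   # parameter bounds (placeholders)
starts = qmc.scale(qmc.LatinHypercube(d=2, seed=0).random(20), lower, upper)

best = min((minimize(objective, x0, args=(t, data), method="L-BFGS-B",
                     bounds=list(zip(lower, upper))) for x0 in starts),
           key=lambda res: res.fun)
print("best parameters:", best.x, "objective:", round(best.fun, 4))
```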
Model selection is made between the candidate models by checking how well they match the time evolution of the state variables, i.e., cell populations and biochemical concentrations, on the selection dataset. Several error metrics (e.g., MSE, RMSE) can be used for this purpose. However, given that the candidate models may have different numbers of parameters, in this work we use the Akaike Information Criterion (AIC) or Bayesian Information Criterion (BIC) to select the model that best balances model complexity and goodness of fit [4,62,63]:

AIC = 2k − 2ℓ(Θ̂; Z),  BIC = k log(m_s) − 2ℓ(Θ̂; Z).   (Eq 16)

In Eq 16, log(x) is the natural logarithm of x, k is the number of inferred parameters, and m_s is the number of observations in the selection dataset. The use of AIC and BIC has been widely discussed in the literature, and although no definitive recommendations have been made, it is commonly accepted that AIC is an optimal rule for selecting a model among a set that may not contain the true model, for the purpose of predicting the dependent variable over unseen data. In contrast, BIC is an optimal rule for selecting the true model (or the lowest-dimension model that best describes the data) among a set of models that contains the true model [64].

Note that model selection is ultimately performed by humans, using expert knowledge about the physics and biology of the situation, and with a clear experimental goal (e.g., prediction vs. description). However, when knowledge about the underlying physical or biological phenomena is incomplete, data-driven model selection between plausible models can help researchers gain knowledge about the phenomenon of interest and may point to unconsidered physics.

2.2.8 Practical identifiability analysis. Practical identifiability analysis is performed after model calibration and selection. It utilizes the inferred model and the experimental measurements to find confidence intervals for each inferred parameter. Sparsity of the experimental data set or large experimental variability can result in practical unidentifiability [12].

There are several methods to conduct practical identifiability analysis of models. A commonly used method calculates the local sensitivities of the model with respect to its parameters to construct the Fisher information matrix (FIM). The method then either uses the eigenvectors of the FIM or constructs the correlation matrix to finalize the analysis [65,66]. Null eigenvectors and high correlations point to unidentifiable parameters. An alternative, which we use in this work, is profile likelihood-based methods, which are invariant to model reparameterization, are not limited to symmetric confidence intervals, and can even detect structural unidentifiabilities [15].
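Returning to the selection criterion above, a minimal sketch of information-criterion-based ranking is given below. The likelihood-based forms of AIC and BIC are assumed here; under the Gaussian-noise assumption, −2ℓ can be replaced, up to an additive constant, by the weighted sum of squared errors on the selection set. The candidate record layout is an illustrative assumption.

```julia
# Information criteria (lower is better); ℓ is the maximized log-likelihood of
# a candidate on the selection data, k its number of inferred parameters, and
# m the number of selection observations.
aic(ℓ, k)    = 2k - 2ℓ
bic(ℓ, k, m) = k * log(m) - 2ℓ

# `candidates` is an assumed vector of named tuples such as
# (name = "OxyGluLac", loglik = -123.4, k = 7).
rank_models(candidates, m) =
    sort([(c.name, bic(c.loglik, c.k, m)) for c in candidates]; by = last)
```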
Profile likelihood-based methods consider one model parameter (e.g., θ_i) as fixed at a given value and then find the MLE of the rest of the parameters (θ_j, for all j ≠ i) in Eq 15, i.e., ℓ_PL(θ_i) = max over all θ_j, j ≠ i, of ℓ(Θ; Z). This process is systematically repeated for different values of the fixed model parameter θ_i, and then for each parameter θ_j in turn. Then, the confidence interval for each parameter is defined as the set {θ_i : 2[ℓ(Θ̂; Z) − ℓ_PL(θ_i)] ≤ Δ_α}, where Δ_α is the α-quantile of the χ² distribution with one degree of freedom. A model parameter is deemed practically identifiable if the corresponding confidence interval is finite.

2.2.9 Goodness of fit. Goodness of fit refers to the assessment of how well the model represents the data using a suitable error metric. In the context of model selection among a set of calibrated models, an unbiased assessment of goodness of fit must rely on hold-out data, i.e., data from the same experimental context (same generating process) but distinct from that used for model calibration and selection. This hold-out data is referred to herein as the validation data set. In this work, we use the mean relative prediction error of the inferred model on the validation dataset (Eq 19), i.e., the root-mean-square of the relative residuals (z_j(t_k) − Z_j(t_k)) / Z_j(t_k) over all observables and time points in the validation set. This value predicts the model accuracy when predicting the state variables for any case within the convex hull of the experimental data used to calibrate, select, and validate the models.

As an additional metric to assess the accuracy of the model predictions, we calculate the mean relative experimental variance of the data. In other words, this is the standard deviation, among the nine experimental replicates, of each observable at each time point, normalized by the mean value of the observable at that time point, and averaged over all time points and observables, i.e., the average of σ_j(t_k) / Z_j(t_k) over all j and k. Note that this value expresses the variance among replicates in normalized terms. This information is critical to properly evaluate the prediction error of the models, which cannot be meaningfully expected to be lower than the intrinsic variability of the experimental data.

2.2.10 Global sensitivity analysis. Global sensitivity analysis (GSA) quantifies the effect that any input has on the system output, averaged over the input domain, i.e., over the hypercube formed by the Cartesian product of the ranges of each input. In the context of the proposed methodology, GSA can be used to rank the model parameters in terms of their quantitative impact on the model predictions. It is also used for identifying unimportant parameters and parameter space regions where each parameter is most important [67]. More importantly, GSA applied to the validated model allows us to rank the variables that can be controlled during an experiment in terms of their effect on any observable of interest. Whether applied to the model parameters or to the experimentally controllable variables, GSA is an often overlooked and crucial step for both quality assurance and practical application of in silico models [35].
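Before turning to global sensitivity analysis in detail, the following sketch illustrates the profile-likelihood scan described above for a single parameter. The `refit` function, which re-optimizes all remaining parameters with θ_i held fixed and returns the resulting objective value (e.g., −2ℓ or the weighted SSE), and the 95% χ² threshold are assumptions for illustration.

```julia
# Sketch of a profile-likelihood confidence interval for one parameter θ_i:
# θ_i is fixed on a grid while all other parameters are re-optimized; the
# interval collects grid values whose profile objective stays within Δα of the
# global optimum `best_obj`.
function profile_interval(refit, i, grid, best_obj; Δα = 3.84)  # χ²(1) quantile, α = 0.95
    kept = Float64[]
    for θi in grid
        refit(i, θi) - best_obj <= Δα && push!(kept, θi)
    end
    return isempty(kept) ? nothing : (minimum(kept), maximum(kept))
end
```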
Variance-based GSA methods such as Sobol's [68] and Saltelli's [69] are the most commonly used in the literature, mainly due to their suitability for non-linear, high-dimensional responses and for their consideration of interactions between input variables [70]. Variance-based GSA methods are based on a seminal work by Sobol [68], which showed a general decomposition of non-linear, continuous functions (e.g., Y_k in Eq 9) into a set of integrals of increasing dimensionality representing the overall mean, main effects, and interactions of increasing order between variables [71]:

f(X_1, …, X_d) = f_0 + Σ_i f_i(X_i) + Σ_{i<j} f_ij(X_i, X_j) + … ,   (Eq 21)

where f_0 is a constant, f_i(X_i) is the main effect of X_i (the effect of varying only X_i), and f_ij(X_i, X_j) is the first-order interaction between X_i and X_j. Assuming Eq 21 to be square-integrable, the functional decomposition results in an analogous decomposition of the output variance into partial variances, with V_i = Var_{X_i}[E_{X_∼i}(Y | X_i)], where X_∼i means the set of all variables except X_i and E is the expectation operator. Using Sobol's method, we calculate first-order and total-order indices for all the model parameters as S_i = V_i / Var(Y) and S_Ti = E_{X_∼i}[Var_{X_i}(Y | X_∼i)] / Var(Y). S_i measures the effect of varying X_i alone, averaged over variations in the other input parameters and standardized by the total variance. These values are calculated with Monte Carlo sampling, allowing the creation of confidence intervals for the sensitivity indices.

Results

In this section, we describe the application of the proposed methodology for inferring the population dynamics of the airway-relevant BEAS-2B cell line under different biochemical conditions in a (no flow) static culture environment. The following subsections illustrate how the methodology is applied, step by step, in the development of a model for BEAS-2B cells and show the challenges and benefits of this strategy. The reader can refer to the Materials and Methods section for a full description of the methodology.

Model proposals

This stage creates a set of model proposals based on the existing knowledge about the specific cell line under study, in this case, BEAS-2B cells. BEAS-2B is a human bronchial epithelial cell line derived from normal, non-cancerous human bronchial tissue, closely resembling the primary human bronchial epithelial cell's morphology and functional characteristics. These cells do not differentiate, and their growth rate is known to be affected by oxygen, glucose, and lactate levels [39,72]. In a homogeneous, static growth environment with no cell migration or shear stress, Eqs 5 and 6 are reasonable model hypotheses for the cell population dynamics of a single cell line that does not differentiate. Also, spatial variations in biochemical concentrations would be negligible, and thus Eq 1 would simplify to Eq 3. The Michaelis-Menten reaction rates for glucose and oxygen (as defined in Eq 8) then yield the initial system of ordinary differential equations (ODEs) in Eq 25, which couples the cell-density equation, whose proliferation rate is modulated by the terms f1(c_o(t)), f2(c_g(t)), and f3(c_l(t)), to the consumption and production of the biochemical substrates. In these equations, lactate production is calculated using the relationship between aerobic respiration and glycolysis [7,8].

For each of the terms f1(c_o(t)) and f2(c_g(t)), which account for the potential effect of oxygen and glucose on the cell population growth rate, we select three candidate models, namely zero-order (Eq 6a), first-order (Eq 6b), and MMK with positive feedback (Eq 6c). Similar candidate models are chosen for the lactate effect, f3(c_l(t)), but using MMK with negative feedback (Eq 6d) instead. Thus, the model proposal stage results in a total of 27 different candidate models, which will be investigated later in the model inference step via model selection methods.
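For concreteness, a hypothetical right-hand side of one such candidate model is sketched below. The specific functional forms, the treatment of oxygen as a constant input, and the assumed 2:1 lactate-per-glucose stoichiometry are illustrative choices and do not reproduce the calibrated BEAS-2B model. Such a right-hand side can be passed to a forward-simulation routine such as the one sketched later for the MBDEP step.

```julia
# Illustrative right-hand side for a candidate model of the type in Eq 25:
# state u = (cell density n, glucose c_g, lactate c_l), with proliferation
# modulated by MMK terms in oxygen and glucose (positive feedback) and lactate
# (negative feedback). Parameter names and stoichiometry are assumptions.
function candidate_rhs!(du, u, p, c_o)
    n, c_g, c_l = u
    b, d, V_g, K_o, K_g, K_l = p
    f1 = c_o / (K_o + c_o)                 # oxygen effect (MMK, positive)
    f2 = c_g / (K_g + c_g)                 # glucose effect (MMK, positive)
    f3 = K_l / (K_l + c_l)                 # lactate effect (MMK, negative)
    uptake = V_g * n * c_g / (K_g + c_g)   # MMK glucose consumption
    du[1] = b * f1 * f2 * f3 * n - d * n   # proliferation minus death
    du[2] = -uptake                        # glucose depletion
    du[3] = 2.0 * uptake                   # lactate production (assumed 2:1)
    return du
end
```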
Structural identifiability analysis

To study the structural identifiability of the models proposed for the BEAS-2B cell line, we focus on the most complex candidate model, i.e., the model with the largest number of parameters and/or most severe non-linearity. The implicit assumption is that, if it is deemed structurally identifiable, then simpler models will be as well. This assumption is reasonable because there are only minor differences between the models being considered.

Global structural identifiability for the model was confirmed using StructuralIdentifiability.jl, an open-source SciML package for structural identifiability analysis [73-75]. In the specific case of the BEAS-2B candidate models proposed in the previous step, the most complex model corresponds to Eq 25 with MMK effects for the three substrates. The observables for performing this analysis are the cell population density and the glucose and lactate concentrations. Results showed that this model is structurally identifiable.

Objective function definition

The next step in the methodology is defining the objective function. For the specific case of the BEAS-2B model, we use non-linear least squares (NLS). Let Z_j ∈ {N_i, C_g,i, C_l,i} denote the measurements and z_j ∈ {n_i, c_g,i, c_l,i} the corresponding model predictions; then the least squares problem becomes finding the parameters Θ* that minimize the sum of squared differences between z_j and Z_j over the training set S_T (Eq 26). As mentioned in Sec. 2.2.3, minimizing Eq 26 is equivalent to using the maximum likelihood estimator in Eq 12 [15] if the experimental data is assumed to have a Gaussian distribution.

Model-based design of experimental protocols

We applied the proposed approach for model-based design of experimental protocols (Sec. 2.2.4) to the inference of a model for cell population dynamics of BEAS-2B cells. Since this is the first mathematical model for BEAS-2Bs, there is no available data or information on which to base decisions about the frequency and resolution required for cell growth experiments, i.e., the number and timing of the experimental measurements required to properly capture the system dynamics. Let us assume that one of the model proposals selected in Sec. 3.1, e.g., Eq 25 with MMK effects, is an appropriate description of the system. We then assume that the model parameters have known values (see supplementary material), which are taken from similar experiments published in the literature for nerve cells [10], osteoblast cells [8], and mesenchymal stromal cells [47]. Using this model with initial conditions consistent with our experimental setup (Sec. 2.1) and the assumed model parameters, we generate synthetic data about the observables, namely cell population and concentrations of glucose, lactate, and oxygen as a function of time, for a total of 16 different initial conditions. Gaussian noise was added to the data as described in Sec. 2.2.4.
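A minimal sketch of this forward-simulation and noise-addition step is shown below; it uses a simple explicit Euler integrator and a proportional Gaussian noise model, both of which are assumptions made here for illustration rather than the procedure used in the study.

```julia
using Random

# Integrate an assumed model (e.g., candidate_rhs! above) from u0, record the
# state at a chosen sampling period, and corrupt the samples with proportional
# Gaussian noise of level ε.
function synthetic_observations(rhs!, u0, c_o, p, tend, Δt_sample;
                                ε = 0.1, Δt = 0.01, rng = Random.default_rng())
    u = copy(u0); du = similar(u0)
    nsteps = round(Int, tend / Δt)
    per_sample = round(Int, Δt_sample / Δt)
    noisy(v) = v .* (1 .+ ε .* randn(rng, length(v)))   # proportional noise
    samples = [noisy(u)]
    for step in 1:nsteps
        rhs!(du, u, p, c_o)
        u .+= Δt .* du                                   # explicit Euler step
        step % per_sample == 0 && push!(samples, noisy(u))
    end
    return samples
end
```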
Solving Eq 26 with the generated synthetic data results in the inferred parameters (Θ*). This process was repeated five times with data sets of different sizes, corresponding to the measurement of the observables at periods of 1, 2, 4, 24, and 48 hours. To compare the performance of the parameter inferences using different sampling periods, we defined the error as the difference between the assumed parameters and those inferred via Eq 26. Fig 2 shows the resulting error in parameter inference as a function of the sampling frequency for the observables. For estimating parameters for the set of model proposals considered here, taking cell population and concentration measurements every 24 hours results in a 7.32% error. Since more frequent measurements do not result in better parameter estimates, we selected 24 hours as the sampling period for our BEAS-2B experiments.

Running and postprocessing the experiments

Experiments were conducted as described in Sec. 2.1. We conducted nine replicates of each experimental condition by doing three runs of each experiment, using three well plates in each run. A two-way statistical analysis of variance (ANOVA) test (using the Pingouin [76] Python library) demonstrated statistically significant differences in cell populations between different culture conditions. Specifically, cell populations at time t = 102 hours were significantly affected by the change in the culture environment, suggesting that oxygen and initial glucose concentrations affect the growth dynamics. Notably, since oxygen and glucose affect cell population growth, this implies that five of the 27 model proposals can be discarded, since these would make the cell population after t = 102 hours independent of oxygen or glucose levels. Hence, only 22 model proposals were considered in the next steps of the methodology.

To prepare the experimental data for model inference, we implemented a data splitting strategy, following best practices in model fitting, selection, and validation typically found in the machine learning literature. The full dataset contains the time evolution of the state variables under four different culture environments with four different initial cell densities, giving a total of 16 experimental conditions. This dataset was split into calibration, selection, and validation datasets containing 60%, 20%, and 20% of the data, respectively. Through this process, we implemented a stratified sampling approach to ensure that all experimental conditions were equally represented in each dataset (a minimal sketch of such a split is shown below).

Model inference using in vitro data

As described in Sec. 2.2.7, using the calibration dataset we inferred the parameters (b, d, V_g, K_o, K_g, K_l, c̄_g) for the 22 candidate models. Due to experimental constraints, it was not possible to measure the oxygen concentration in the growth media, and thus it is assumed to be constant and equal to the oxygen concentration in the environment. We consider this a reasonable assumption, given the time scales involved and the fact that oxygen from the incubator chamber continuously diffuses into the growth media, replenishing the oxygen consumed by the cells. As a result, the advection-diffusion-reaction equation (Eq 1 in Materials and Methods) for oxygen was modified by setting M_{c_o}(N(t), C(t)) = 0, thus indicating a zero rate of change for the oxygen concentration. Note, however, that oxygen concentration does affect the cell growth dynamics of BEAS-2B cells. Hence, we included K_o in our set of model parameters to be estimated.
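The stratified split referred to above can be sketched as follows; the record layout (condition, time point, values) and the rounding of subset sizes are illustrative assumptions.

```julia
using Random

# Stratified 60/20/20 split: within every experimental condition (stratum),
# records are shuffled and assigned to calibration, selection, and validation
# sets, so each condition is represented in all three subsets.
function stratified_split(data; fractions = (0.6, 0.2, 0.2), rng = Random.default_rng())
    cal, sel, val = similar(data, 0), similar(data, 0), similar(data, 0)
    for cond in unique(getindex.(data, 1))
        rows = shuffle(rng, filter(r -> r[1] == cond, data))
        n = length(rows)
        n_cal = round(Int, fractions[1] * n)
        n_sel = round(Int, fractions[2] * n)
        append!(cal, rows[1:n_cal])
        append!(sel, rows[n_cal+1:n_cal+n_sel])
        append!(val, rows[n_cal+n_sel+1:end])
    end
    return cal, sel, val
end
```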
Optimization runs were conducted over a search space spanning multiple orders of magnitude, defined as a hypercube of dimension 7 over the range [10^-7, 10^7]. A total of 1300 starting points for the optimization were selected as a maxi-min Latin hypercube over the search space to ensure good coverage, for a total of 1300 × 22 = 28,600 runs. This process was implemented in the Julia programming language using the Flux.jl [77] and Optim.jl [78] libraries for the Adam and BFGS optimizers, respectively.

Since parameter calibration is a non-convex multimodal optimization problem with potentially many local optima, we further analyzed the resulting losses and inferred parameters to confirm convergence to a global optimum (S1 Fig in S1 File). These analyses suggest that the model inference does indeed have multiple local optima, but also provide evidence that the optimization strategy used here converges to what is likely the globally optimal set of model parameters for all 22 candidate models. Once the parameters for all candidate models were inferred with the calibration dataset, we used the selection dataset to calculate the BIC values for all models. Fig 3 shows the resulting BIC values, with different markers/colors indicating the number of first-order modulating effects (Eq 6) present in the model. The BIC criterion strongly suggests that model performance deteriorates as more of the substrates are assumed to have first-order effects on the cell population. Thus, we focused on the top five models (lowest BIC), which include MMK-type effects in at least one of the substrates (the first of these being the Lac model), where the models have been named according to which substrates incorporate an MMK effect.

As seen in Fig 3, the top-performing models have very similar BIC values, so a purely data-driven model selection strategy fails in this case to identify a single winner. However, we note that the model OxyGluLac is mathematically equivalent to the other four models at infinitely large or small values of the K_i's; e.g., when K_l → ∞, the OxyGluLac model is equivalent to the OxyGlu model. Hence, we select the OxyGluLac model for the remainder of the study as the best representation of the underlying cell behavior. This choice is also supported by the known biology of BEAS-2B cells.

Table 1 shows the inferred parameters for the OxyGluLac model resulting from the calibration process. Observing the values for the K_i's, it can be seen that, in the range of the experimental conditions tested in this work (i.e., concentrations up to 20 mol m−3), lactate has the highest effect among the biochemical substrates considered. This observation is confirmed later in the Global Sensitivity Analysis step. Fig 4 compares the inferred in silico model results versus the experimental observations in all experiments. In this figure array, each column refers to an observable variable (cell density, glucose, and lactate concentrations), and each row represents a different set of experimental conditions (initial concentrations of glucose and oxygen). In each plot, different lines represent different initial cell densities. As the figure illustrates, the calibrated model is indeed able to capture the effect of different biochemical conditions well and accurately predict the resulting cell population throughout the experiment for all experimental conditions. Fig 4 also shows the difference between the model prediction and the experimental observations versus the noise of the experimental data, defined as standard deviation over the mean. The figure shows that the model is fairly balanced in underpredicting and overpredicting the state variables. The model predictions are mostly between error bars for the in vitro experiments. Perhaps the only exception is the top left subfigure, corresponding to the prediction of cell density under normoxia and high glucose concentration in the culture media. In this case, the model predictions consistently underpredict the experimental data starting at the first time point. These differences are more suggestive of a constant bias than of a prediction error, which can be attributed to systematic errors in our measurements for this specific condition. This observation is supported by the larger dispersion seen in the cell population measurements at the first time point. Further analysis (not shown here for brevity; see S2-S4 Figs in S1 File) confirms that experimental observations are within the confidence intervals of the model predictions.
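The comparison underlying this discussion can be sketched as follows: signed relative residuals between predictions and replicate means, together with the fraction of predictions that fall within one replicate standard deviation. Array names and shapes are assumptions for illustration.

```julia
using Statistics

# `pred`, `obs_mean`, and `obs_std` are assumed arrays of size
# (n_observables, n_timepoints) for one experimental condition.
function prediction_vs_noise(pred, obs_mean, obs_std)
    rel_residual = (pred .- obs_mean) ./ obs_mean          # signed relative error
    within_sd = mean(abs.(pred .- obs_mean) .<= obs_std)   # error-bar coverage
    return rel_residual, within_sd
end
```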
Practical identifiability analysis

We performed profile likelihood-based practical identifiability analysis using the ProfileLikelihood.jl [79] package. Table 1 shows that the resulting confidence intervals for all estimated model parameters are finite, thus demonstrating the practical identifiability of the model based on our experimental data [80].

Goodness of fit

The relative root-mean-square prediction error of the inferred model was calculated using the validation data set as described in Sec. 2.2.9 (Eq 19). Results show that the inferred model has an RMSE of 18.3%. For context, the experimental error (noise), calculated as the average of the standard deviations of experimental replicates for each time point and experimental condition, is 18.7%. Based on this, we consider that the model is sufficiently accurate for its applications in support of BEAS-2B tissue engineering.

Global sensitivity analysis

A global sensitivity analysis of the OxyGluLac model was performed. Specifically, we calculated the sensitivity of the cell population and the glucose and lactate concentrations at time t = 114 hr with respect to the experimental conditions that can be controlled, i.e., the oxygen concentration in the incubator, the initial glucose concentration in the culture media, and the initial seeded cell density. For this purpose, we used Sobol's method (Sec. 2.2.10) with 40,000 Monte Carlo samples. First-order Sobol indices rank the importance of each condition alone, while total-order Sobol indices also include parameter interactions.

Fig 5A shows the resulting Sobol indices. It can be seen that the initial cell population has the largest effect on both the terminal cell population and the lactate concentration, while having only a small effect on the terminal glucose concentration. Taken together, these observations suggest that the culture conditions used in the experiments did not impose significant metabolic constraints on the cells during the first t = 114 hr. This is also consistent with the lack of significant parameter interactions, as evidenced by the similarity between the first-order and total-order Sobol indices. To further analyze the effect of the culture conditions on the cell population, Fig 5B shows the total-order Sobol sensitivity indices of the cell population throughout the duration of the experiment. It can be observed that the sensitivity increases as the experiment unfolds, suggesting that if the experiment were run for longer (e.g., 10 to 15 days), we would see a more significant effect of the biochemical substrates on the cell population.
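A minimal Monte Carlo sketch of the Sobol analysis described above is given below, using the standard pick-freeze estimators for first-order and total-order indices; the sample size, input ranges, and the scalar output function `f` (e.g., the simulated cell density at t = 114 hr as a function of the controllable conditions) are illustrative assumptions.

```julia
using Random, Statistics

# First-order (S) and total-order (ST) Sobol indices of f over the box [lo, hi].
function sobol_indices(f, lo, hi; n = 10_000, rng = Random.default_rng())
    d = length(lo)
    scale(M) = [lo[j] + M[i, j] * (hi[j] - lo[j]) for i in 1:size(M, 1), j in 1:size(M, 2)]
    A = scale(rand(rng, n, d))
    B = scale(rand(rng, n, d))
    yA = [f(A[i, :]) for i in 1:n]
    yB = [f(B[i, :]) for i in 1:n]
    V = var(vcat(yA, yB))                      # total output variance
    S, ST = zeros(d), zeros(d)
    for i in 1:d
        ABi = copy(A); ABi[:, i] = B[:, i]     # A with column i taken from B
        yABi = [f(ABi[k, :]) for k in 1:n]
        S[i]  = mean(yB .* (yABi .- yA)) / V       # first-order effect
        ST[i] = 0.5 * mean((yA .- yABi) .^ 2) / V  # total-order effect
    end
    return S, ST
end
```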
Application: Optimizing culture conditions One application of the SBDO paradigm using our model development methodology is studying the effect of different experimental settings efficiently [7,81].Here we study how different media refreshment regimens affect cell population dynamics.In silico, this study is implemented by resetting the concentration values of chemical substrates to be equal to their initial values at series of refreshment time points.Fig 6 shows the estimated cell yields after 43 days of culture under different refreshment periods, from 2 to 24 days, indicated as vertical lines.It is observed that refreshing the culture media every 2 to 10 days maintains a relatively stable cell population, with small oscillations that increase in amplitude as the media refreshment period increases.Conducting the experiment with media refreshments every 10 days or more results in drastic decreases in cell population, which become unrecoverable if the media is not refreshed at least every 14 to 16 days.Note that this in silico study takes only minutes to run on a desktop computer once an inferred and validated model is available.However, conducting all these experiments in vitro would take significant time and incur costs in supplies, equipment, and personnel. Discussion The comprehensive methodology for the development of in silico models for neotissue growth proposed in this work leverages state-of-the-art methods for experimental design, data-driven model inference and selection, and post-hoc analysis, validation, and interpretation of biologically informed models based on systems of differential equations. The proposed methodology addresses the complexity, sparsity, and variability of tissue engineering experiments by implementing specific model formulation calibration, selection, and validation steps that are not typically considered in previous works.For instance, Coy et al. [10] developed a model for the regeneration of nerve cells but did not conduct identifiability and sensitivity analysis, which was used by Villaverde et al. [4].Our proposed methodology uses both structural and practical identifiability analysis to properly assess the suitability of a model and the ability to infer a unique set of parameters based on the available (noisy) experimental data.In comparison with [4], we implement additional model selection and validation steps based on hold-out data.Furthermore, in contrast with [4,65], we add an extra step for global sensitivity analysis, which provides important insights for model diagnosis and interpretation [71]. 
Uniquely, in our methodology, we propose a novel method for model-based design of experimental protocols (MBDEP).Starting from the set of biologically informed model proposals being considered, and leveraging prior information about model parameters, MBDEP optimally selects the spatial and temporal sampling frequency required to minimize the estimation error of the inferred model parameters, taking into account the expected experimental noise.In addition, MBDEP provides valuable information regarding the suitability of the measurement equipment and techniques, vis-a-vis their expected measurement error, for the intended model inference application.This is in contrast with SBDO literature in non-biology related applications, which has focused instead on experimental designs balancing the needs for model inference and optimization.For instance, [82][83][84] combined simulation models and data-driven surrogate models for the adaptive, sequential design of experiments.In particular, starting from an existing mathematical model of the system, these works determined a set of experimental conditions (i.e., the actual sampling points) required to improve a figure of merit, such as maximizing the information content of the data [85], or optimizing a response variable based on a limited set of computationally expensive deterministic computer experiments [82].In contrast, our proposed model-based experimental protocol design approach aims at determining, a priori, the quantity and quality of noisy data needed for robust inference of mathematical models for neotissue growth formulated as a system of differential equations.However, once a mathematical model has been inferred for the system, adaptive/sequential sampling methods may be used to gather further data to improve the model, capture additional dynamics, or optimize according to experimental goals. In the context of our in vitro experiments with BEAS-2B cells on well plates, where observables vary in time but not in space, the proposed MBDEP allowed us to adequately capture the time evolution of the observables objectively establishing that measuring cell density every 24 hr was necessary to infer model parameters from experimental data over a wide range of measurement noise.Comparatively, previous work in the literature does not typically provide a rationale for certain aspects of their experimental protocols.For instance, Coy et al. [10] used only the terminal cell density after 24 hr for model inference (a total of 15 measurements), Eleftheriadou et al. [11] used 36 measurements to infer a 16-parameter model, while Duchesne et al. [12] used 10 measurements to infer 6 to 8-parameter model.Future extensions of the BEAS-2B models where spatial variations in cell density and substrate concentrations are present could use the proposed MBDEP to determine the distance between measurements (i.e., spatial sampling frequency) and/or their specific location for robust model inference. 
We applied the proposed methodology to develop the first mathematical model to predict population dynamics of BEAS-2B cells in vitro under different biochemical environments characterized by glucose, oxygen, and lactate concentrations.Over the validation dataset, the resulting BEAS-2B model exhibited a prediction error of 18.3%, an accuracy comparable to the experimental noise (18.7%), thus suggesting the suitability of the inferred model for the design and optimization of bioreactor devices and experimental protocols to maximize cell yield, reduce variability, improve cell coverage, among others [1,86]. BEAS-2B cells are widely used as an in vitro platform for numerous studies in airway/lung homeostasis and disease [28,[87][88][89][90] spanning drug screening, toxicology, viral cellular response, and more recently in tissue engineering applications focusing on optimization of methods for generating bioengineered lungs [23,91].As such, a mathematical model accurately predicting BEAS-2B growth and metabolic activity would be valuable in determining experimental conditions in each context. As an illustration, the inferred model was used to study the effect of the media refreshment period on the resulting BEAS-2B cell population.It was shown that changing the culture media every 1 to 10 days did not have a significant effect on the final cell population after 43 days.Comparatively, cell culture protocols for BEAS-2B cells and similar airway-relevant tissues typically prescribe media changes every 48 hr to 72 hr in growth media.Insights from experimentally validated in silico models may thus translate to significant savings in supplies (e.g., 10X reduction in growth media in this case) and personnel, especially in the context of commercial production of engineered tissues. Concluding remarks The in silico model considered in this work is based on a system of coupled differential equations describing advection-diffusion-reaction of biochemical substrates and their effect on neotissue growth of a single cell type, without considering multiple cell types and transitions between them.The mathematical model family proposed here can be applied to other single cell line populations with their dynamics affected by glucose, lactate and oxygen concentrations.In such applications, all the model parameters would need to be re-calibrated using relevant experimental data.Also, in the case of biochemical stimuli other than glucose, lactate and oxygen, additional terms representing their rates of change would need to be added to the system of equations, along with their potential effect on the cell proliferation term. The framework we propose can be easily applied to multiple cell types by adding additional equations similar to Eq 5 for each cell type, and including transitions between cell types through the response terms [92].Duchesne et al. [12] proposed such a model for the differentiation of chicken erythroid progenitor cells, in which transitions between cell types depended on cell densities only, without considering bio-chemo-mechanical cues.Such formulations could be combined with the proposed methodology for studying the directed differentiation of pluripotent stem cells under different environments [27,93].Additional effects such as shear stress [39], scaffold stiffness, and air-liquid interface exposure could be added to the model, e.g., through Eq 6. 
In vitro neotissue growth, whether under static or perfusion conditions, is a complex multiscale, multi-physics phenomenon [94]. To realize the full potential of SBDO applications in tissue engineering, there is a need for end-to-end in silico modeling, including perfusion cell seeding, deposition, attachment, proliferation, migration, and differentiation in response to both biochemical and mechanical cues.

In this context, hybrid Lagrangian-Eulerian formulations that consider scaffold biomechanics and cell-cell and cell-scaffold interactions [95] while tracking the motion of individual cells or cell parcels within a flow field [39,96] are promising approaches. Neotissue growth models such as those presented in this work could be combined with hybrid Lagrangian-Eulerian formulations to achieve end-to-end in silico neotissue growth modeling.

Fig 1. Model development framework. The green box shows the mathematical model development steps, blue boxes depict the mathematical checks on the model, orange boxes display the experimental steps, and yellow boxes show the initial steps in model development. https://doi.org/10.1371/journal.pone.0300902.g001

Fig 2. Parameter inference errors for different temporal sampling periods. Parameter inference error decreases as concentration and cell population data are collected more frequently. However, collecting samples at intervals shorter than 24 hrs has no further impact on estimation errors for model parameters. https://doi.org/10.1371/journal.pone.0300902.g002

Fig 3. BIC values for the 22 candidate models. Different markers show the number of first-order effects on the cell proliferation rate. https://doi.org/10.1371/journal.pone.0300902.g003

Fig 4. Model inference. Inferred model versus the in vitro observations. The in silico model results are shown with curves, and the in vitro model results are shown as dots with error bars showing standard deviation. https://doi.org/10.1371/journal.pone.0300902.g004

Fig 5. Global sensitivity analysis. A: GSA on terminal values of observables. The x-axis shows the three observables. B: Time evolution of the global sensitivity of cell population to variables controlled during the experiments. https://doi.org/10.1371/journal.pone.0300902.g005
Fig 6. Study of refreshment periods. Lower media refreshment periods result in higher and more stable cell populations. https://doi.org/10.1371/journal.pone.0300902.g006
Climate change impedes plant immunity mechanisms Rapid climate change caused by human activity is threatening global crop production and food security worldwide. In particular, the emergence of new infectious plant pathogens and the geographical expansion of plant disease incidence result in serious yield losses of major crops annually. Since climate change has accelerated recently and is expected to worsen in the future, we have reached an inflection point where comprehensive preparations to cope with the upcoming crisis can no longer be delayed. Development of new plant breeding technologies including site-directed nucleases offers the opportunity to mitigate the effects of the changing climate. Therefore, understanding the effects of climate change on plant innate immunity and identification of elite genes conferring disease resistance are crucial for the engineering of new crop cultivars and plant improvement strategies. Here, we summarize and discuss the effects of major environmental factors such as temperature, humidity, and carbon dioxide concentration on plant immunity systems. This review provides a strategy for securing crop-based nutrition against severe pathogen attacks in the era of climate change. Introduction Climate change is a major factor in determining where humans can live on the planet under tolerable and safe conditions (Timmermann et al., 2022). Global warming due to environmental destruction and excessive burning of fossil fuels is creating adverse conditions for the continued survival of many plant and animal species and the wellness of the human population (Romań-Palacios and Wiens, 2020). The crops that have made human settlement possible since the dawn of agriculture by providing a stable source of dietary calories are now suffering from the effects of climate change (Challinor et al., 2014;Rising and Devineni, 2020). Biotic stress factors such as pathogens and insect pests reduce crop yield and quality in agricultural settings (Savary et al., 2019;Savary and Willocquet, 2020). Indeed, damage to major crop yields is estimated to reach up to 40% globally (Oerke, 2006;Savary et al., 2012). In warmer and wetter environments more amenable to pathogen growth and spread, the damage they cause can be even more devastating (Velasquez et al., 2018). For example, bacterial blight caused by Xanthomonas oryzae pv. oryzae (Xoo) can decrease yield in rice (Oryza sativa) by up to 80% (Srinivasan and Gnanamanickam, 2005). Wheat blast caused by the fungus Magnaporthe oryzae Triticum can infect wheat (Triticum aestivum) and completely eradicate fields (Islam et al., 2020), as can banded leaf and sheath blight caused by Rhizoctonia solani in maize (Zea mays) (Haque et al., 2022). Moreover, the emergence of new pathogenic strains and the expansion of their effective damage zones due to climate change are two of the most serious threats to crop production and food security (Chaloner et al., 2021). Therefore, efficient strategies are urgently needed to reduce the impact of pathogens on crop growth and yield. According to the disease triangle model, three factors are required for disease development: a susceptible host, a virulent pathogen, and a favorable environment (Scholthof, 2007). Of these, only plant-based strategies are available to affect one side of the triangle with current technologies. Indeed, the development of new crop cultivars conferring innate immunity will be essential for conservation of food resources. 
Plant breeding has traditionally been performed through laborious and time-consuming genetic crosses to introduce superior alleles into a given background (Lusser et al., 2012). However, biotechnological innovations now offer eight new plant breeding technologies (NPBTs): site-directed nucleases (SDNs), oligonucleotide-directed mutagenesis, cisgenesis and intragenesis, RNA-dependent DNA methylation, grafting, reverse breeding, Agrobacterium-mediated infiltration, and synthetic genomics (Lusser et al., 2011). Among them, SDNs are the most widely used NPBT for a broad range of crops. In particular, development of the clustered regularly interspaced short palindromic repeats (CRISPR)/CRISPR-associated nuclease 9 (Cas9) system has ushered in a new era of crop improvement . Therefore, understanding the molecular mechanisms and identifying novel genes conferring desired traits are essential for their targeting by NPBTs in plant breeding. Plants have evolved varied stress responses and defense mechanisms to overcome adverse environmental conditions, about which we have gained a wealth of knowledge thanks to the efforts of countless scientists. Nevertheless, how climate change affects the molecular mechanisms related to plant immunity against pathogens is largely unknown. Luckily, this knowledge gap is beginning to be filled. In this review, we give an overview and discuss the negative effects of temperature, humidity, and carbon dioxide (CO 2 ) concentration on plant defense mechanisms to better understand how to design mitigation strategies. Plant immunity system and defense signaling Plants employ two important immune systems known as pathogen-associated molecular pattern (PAMP)-triggered immunity (PTI) and effector-triggered immunity (ETI) to perceive and respond to pathogen attacks (Thomma et al., 2011). PTI is activated mainly by plasma membrane-localized extracellular pattern recognition receptors (PRRs) that can recognize conserved PAMPs (Monaghan and Zipfel, 2012). For example, recognition of the 22-amino acid region of bacterial flagellin (flg22) by the leucine-rich repeat receptor kinase (LRR-RK) FLAGELLIN SENSING 2 (FLS2) at the plasma membrane leads to formation of a heteromer between FLS2 and BRASSINOSTEROID INSENSITIVE-ASSOCIATED KINASE 1 (BAK1), a member of the LRR receptor-like kinase (LRR-RLK) and also known as SOMATIC EMBRYOGENESIS RECEPTOR-LIKE KINASE 3 (SERK3) (Chinchilla et al., 2007). The FLS2/BAK1 complex phosphorylates the receptor-like cytoplasmic kinase BOTRYTIS-INDUCED KINASE 1 (BIK1) and mitogen-activated protein kinase (MAPK) cascade to activate the downstream signaling pathway, resulting in expression of PTI-related genes (Wang et al., 2020b). Similarly, perception of a highly conserved epitope of bacterial translation elongation factor Tu (EF-Tu) by the LRR-RK EF-Tu RECEPTOR (EFR) also results in PTI activation through heteromerization with BAK1 and phosphorylation of BIK1 (Lal et al., 2018). Moreover, the recognition of plant-derived damage-associated molecular patterns (DAMPs) and phytocytokines by LRR-RKs/RLKs is important for PTI (Hou et al., 2021;Tanaka and Heil, 2021). PTI acts as a basal defense mechanism against various types of pathogens through defense responses that include the induction of defense gene expression, reactive oxygen species (ROS) production, callose deposition, and accumulation of antimicrobial secondary metabolites (Naveed et al., 2020). 
ETI is triggered following the recognition by intracellular receptor resistance (R) proteins of specific pathogen effectors that can neutralize the plant immune system in the cytoplasm (Chisholm et al., 2006;Jones and Dangl, 2006). ETI activates a prolonged and robust resistance response and rapid localized programmed cell death known as the hypersensitive response (HR) (Coll et al., 2011). Most R proteins are nucleotide-binding leucine-rich repeat proteins (NLRs) that can be classified into three groups based on their N terminus domain: Toll/ interleukin-1 receptor (TIR), coiled-coil (CC), and RESISTANCE TO POWDERY MILDEW 8 (RPW8)-type CC (CC R ) domain (Monteiro and Nishimura, 2018). The ETI signal triggered by TIR-NLRs (TNLs) relies on the three acyl hydrolases ENHANCED DISEASE SUSCEPTIBILITY 1 (EDS1), PHYTOALEXIN DEFICIENT 4 (PAD4), and SENESCENCE-ASSOCIATED GENE 101 (SAG101) (Wiermer et al., 2005). EDS1 interacts directly with PAD4 or SAG101 to form exclusive heterodimers, each with distinct functions in immunity (Wagner et al., 2013;Lapin et al., 2020). It was recently revealed that helper CC R -NLRs such as ACTIVATED DISEASE RESISTANCE 1 (ADR1) and N REQUIREMENT GENE 1 (NRG1) are required for the activation of the EDS1 complex and TNL defense signaling (i.e., EDS1-PAD4-ADR1 and EDS1-SAG101-NRG1) (Pruitt et al., 2021;Sun et al., 2021). The EDS1 pathway is involved not only in ETI but also in basal immunity and promotes salicylic acid (SA) biosynthesis and signaling (Cui et al., 2017). Therefore, EDS1 signaling plays a critical role in SA-dependent and -independent resistance. For CC-NLRs (CNLs), the plasma membrane-localized integrin-like protein NON-RACE SPECIFIC DISEASE RESISTANCE 1 (NDR1) appears to function downstream of CNLs, although several do not require NDR1 to activate ETI (van Wersch et al., 2020). Since NDR1 acts upstream of SA biosynthesis and signaling, it is also involved in SA-dependent resistance (Shapiro and Zhang, 2001). Another plant immune response is referred to as quantitative disease resistance (QDR), which is characterized by a continuous distribution of resistance phenotypes-from highly sensitive to highly resistant-within a population (Poland et al., 2009). QDR is typically partial resistance conferred by multiple small-effect loci, while qualitative disease resistance, also referred as ETI, is complete resistance conferred by a single large-effect gene (French et al., 2016). Since multiple genes are involved in QDR, it is important in the context of the evolutionary pressure imposed by pathogens and confers broad-spectrum resistance to a wide range of pathogens including biotrophic and necrotrophic pathogens (Anderson et al., 2010;French et al., 2016). Most loci identified as quantitative trait loci for QDR are associated with the biosynthesis of the cell wall and defense compounds, thus extending beyond simple pathogen perception (Corwin and Kliebenstein, 2017). Phytohormones participate in and control PTI and ETI. In particular, the three phytohormones SA, jasmonic acid (JA), and ethylene (ET) play critical roles in plant immunity. SA contributes significantly to innate immunity against biotrophic pathogens by evoking local and systemic resistance, whereas JA/ ET play critical roles in plant resistance to necrotrophic pathogens (Glazebrook, 2005;Li et al., 2019). The SA and JA/ ET defense signals can be antagonistic or synergistic (Tsuda and Katagiri, 2010). Abscisic acid (ABA) is also important for innate immunity. 
ABA interacts with various phytohormones during defense responses (Lee and Luan, 2012;Pieterse et al., 2012). For example, ABA suppresses SA-dependent immunity, leading to greater susceptibility against various pathogens (Berens et al., 2019). However, ABA can also increase plant disease resistance due to closure of stomata which constitutes one of the main entry routes for pathogens (Ton Mauch-Mani, 2004;Melotto et al., 2006;Flors et al., 2008). In response to the stimulus, ABA is primarily biosynthesized in vascular tissues and accumulates in guard cells through ABA transporters (e.g., ATP-binding cassette transporter G [ABCG]) (Merilo et al., 2015). In guard cells, ABA binds to its cognate receptor from the pyrabactin resistance 1/pyrabactin resistance 1-like/regulatory components of ABA receptors (PYR/PYL/RCAR) family, leading to the inactivation of type 2C protein phosphatases (PP2Cs). The alleviation of PP2C-mediated repression of SUCROSE NON-FERMENTING 1 (SNF1)-related protein kinase 2s (SnRK2s) results in activation of the downstream ABA signaling cascade (Hsu et al., 2021). For example, the PP2Cs ABA INSENSITIVE 1 (ABI1) and ABI2 inactivate OPEN STOMATA 1 (OST1), also known as SnRK2.6, thus preventing the phosphorylation of SLOW ANION CHANNEL 1 (SLAC1), which releases anions for stomatal closure. However, perception of flg22 by PRRs increases ABA levels in guard cells to inactivate ABIs, and it results in rapid stomatal closure through the activation of the OST1/SnRK2.6-SLAC1 module (Guzel Deger et al., 2015). Therefore, ABA promotes stomatal closure and prevents pathogen entry into the host plant. ROS signaling is also important for plant immunity. ROS are highly oxidative agents, but they also act as signaling molecules that regulate biotic stress responses (e.g., systemic acquired resistance [SAR] and cell death) (Waszczak et al., 2018). ROS are generated via metabolic and stress signaling pathways. Metabolic ROS are produced in several intracellular compartments (e.g., chloroplast, mitochondria, peroxisomes, and apoplast) during photosynthesis and photorespiration, while signaling ROS are produced mainly by plant NADPH oxidases, mostly from members of the plasma membranelocalized respiratory burst oxidase homolog (RBOH) family (Kangasjärvi et al., 2012;Chapman et al., 2019). Pathogen recognition is accompanied by ROS production through both the metabolic and stress signaling pathways. Recognition of PAMPs by PRRs induces an initial oxidative burst that activates plant basal defenses within the infected cells; effector perception by R proteins then promotes a second oxidative burst that results in HR (Nanda et al., 2010;Torres, 2010). Therefore, ROS play a key role linking pathogen perception and plant defense responses. However, these various plant defense systems may be adversely affected significantly by climate change, as discussed below. The effects of temperature on PTI Environmental factors influence not only pathogenicity but also plant disease resistance (Elad and Pertot, 2014). Temperature is perhaps the most studied climate factor modulating plant-pathogen interactions. Higher average temperatures brought upon by climate change can increase the pathogenicity of phytopathogens by raising their virulence, active geographical regions, fitness, reproduction period/rate, and epidemic risks (Agrios, 2005;Deutsch et al., 2008;Caffarra et al., 2012;Vaumourin and Laine, 2018). 
Temperature is also one of the most important environmental factors that shapes plant immunity against bacteria, fungi, viruses, and insects (Garrett et al., 2006). Since different host-pathogen interactions behave differently over different temperature ranges, higher temperatures will sometimes work in favor of plant immunity. In many cases though, higher temperature will benefit the pathogen to the detriment of the host (Desaint et al., 2021). In Arabidopsis (Arabidopsis thaliana), higher temperature increases early PTI signaling (through BIK1 and MAPKs) and decreases the occupancy of nucleosomes containing the histone variant H2A.Z, which modulates the plant transcriptome in response to changes in temperature (Kumar and Wigge, 2010;Cheng et al., 2013). Moderately high temperatures (23°C-32°C) will therefore activate PTI-dependent gene expression at the expense of ETI (Cheng et al., 2013). Cysteine-rich receptor-like kinases (CRKs) are one of the largest RLK subfamilies that recognizes pathogens and activates downstream signaling cascades. Recently, Wang et al. identified a CRK from wheat cultivar 'XY 6' conferring high-temperature seedling-plant resistance . The expression level of this gene, TaCRK10, was induced significantly by infection with the fungal pathogen Puccinia striiformis f. sp. tritici causing strip rust at high temperature. TaCRK10 was shown to directly phosphorylate histone H2A in wheat (TaH2A.1) and activate the SA signaling pathway, resulting in enhanced high-temperature seedling-plant resistance to P. striiformis f. sp. tritici . However, several studies have also indicated that PTI can be compromised at high temperature upon inhibition of flg22and SA-induced defense responses (Rasmussen et al., 2013;Huot et al., 2017;Janda et al., 2019). Therefore, further studies are needed to understand the effect of temperature on PTI in detail. The effects of temperature on ETI and SA-dependent immunity Unlike PTI, much work has shown that high temperature decreases immunity evoked by ETI and QDR; this topic was well covered by a previous review (Desaint et al., 2021). Therefore, we focus here on recent important discoveries that illustrate how plant defense mechanisms are affected by high temperature. Disruptions of NLR-and SA-mediated defense signaling by high temperature are thought to be the main reason behind diminished plant innate immunity against pathogens under these conditions. In Arabidopsis, the photoreceptor phytochrome B (phyB) also acts as a thermosensor, whereby far-red light and high temperatures lead to its inactivation (Jung et al., 2016;Legris et al., 2016). DE-ETIOLATED 1 (DET1) and CONSTITUTIVELY PHOTOMORPHOGENIC 1 (COP1), which are two key negative regulators of photomorphogenesis, promote the transcription of PHYTOCHROME INTERACTION FACTOR 4 (PIF4), which encodes a basic-helix-loop-helix (bHLH) transcription factor acting as a positive regulator of growth and negative regulator of immunity (Gangappa et al., 2017;Gangappa and Kumar, 2018). phyB inhibits COP1 and PIF4 to modulate the trade-off between growth and defense. However, inactivation of phyB by high temperature results in the activation of the DET1/COP1-PIF4 module. As a result, PIF4 represses the expression of SUPPRESSOR OF NPR1-1, CONSTITUTIVE 1 (SNC1), which encodes a TNL initiating ETI through the EDS1-PAD4 signaling pathway at high temperature (Gangappa et al., 2017). 
Since SNC1 and EDS1 play a critical role in plant defense responses such as SA biosynthesis (Zhang et al., 2003;García et al., 2010), the inhibition of SNC1 expression at high temperature also significantly hinders SA-dependent resistance. Moreover, the SUMO E3 ligase SIZ1 (SAP and MIZ1 DOMAIN-CONTAINING LIGASE1) not only inhibits the SNC1-dependent immune response but also enhances COP1 function at elevated ambient temperature (Hammoudi et al., 2018). Therefore, the activation of negative regulators (e.g., PIF4 and SIZ1) of SNC1 leads to impaired ETI and SA-dependent immunity. Recently, the transcription factor bHLH059 was identified as a temperature-responsive regulator of SA-dependent immunity acting independently of PIF4 (Bruessow et al., 2021). Relative bHLH059 transcript levels increased at 22°C compared to 16°C in Arabidopsis ecotype Columbia (Col-0). Total SA contents and resistance to Pseudomonas syringae pv. tomato (Pst) DC3000 decreased at 22°C relative to 16°C in Col-0, but remained similar in the bhlh059 mutant regardless of ambient temperature. Moreover, bHLH059 has the potential to be a negative regulator involved in a defense hub associated with multiple NLRs (Mukhtar et al., 2011), hinting at a new mechanism for the temperature-mediated vulnerability of plant immune responses that should be explored in more detail. SA is a major defense phytohormone involved in PTI, ETI, and SAR; importantly, SA-dependent immunity is repressed by high temperature (Velásquez et al., 2018;Zhang and Li, 2019;Castroverde and Dina, 2021), whereas JA/ET defense signaling is enhanced under elevated temperature (Havko et al., 2020;Huang et al., 2021a). Therefore, any susceptibility to temperature in the context of plant disease resistance is mainly associated with SA signaling. SA is synthesized through the isochorismate synthase (ICS) and phenylalanine ammonia-lyase (PAL) pathways in plants (Lefevere et al., 2020). In particular, pathogen-induced SA production takes place in chloroplasts, from which it is exported to the cytoplasm via the SA transporter EDS5 (Serrano et al., 2013). SA activates NONEXPRESSOR OF PATHOGENESIS-RELATED GENES 1 (NPR1), the master regulator of SA signaling in the cytosol, resulting in the nuclear translocation of NPR1 to induce the expression of pathogenesis-related (PR) genes conferring disease resistance and SAR (Backer et al., 2019). Moreover, although ETI activates SA signaling, SA and NPR1 repress ETI-induced cell death via the formation of SA-induced NPR1 condensates that promote the degradation of proteins (e.g., NLRs, EDS1, WRKY54, and WRKY70) involved in HR (Zavaliev et al., 2020). Huot et al. showed that inhibition of ICS1, which is also called SALICYLIC ACID-INDUCTION DEFICIENT 2 (SID2), under high-temperature conditions raised the susceptibility of Arabidopsis to Pst DC3000 due to the loss of SA biosynthesis and SA defense signaling (Huot et al., 2017). Furthermore, Arabidopsis disease resistance to Pst DC3000 increased at low temperature due to greater SA signaling, which can itself be repressed by JA/ET defense signals. However, the molecular mechanisms determining the temperature sensitivity of the SA defense signaling pathway were unknown. Recently, two groups demonstrated different mechanisms by which the SA-mediated immune system is modulated under high temperature (Figure 1). Kim et al. identified CALMODULIN BINDING PROTEIN 60g (CBP60g) as being key for the temperature vulnerability of SA defense signaling in Arabidopsis.
GUANYLATE BINDING PROTEIN-LIKE GTPase 3 (GBPL3) binds to the promoter region of genes involved in the plant immune system and recruits the Mediator complex and RNA polymerase II to form GBPL defense-activated condensates (GDACs) (Huang et al., 2021b). The recruitment of GBPL3 and the formation of the GDAC at the CBP60g and SYSTEMIC ACQUIRED RESISTANCE DEFICIENT 1 (SARD1) loci, which have partially redundant functions, were necessary for their transcription, and these were attenuated by heat stress . Therefore, the expression of various genes (e.g., ICS1, EDS1, and PAD4) that would normally induce TNL-mediated ETI and SA biosynthesis downstream of CBP60g and SARD1 was suppressed under elevated temperature. However, and surprisingly, optimized CBP60g expression was sufficient to restore SA accumulation and plant immune responses at high temperature without growth or developmental penalty . Another group unraveled the molecular mechanism explaining the temperature vulnerability of CNLs and SA defense signaling in Arabidopsis (Samaradivakara et al., 2022). RESISTANCE TO P. SYRINGAE PV. MACULICOLA 1 (RPM1) and RESISTANCE TO P. SYRINGAE 2 (RPS2) encode two CNLs that recognize type III bacterial effectors indirectly through RPM1-INTERACTING PROTEIN 4 (RIN4) (Mackey et al., 2002;Mackey et al., 2003). P. syringae bacterial effectors such as AvrRpm1 a n d A v r B a c t i v a t e R P M 1 -m e d i a t e d E T I t h r o u g h hyperphosphorylation of RIN4, while AvrRpt2 activates RPS2mediated ETI via the degradation of RIN4 (Axtell and Staskawicz, 2003;Zhao et al., 2021). Plasma membrane-localized NDR1 interacts with RIN4 and is required for the activation of RPS2based ETI in response to AvrRpt2 (Belkhadir et al., 2004;Coppinger et al., 2004;Day et al., 2006). Samaradivakara et al. showed that overexpression of NDR1 rescues the transcript levels of RPS2 and SA-associated genes including those of ICS1 and CBP60g, which are repressed by high temperature, thus resulting in enhanced resistance to Pst DC3000 by maintaining ETI and SA defense signaling under elevated temperature (29°C) (Samaradivakara et al., 2022). In wheat, CNLs such as TaRPM1 and TaRPS2 also positively regulate disease resistance to P. striiformis f. sp. tritici at high temperature through the SA signaling pathway (Wang et al., 2020a;Hu et al., 2021a). Molecular mechanisms demonstrating the negative effect of high temperature on SA-dependent immunity and ETI. In Arabidopsis, the induction of CALMODULIN BINDING PROTEIN 60g (CBP60g) and NON-RACE SPECIFIC DISEASE RESISTANCE 1 (NDR1) is necessary for innate immunity against Pst DC3000. However, under high temperature, the formation of guanylate binding protein-like GTPase (GBPL) defense-activated condensate (GDAC), consisting of GBPL3, Mediator, and RNA polymerase II, at the CBP60g loci and the expression of NDR1 which can increase the transcript levels of RESISTANCE TO P. SYRINGAE 2 (RPS2) and SA-associated genes (Samaradivakara et al., 2022) are repressed significantly, resulting in temperature vulnerability of SA-dependent immunity and ETI. The effects of temperature on cytokinin-dependent immunity A recent study revealed that the phytohormone cytokinin (CK) also plays an important role in plant immunity at high temperatures (Yang et al., 2022). The trade-off between growth and defense modulated by CK can result in opposite effects on plant-pathogen interactions (Choi et al., 2011). 
Exogenous and endogenous CK both enhance plant resistance against biotrophic pathogens through SA-dependent and -independent immune responses, therefore exerting a potentiation (or priming) defense response activated upon pathogen attack (Conrath et al., 2015;Albrecht and Argueso, 2017). Although CK displays a synergistic effect with SA, increased SA levels can inhibit CK signaling via a negative feedback (Argueso et al., 2012). In addition, high concentrations of CK enhance disease resistance against biotrophic oomycetes in Arabidopsis, while low concentrations raise susceptibility (Argueso et al., 2012). CK can also increase susceptibility to pathogens not only by inhibiting the plant immune system (i.e., PTI and ROS) but also by establishing source-sink relationships (Albrecht and Argueso, 2017;McIntyre et al., 2021). In pepper (Capsicum annuum), Yang et al. showed that infection with Ralstonia solanacearum, a hemibiotrophic pathogen causing bacterial wilt, activates SA signaling at an early stage and JA signaling at a later stage in roots at ambient temperature, but these responses are both impaired at high temperature (Yang et al., 2022). Instead, isopentenyltransferase (IPT) genes, including CaIPT5, encoding a critical enzyme in cytokinin biosynthesis, were upregulated by R. solanacearum infection under high temperature. Surprisingly, exogenous treatment with transzeatin (tZ), the bioactive CK, significantly enhanced disease resistance to R. solanacearum in pepper, tomato, and tobacco (Nicotiana benthamiana) under high temperature, while SA and JA did not (Yang et al., 2022). Moreover, the authors suggested that CK triggers chromatin remodeling, resulting in the upregulation of genes encoding glutathione S-transferase (e.g., CaPRP1 and CaMgst3) and downregulation of genes involved in SA and JA signaling (e.g., CaSTH2 and CaDEF1 (Yang et al., 2022). The effects of temperature on calcium ion-dependent immunity Recently, the molecular mechanisms by which high temperature affects the calcium ion (Ca 2+ )-mediated immune system have also been reported. Ca 2+ is an important second messenger modulating various signaling pathways, including the plant immune response (Yang and Poovaiah, 2003). Biotic/abiotic stresses increase Ca 2+ levels in plant cells; Ca 2+ then binds to calcium-binding proteins (CBPs) and Ca 2+ sensors (e.g., calmodulin [CaM], calmodulin-like proteins [CMLs], calcineurin B-like proteins [CBLs], and calcium-dependent protein kinases CDPKs]) (Bose et al., 2011). The Ca 2+ /CBP complex activates Ca 2+ signaling by regulating the activity of signaling components such as kinases and transcription factors (Iqbal et al., 2020;Junho et al., 2020;Ma et al., 2020). Arabidopsis SIGNAL RESPONSIVE 1 (AtSR1), also known as CALMODULIN-BINDING TRANSCRIPTION ACTIVATOR 3 (CAMTA3), plays a central role in Ca 2+ signaling-mediated immunity (Yuan et al., 2021a). AtSR1 acts as a negative regulator of the plant immune response by decreasing the expression of genes involved in ETI and/ or SA signaling (e.g., EDS1, NDR1, CBP60g, SARD1, and NPR1) directly or indirectly (Du et al., 2009;Nie et al., 2012;Sun et al., 2020;Yuan et al., 2021b). Recently, Yuan and Poovaiah showed that the Ca 2+ influx induced by Pst DC3000 is blocked in Arabidopsis at high temperature (30°C) compared to ambient temperature (18°C). In addition, the susceptibility to Pst DC3000 was reduced in the atsr1 mutant plant compared to the wild type at both 18°C and 30°C (Yuan and Poovaiah, 2022). 
Moreover, the authors suggested that AtSR1 increases plant vulnerability to temperature by acting on stomatal and apoplastic immunity in an SA-dependent manner. In pepper, the expression of the WRKY transcription factor gene CaWRKY40 is induced by Ralstonia solanacearum infection, high temperature, and major defense phytohormones (e.g., SA, JA, and ET), and CaWRKY40 enhances both R. solanacearum resistance and heat tolerance (DANG et al., 2013). CaWRKY40 forms positive feedback loops with CaWRKY6, BASIC LEUCINE ZIPPER 63 (CabZIP63), and CaCDPK15, all positive regulators of resistance against R. solanacearum and/or heat stress tolerance (Cai et al., 2015;Shen et al., 2016a;Shen et al., 2016b). Recently, two signaling components controlled by CaWRKY40 were identified as positive and negative regulators of R. solanacearum resistance, respectively. CaCBL1 contributes to disease resistance against R. solanacearum at high temperature and participates in the positive feedback loop with CaWRKY40 . However, pepper MILDEW-RESISTANCE LOCUS O5 (CaMLO5) has the opposite function in plant immunity and heat resistance . CaWRKY40 induces the expression of CaMLO5 at high temperature, while CaWRKY40 represses it after R. solanacearum inoculation. CaMLO5 increases tolerance to heat stress but reduces the plant immune response against R. solanacearum. Moreover, the NAM/ ATAF/CUC (NAC) transcription factor CaNAC2c was recently identified as being involved in temperature-responsive immunity (Cai et al., 2021). Expression of CaNAC2c was induced by both high temperature and R. solanacearum inoculation, resulting in positive effects on both thermotolerance and resistance against R. solanacearum but negative effects on pepper growth. CaNAC2c modulated the thermotolerance/immunity trade-off through differential and context-specific interactions with HEAT SHOCK PROTEIN 70 (CaHSP70) and CaNAC029. However, CaNAC2c/ CaNAC029-mediated R. solanacearum resistance was impaired by ABA at high temperature, suggesting that the observed thermotolerance/immunity trade-off might be modulated by an antagonistic interaction between ABA and JA signaling (Cai et al., 2021). The effects of humidity on stomatal immunity Along with temperature, humidity is an influential environmental factor during plant-pathogen interactions. In general, high humidity conditions (e.g., rainfall, high atmospheric humidity, and high soil moisture) are favorable for plant infections not only by phyllosphere pathogens but also by rhizosphere pathogens. Indeed, high humidity increases the incidence of bacterial disease and the potential threat to yield in various crops (Xin et al., 2016). In fact, humidity can be more important than temperature in predicting fungal disease outbreaks (Romero et al., 2021). Since air can maintain more water vapor at high temperature, climate change is frequently accompanied by high humidity. Therefore, understanding the effect of humidity on plant immune mechanisms will be important for ensuring food security. By far, the main target of humidity affecting plant immunity is associated with stomatal control. Stomata consist of two guard cells that play a central role in modulating water transpiration and gas exchange between the plant and the atmosphere to balance the needs of photosynthesis while minimizing drought stress. Therefore, stomatal movements are tightly regulated in response to various environmental stimuli (e.g., humidity and CO 2 ) (Driesen et al., 2020). 
However, stomata also offer convenient portals through which pathogens can penetrate inner leaf tissues. To mitigate this threat, plants have developed sophisticated signaling networks conferring socalled stomatal immunity (Arnaud and Hwang, 2015;Murata et al., 2015). Guard cells recognize various PAMPs, resulting in PAMP-triggered stomatal closure through the activation of downstream signaling components ( Figure 2A). However, according to a coevolutionary model between plants and their pathogens known as the zigzag model, some adapted pathogens have developed phytotoxins (e.g., coronatine and syringolin A) and effectors (e.g., avirulence protein B [AvrB], hrp-dependent outer protein F2 [HopF2], HopM1, HopX1, and HopZ1) to overcome stomatal immunity and use open stomata as their entry point into the leaf apoplast space (Melotto et al., 2017). Recently, Lie et al. also revealed that Xanthomonas oryzae pv. oryzicola (Xoc) secretes the bacterial effector AvrRxo1 to impair stomatal immunity by inducing the degradation of rice PYRIDOXAL PHOSPHATE SYNTHASE 1 (OsPDX1) involved in ABA biosynthesis (Liu et al., 2022a). Mechanisms of immunity by stomatal closure and their relationship with humidity have been covered in previous reviews (Melotto et al., 2017;Aung et al., 2018). Notably, after pathogens invade internal plant tissues, stomatal closure can support conditions of apoplast hydration auspicious for pathogen colonization. Therefore, we focus here on the most recent mechanisms regulating stomatal conductance after pathogen entry. Since water is essential for the survival of pathogens as well as plants, pathogens have to work hard to obtain water when inside their host plants (Beattie, 2016). Water soaking is a common disease symptom visible as leaf spots caused by virulent bacterial pathogens (Davis et al., 1991;Reimers and Leach, 1991). Bacterial pathogens (e.g., Pst DC3000) induce water soaking to establish a favorable colonization milieu by using their effectors (e.g., WtsE, AvrHah1, HopM1, and AvrE1) (Ham et al., 2006;Schornack et al., 2008;Xin et al., 2016). For instance, Xin et al. identified two effectors (HopM1 and AvrE1) that induce water soaking in Arabidopsis and demonstrated the molecular mechanism by which HopM1 promotes apoplast hydration for bacterial proliferation (Xin et al., 2016). Arabidopsis HopM1 INTERACTOR 7 (AtMIN7), which is an ADP ribosylation factor-guanine nucleotide exchange factor (ARF-GEF) localized to the trans-Golgi-network/early endosome and involved in vesicle trafficking, is identified as a binding partner of HopM1 during a yeast two-hybrid (Y2H) screen and confirmed by pull-down assay . AtMIN7 contributes to PTI and ETI, and the Pst DC3000 effector HopM1 induces its degradation through the host 26S proteasome to suppress plant innate immunity (Nomura et al., 2011). Since AtMIN7 also plays a critical role in limiting fluid loss from plant cells, HopM1-mediated AtMIN7 degradation results in apoplast hydration and provides the favorable water condition needed for Pst DC3000 colonization; notably, high ambient humidity is required for water soaking (Beattie, 2016;Xin et al., 2016). Moreover, HopM1 and AvrE1 increase the expression of ABA-associated genes through transcriptome reprogramming and by raising ABA contents in guard cells (Roussin-Leveilleé et al., 2022). 
The guard cell-specific ABA transporter ABCG40 is necessary for HopM1-mediated water soaking (Roussin-Léveillée et al., 2022), while AvrE1 activates ABA signaling through the inhibition of type one protein phosphatases (TOPPs), thereby suppressing SnRK2s (Hu et al., 2022). Therefore, Pst DC3000 utilizes HopM1 and AvrE1 to activate ABA signaling, inducing stomatal closure for water soaking after having invaded the plant inner space. To prevent water soaking, plants promote stomatal reopening to establish a drier apoplast environment in pathogen-infected cells (Figure 2B). In rice, the osaba1 mutant provided genetic evidence that increased stomatal conductance can enhance disease resistance to X. oryzae pv. oryzae (Xoo). OsWRKY114 negatively regulated stomatal closure and conferred innate immunity against Xoo by repressing ABA signaling (Song et al., 2022). Finally, in Arabidopsis, Liu et al. elucidated the molecular mechanism of stomatal immunity by which stomata reopen following effector-triggered stomatal closure (Liu et al., 2022b). They identified a class of small peptides, named the SMALL PHYTOCYTOKINES REGULATING DEFENSE AND WATER LOSS (SCREWs), and their receptor, the PLANT SCREW UNRESPONSIVE RECEPTOR (NUT), a member of the LRR-RK family. Flg22 treatment increases the expression of SCREWs and NUT, and recognition of SCREWs by NUT promotes the heterodimerization of NUT with BAK1. The NUT/BAK1 complex phosphorylates and enhances the phosphatase activity of ABI1 and ABI2, thus inhibiting the OST1/SnRK2.6-SLAC1 module whose activity promotes stomatal closure. As a result, plants can increase stomatal conductance to prevent water soaking through apoplast dehydration.
The effects of carbon dioxide levels on stomatal immunity
Since the industrial revolution in the second half of the 18th century, the concentration of atmospheric CO 2 has been increasing at an alarming rate. The Mauna Loa Observatory forecasts that the 2022 annual average CO 2 concentration will be 418.3 ± 0.5 parts per million (ppm). This trend is expected to continue and reach 730-1000 ppm by the end of the 21st century (Alley et al., 2007). Elevated CO 2 levels can increase the yield of C 3 plants by enhancing photosynthesis, but will not benefit C 4 plants (Long et al., 2006). High CO 2 levels will also affect plant-pathogen interactions. However, the effects of CO 2 concentrations on plant defense mechanisms depend on the specific plant-pathogen interaction and are complex (Noctor and Mhamdi, 2017). Moreover, the detailed underlying molecular mechanisms are not yet well known. Therefore, we provide below an overview of the best-documented effects of high CO 2 on plant defense mechanisms related to stomata and photorespiration. Like humidity, atmospheric CO 2 concentrations control stomatal immunity. CO 2 promotes stomatal closure through complex signaling networks (Zhang et al., 2018). First, atmospheric CO 2 enters guard cells via the PLASMA MEMBRANE INTRINSIC PROTEIN (PIP) aquaporins, followed by the conversion of CO 2 to bicarbonate (HCO 3 −) by beta carbonic anhydrases (bCAs) to activate downstream signaling events. Indeed, several studies have shown that the ubiquitous bCA enzymes are involved in the plant defense response. In Arabidopsis, genetic evidence demonstrated that bCA1 and bCA4 contribute to CO 2 -induced stomatal closure by converting CO 2 into HCO 3 − (Hu et al., 2010). The CA activity of bCA1 is required for a full defense response against avirulent Pst DC3000 carrying the effector AvrB (Wang et al., 2009).
In addition, the quintuple mutant bca1 bca2 bca3 bca4 bca6 exhibited a partial reduction in SA sensitivity (Medina-Puche et al., 2017). However, Zhou et al. showed that, despite impaired stomatal closure preventing pathogen entry, PTI-mediated SA-dependent immunity against virulent P. syringae was enhanced in the bca1 bca4 double mutant (Zhou et al., 2020). Furthermore, they revealed that the PRR-mediated downregulation of bCA1 and bCA4 expression was attenuated by high CO 2 . These results suggest that CO 2 concentration and bCAs regulate plant immunity positively or negatively as a function of compatible and incompatible interactions with the incoming pathogen. In tobacco (N. tabacum), SA-BINDING PROTEIN 2 (SABP2) exhibits lipase activity and confers SA-dependent immunity against tomato mosaic virus (Kumar and Klessig, 2003). Similarly, the bCA SABP3 has antioxidant activity and confers the HR triggered by Pto-mediated recognition of the effector AvrPto (Slaymaker et al., 2002).
Figure 2. Stomatal immunity restricting pathogen entry or water soaking. (A) Pattern recognition receptor (PRR)-triggered stomatal immunity. Recognition of pathogen-associated molecular patterns (PAMPs) by PRRs in guard cells promotes stomatal closure to prevent pathogen entry through activation of various signaling pathways such as ABA, SA, ROS, and Ca 2+ (Arnaud and Hwang, 2015; Murata et al., 2015). (B) Stomatal immunity preventing water soaking. After pathogens invade internal plant tissues, stomatal closure can confer the apoplast hydration that favors pathogen colonization. To prevent this, the secreted peptides SMALL PHYTOCYTOKINES REGULATING DEFENSE AND WATER LOSS (SCREWs) and the cognate receptor kinase PLANT SCREW UNRESPONSIVE RECEPTOR (NUT) are induced in Arabidopsis. Recognition of SCREWs by NUT increases the activity of protein phosphatases type 2C (PP2Cs) such as ABA INSENSITIVE 1 (ABI1) and ABI2, resulting in stomatal reopening through inhibition of the OST1/SnRK2.6-SLAC1 module (Liu et al., 2022b).
In addition, silencing of SABP3 increases susceptibility to Phytophthora infestans (Restrepo et al., 2005). The expression of a CA (accession number BQ113997) increased in potato (Solanum tuberosum) inoculated with an incompatible P. infestans strain, while it was downregulated in potato inoculated with a compatible P. infestans strain. Recently, Hu et al. also showed that bCA3 confers plant basal immunity in tomato (Hu et al., 2021b). High CO 2 and Pst DC3000 increase the induction of bCA3 expression by the transcription factor NAC43, while phosphorylation of the serine 207 residue of bCA3 by GRACE1 (GERMINATION REPRESSION AND CELL EXPANSION RECEPTOR-LIKE KINASE 1) results in the activation of plant basal immunity related to the cell wall, regardless of stomatal movement or SA signaling. After CO 2 is converted into HCO 3 −, ABA signaling plays a central role downstream of the CO 2 convergence point for stomatal closure (Webb and Hetherington, 1997; Negi et al., 2008). Dittrich et al. argued that PYL4 and PYL5 are essential for CO 2 -induced stomatal closure in Arabidopsis (Dittrich et al., 2019). However, CO 2 -induced stomatal closure appears to be triggered by an ABA-independent pathway downstream of OST1/SnRK2.6, without direct activation of OST1/SnRK2.6 (Hsu et al., 2018). Another group also reported results in support of this idea.
They developed a SnRK2 activity sensor called SNACS, based on Förster resonance energy transfer (FRET), and showed that, although basal ABA levels and SnRK2 signaling are essential for CO 2 -induced stomatal closure, CO 2 signaling did not activate SnRK2s, including OST1/SnRK2.6, and that PYL4 and PYL5 were also not required. Therefore, it remains controversial whether CO 2 signaling can act upstream of SnRK2 in the ABA signaling cascade. Moreover, recent studies indicated that ROS signaling is also important for CO 2 -induced stomatal closure. In Arabidopsis, ROS signals as well as ABA signals are necessary for CO 2 -induced stomatal closure (Chater et al., 2015). He et al. showed that ROS produced by both cell wall peroxidases and NADPH oxidases, together with phytohormones (SA, JA, and ABA), play an important role in CO 2 signaling during stomatal closure. However, the detailed molecular mechanisms by which ROS modulate CO 2 signaling are still unknown. Therefore, we discuss below the effects of CO 2 on ROS generation and plant immunity.
The effects of carbon dioxide on peroxisome-derived hydrogen peroxide
Photorespiration was once considered a wasteful process because it is inefficient compared to the Calvin cycle and occurs when photosynthesis cannot operate. However, many studies have since shown that photorespiration is involved in and required for various plant processes (Shi and Bloom, 2021). In particular, photorespiration has a crucial role in plant defenses due to ROS generation (Sørhagen et al., 2013). Hydrogen peroxide (H 2 O 2 ) is a non-radical ROS that is deeply associated with plant defense responses (Smirnoff and Arnaud, 2019). It is produced mainly in leaf peroxisomes during photorespiration, with peroxisomal glycolate oxidase (GOX) and catalase (CAT) acting as major positive and negative regulators of its production, respectively (Foyer et al., 2009; Corpas et al., 2020). Photorespiration and the Calvin cycle are competitively controlled by ribulose-1,5-bisphosphate carboxylase/oxygenase (Rubisco); thus, high CO 2 levels decrease photorespiration (Long et al., 2004; Busch, 2020). Therefore, high CO 2 would be expected to repress plant immunity. However, several studies have shown that high CO 2 can increase plant defense responses, including SA and JA responses (Noctor and Mhamdi, 2017). In addition, CAT2 was shown to coordinate the SA-mediated repression of auxin accumulation and JA biosynthesis during resistance against biotrophs (Yuan et al., 2017). Recently, Williams et al. demonstrated that CO 2 influenced resistance to biotrophic and necrotrophic pathogens differently in Arabidopsis (Williams et al., 2018). Under high CO 2 conditions (1200 ppm), resistance to both the biotrophic oomycete Hyaloperonospora arabidopsidis and the necrotrophic fungus Plectosphaerella cucumerina increased compared to ambient CO 2 (400 ppm). SA appeared to play a minor role in resistance to the biotrophic pathogen, while JA conferred strong resistance against the necrotrophic pathogen. At low CO 2 (200 ppm), resistance to H. arabidopsidis was enhanced through photorespiration-derived H 2 O 2 production, whereas resistance to P. cucumerina declined.
Prospects of genome editing for climate resilient crop development
Advances in biotechnology have opened up the possibility of overcoming the deleterious effects of climate change on crop plants.
Induction of plant innate immunity compromised by climate change improves disease resistance to pathogens under unfavorable environmental conditions, but the constitutive activation of plant immune responses retards growth and reduces crop productivity. To address this problem, scientists have focused on strategies to activate plant defense responses spatiotemporally using pathogen-induced promoters and pathogen-responsive upstream open reading frames (Kim et al., 2021). However, this method cannot be free from the issue of genetically modified organisms. Therefore, genome editing technologies based on SDNs (e.g., CRISPR/Cas9) are necessary for the development of climate resilient crops. However, even though genome editing has successfully increased the disease resistance of various crops, there are still significant hurdles to its application to climate change adaptive crop development due to the negative effects of mutations on crop performance (Karavolias et al., 2021). Therefore, in order to cope with the future food resource crisis, understanding the various plant immune mechanisms affected by climate change and identifying elite genes that can improve disease resistance through genome editing will be one of the most efficient ways to develop climate resilient crops.
Conclusion
We are currently living in an unprecedented era of climate change. The consequences of this changing climate may diminish crop production and access to nutrients for all living creatures, concomitantly with the faster adaptation of microorganisms, including phytopathogens, due to their short life cycle and rapid propagation compared to other and more complex species, causing more severe damage to crop plants. It is clear that the damage to global crop security due to biotic stresses will pose a great challenge to human life in the future. Scientists have recently achieved remarkable progress in this field. Here, we provide an overview of the known and anticipated effects of climate change factors such as temperature, high humidity, and CO 2 on plant immunity mechanisms. The current efforts to understand how climate change will impact plant immune systems and to develop more efficient NPBTs will make it possible to overcome the incoming crisis through crop improvement that can minimize damage and preserve yields in future pathogen-friendly environmental conditions.
Author contributions
SS conceptualized and wrote the manuscript. SRP supervised. All authors contributed to the article and approved the submitted manuscript.
Conflict of interest
The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.
Publisher's note
All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors and the reviewers. Any product that may be evaluated in this article, or claim that may be made by its manufacturer, is not guaranteed or endorsed by the publisher.
2022-11-29T14:50:32.419Z
2022-11-29T00:00:00.000
{ "year": 2022, "sha1": "973caaf2e909e0b0324747e4774a60242fe24285", "oa_license": null, "oa_url": null, "oa_status": null, "pdf_src": "Frontier", "pdf_hash": "973caaf2e909e0b0324747e4774a60242fe24285", "s2fieldsofstudy": [ "Environmental Science", "Biology", "Agricultural and Food Sciences" ], "extfieldsofstudy": [ "Medicine" ] }
254976359
pes2o/s2orc
v3-fos-license
Vaccine-based clinical protection against SARS-CoV-2 infection and the humoral immune response: A 1-year follow-up study of patients with multiple sclerosis receiving ocrelizumab Introduction Given the varying severity of coronavirus disease 2019 (COVID-19) and the rapid spread of Severe-Acute-Respiratory-Syndrome-Corona-Virus-2 (SARS-CoV-2), vaccine-mediated protection of particularly vulnerable individuals has gained increasing attention during the course of the pandemic. Methods We performed a 1-year follow-up study of 51 ocrelizumab-treated patients with multiple sclerosis (OCR-pwMS) who received COVID-19 vaccination in 2021. We retrospectively identified 37 additional OCR-pwMS, 42 pwMS receiving natalizumab, 27 pwMS receiving sphingosine 1-phosphate receptor modulators, 59 pwMS without a disease-modifying therapy, and 61 controls without MS (HC). In OCR-pwMS, anti-SARS-CoV-2(S)-antibody titers were measured prior to the first and after the second, third, and fourth vaccine doses (pv2/3/4). The SARS-CoV-2-specific T cell response was analyzed pv2. SARS-CoV-2 infection status, COVID-19 disease severity, and vaccination-related adverse events were assessed in all pwMS and HC. Results We found a pronounced and increasing anti-SARS-CoV-2(S)-antibody response after COVID-19 booster vaccinations in OCR-pwMS (pv2: 30.4%, pv3: 56.5%, and pv4 90.0% were antibody positive). More than one third of OCR-pwMS without detectable antibodies pv2 developed positive antibodies pv3. 23.5% of OCR-pwMS had a confirmed SARS-CoV-2 infection, of which 84.2% were symptomatic. Infection rates were comparable between OCR-pwMS and control groups. None of the pwMS had severe COVID-19. An attenuated humoral immune response was not associated with a higher risk of SARS-CoV-2 infection. Discussion Additional COVID-19 vaccinations can boost the humoral immune response in OCR-pwMS and improve clinical protection against COVID-19. Vaccines effectively protect even OCR-pwMS without a detectable COVID-19 specific humoral immune response, indicating compensatory, e.g., T cell-mediated immunological mechanisms. Introduction The negative implications of the coronavirus disease 2019 (COVID-19) pandemic have gained considerable public attention over the past few years. However, the rapid spread of the disease and its apparent varying severity have emphasized the importance of vaccinating vulnerable individuals, including, for instance, multiple sclerosis (MS) patients (pwMS) receiving disease modifying therapies (DMTs). Several studies assessing the humoral and cellular immune responses following severe acute respiratory syndrome coronavirus type 2 (SARS-CoV-2) vaccination found reduced anti-SARS-CoV-2(S)-antibody titers and an impaired SARS-CoV-2-specific T cell response in subgroups of pwMS, depending on the DMT regimen. Specifically, pwMS receiving B cell modulating therapies (Bcmts) exhibited a reduced humoral immune response, while T cell-mediated immunity was preserved, which was also demonstrated by our own previous study evaluating a cohort of 59 ocrelizumab-treated pwMS (OCR-pwMS) (1)(2)(3)(4)(5). Longterm data indicate a weakened and short-lasting humoral response to SARS-CoV-2 vaccination in those patients (6). However, analysis of the anti-SARS-CoV-2-antibody and T cell responses after SARS-CoV-2 booster vaccination has revealed somewhat contradictory results. While some studies reported higher levels of anti-SARS-CoV-2-antibody titers and Maledon et al. 
reported increased CD4 + and CD8 + memory T cell responses after booster vaccination (7)(8)(9)(10)(11), not all studies observed increased B and T cell responses following the booster vaccine (12). The relevance of those findings with regard to the protection of pwMS from SARS-CoV-2 infection and COVID-19 is particularly important for clinical practice. Such data may enable clinicians to adjust treatment regimens and to devise optimal vaccination strategies. Concerning the general influence of DMTs on immunity, previous studies report opposing results. On the one hand, a lower incidence of COVID-19 infection was reported in pwMS and the choice of DMT was not found to be associated with the risk of COVID-19 (13, 14). On the other hand, treatment with ocrelizumab (OCR), an intravenously administered selective monoclonal anti-CD20antibody, was linked to a higher probability of COVID-19 and a more severe disease course (15-22). However, another study reported usually mild to moderate disease severity in pwMS receiving ofatumumab, a subcutaneously administered selective monoclonal anti-CD20-antibody (23). Real-world data regarding the probability of SARS-CoV-2 infection and the disease course of COVID-19 in such patients are therefore needed to clarify the efficacy of vaccine-based clinical protection of pwMS receiving DMTs. Moreover, relating humoral and cellular immune responses to the risk of SARS-CoV-2 infection and the development of COVID-19 could yield important information for future vaccination strategies of pwMS receiving DMTs. We here provide a monocentric retrospective study assessing the seroconversion rate following a third and fourth dose of SARS-CoV-2 vaccination in previously seronegative OCR-pwMS. In addition, we analyzed the probability and severity of COVID-19 infection in pwMS receiving OCR compared to (i) pwMS on other DMTs, (ii) pwMS without (w/o) a DMT and (iii) controls without MS (referred to as healthy controls (HC). Furthermore, we evaluated the relevance of anti-SARS-CoV-2(S)-antibodies and a SARS-CoV-2-specific T cell response, as assessed previously (1), for clinical protection from SARS-CoV-2 infection and symptomatic COVID-19. Finally, adverse events (AEs) of SARS-CoV-2 vaccines up to 16 months following the first vaccination were assessed. Study population 59 OCR-pwMS were included in our initial study analyzing the humoral and cellular immune response after two COVID-19 vaccine doses in 2021 (1). Medical charts were screened to identify patients, who then filled out a patient questionnaire assessing COVID-19 vaccination and infection status. Furthermore, serological parameters were checked to assess anti-SARS-CoV-2 (S)-antibody titers after the third and/or fourth vaccine. Overall, 51 of the 59 OCR-pwMS were included in the current study. In order to increase sample size, we additionally reviewed charts of pwMS treated with OCR at the Department of Neurology of the University Hospital Düsseldorf, Germany, between January 1 st 2021 and June 15 th 2022. Thereby an additional 37 OCR-pwMS who had filled in the patient questionnaire and/or received anti-SARS-CoV-2(S)-antibody testing during clinical routine work-up could be identified. Moreover, 69 pwMS receiving other DMTs (42 natalizumab [NAT-pwMS], 27 sphingosine 1-phosphate modulators [S1P-pwMS]), 59 pwMS without a DMT since the first vaccine [pwMS w/o DMT], and 61 HC who filled out the patient questionnaire were included. 
All pwMS had been diagnosed by an experienced neurologist in accordance with the 2017 revised McDonald criteria (24). The detailed inclusion and exclusion criteria are summarized in Table 1. The study design is shown in Figure 1. The study was performed according to the Declaration of Helsinki and was approved by the local Ethics Committee of the Board of Physicians of the Region Nordrhein and of the Heinrich Heine University Düsseldorf, Germany (reference number: 5951R). All patients gave informed consent to participate in the study.
Patient questionnaire
A standardized patient questionnaire was used to assess the number of SARS-CoV-2 vaccinations, vaccination side effects, COVID-19 infection status (asymptomatic SARS-CoV-2 infection confirmed by polymerase chain reaction (PCR) or symptomatic COVID-19 confirmed by either PCR or rapid antigen test), and disease severity (asymptomatic infection, symptomatic infection, hospitalization, intensive care unit (ICU) treatment) of pwMS receiving different DMTs, pwMS w/o DMT, and HC.
Routine blood analysis
Routine blood tests were performed in the central laboratory of the University Hospital Düsseldorf. Flow cytometry was used to analyze leukocyte subsets (CD19 + B cells, CD3 + T cells, CD3 + CD4 + T helper cells, CD3 + CD8 + cytotoxic T cells, and CD56 + CD16 + NK cells). Blood samples were prepared using the BD Multitest 6-Color TBNK Reagent (BD Biosciences) according to the manufacturer's instructions. Data acquisition and analysis were performed with a BD Canto (BD Biosciences).
Table 1 | Exclusion criteria.
2. Medical, psychiatric, cognitive, or other conditions that compromise the patient's ability to understand the patient information and to give informed consent
3. Treatment with mitoxantrone, azathioprine, mycophenolate mofetil, cyclosporine, or methotrexate within the last 5 years
4. Any previous treatment with alemtuzumab, cyclophosphamide, total body irradiation, or bone marrow transplantation
5. Patients who received immunosuppressants for diseases other than MS or who received long-term corticosteroid treatment
6. Patients with verified infection by human immunodeficiency virus or hepatitis C virus
7. Patients with a systemic autoimmune disorder
8. Patients with a medical history of COVID-19 infection or positive abs to the SARS-CoV-2 spike protein and/or nucleocapsid protein before the first vaccine dose
Regarding patients who received anti-SARS-CoV-2-ab testing and measurement of the SARS-CoV-2-specific T cell response: 1. Previous treatment with other B cell modulating therapies (e.g., rituximab, atacicept, belimumab, or ofatumumab) before the start of OCR
ab, antibody; COVID-19, Coronavirus disease 2019; SARS-CoV-2, severe acute respiratory syndrome coronavirus 2.
Measurement of anti-SARS-CoV-2-antibodies
Anti-SARS-CoV-2-antibody analysis in peripheral blood (PB) was performed as part of routine clinical practice. Immunoassays for the quantitative assessment of antibodies to the SARS-CoV-2 spike (S) protein and nucleocapsid (N) protein (Elecsys Anti-SARS-CoV-2, Roche Diagnostics) were performed according to the manufacturer's instructions. A titer of ≥ 0.8 (anti-SARS-CoV-2(S)-antibodies) and ≥ 1.0 (anti-SARS-CoV-2(N)-antibodies) was considered positive. Analysis was performed prior to the first COVID-19 vaccination, after the second (median of ~4.1 [range: 2.6-16.6] weeks), after the third (~9.0 [range: 3.6-33.7] weeks), and after the fourth vaccine dose (~7.0 [range: 2.7-12.6] weeks).
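As a rough illustration of how the serostatus calls described above could be derived from raw assay values, the following minimal Python sketch applies the stated cut-offs (≥ 0.8 for anti-S, ≥ 1.0 for anti-N); it is not part of the study's pipeline, and the function and variable names are hypothetical.

def serostatus(anti_s_titer, anti_n_titer):
    # Cut-offs as stated in the text for the Elecsys Anti-SARS-CoV-2 assays.
    anti_s_positive = anti_s_titer >= 0.8
    # Anti-N positivity before the first vaccine dose was used as a marker of
    # prior SARS-CoV-2 infection (cf. exclusion criterion 8 in Table 1).
    anti_n_positive = anti_n_titer >= 1.0
    return {"anti_S_positive": anti_s_positive, "anti_N_positive": anti_n_positive}

print(serostatus(0.6, 0.2))  # e.g. {'anti_S_positive': False, 'anti_N_positive': False}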
Data from the analysis of anti-SARS-CoV-2-antibodies after two vaccination doses have partly been published before (1). Seroconversion after the third and/or fourth vaccine dose was assessed in OCR-pwMS who had been antibody-negative after two vaccines. In addition, the anti-SARS-CoV-2(S)-antibody response was correlated with the probability and severity of COVID-19 infection as assessed by the patient questionnaire.
Quantification of T cell response to SARS-CoV-2
The SARS-CoV-2-specific T cell response in PB, assessed by the SARS-CoV-2 Interferon-gamma Release Assay (IGRA; Euroimmun), was measured ~4 weeks after the second dose of COVID-19 vaccination, as previously described (1). The recombinant S1 subunit of the SARS-CoV-2 spike protein served as antigen. The SARS-CoV-2-specific T cell response was correlated with the probability and severity of COVID-19 infection as assessed by the patient questionnaire.
Data analysis
'GraphPad Prism' (version 9.0.0) was used to perform data analysis and visualization. Data are shown as the median with the range. The D'Agostino & Pearson test was used to test for normality. The Spearman correlation coefficient was used for correlation analysis. In the case of continuous variables, differences between groups were assessed using the Mann-Whitney U test (two groups) or the Kruskal-Wallis test with Dunn's test for multiple comparisons (more than two groups). For binary data, Fisher's exact test (two groups) or the Chi-square test (more than two groups) was used. A p-value of ≤ 0.05 was considered significant.
In order to rule out relevant confounding by differences in time between the analysis of anti-SARS-CoV-2(S)-antibody titers and the last vaccination, we compared the median time between antibody-negative (ab −) and antibody-positive (ab +) patients. We found that the time between the second vaccination and antibody testing was comparable between the two groups (ab − patients pv2: 4.1 [2.6-16.6] weeks; ab + patients pv2: 4.4 [2.7-8.1] weeks; p = 0.5809). The time between anti-SARS-CoV-2(S)-antibody testing and the third vaccination tended to be longer in the ab − group; however, the differences were not significant (ab − patients pv3: 10.3 [4.0-33.7] weeks; ab + patients pv3: 6.1 [3.6-22.0] weeks; p = 0.1590). At pv4, only one patient had a negative anti-SARS-CoV-2(S)-antibody titer. Analysis was performed 12.6 weeks after the fourth vaccine dose in this patient, compared to 6.0 [2.7-12.6] weeks in ab + patients. In addition, no significant differences in anti-SARS-CoV-2(S)-antibody titers could be observed between female and male OCR-pwMS or between RRMS and PPMS patients (Supplementary Figures 1A, B). Correlation analysis revealed a positive correlation between anti-SARS-CoV-2(S)-antibody titers and the peripheral B cell count pv2 (r = 0.6363; p < 0.0001; Figure 2E). Furthermore, anti-SARS-CoV-2(S)-antibody levels pv2 positively correlated with the time between the first vaccination and the last OCR cycle (r = 0.3005; p = 0.0128; Figure 2F). In addition, anti-SARS-CoV-2(S)-antibody titers pv2 negatively correlated with the number of previous OCR cycles (r = -0.4110; p = 0.0005; Figure 2G). No correlation could be found between anti-SARS-CoV-2(S)-antibodies and age, disease duration, or number of previous DMTs (Supplementary Figures 1C-E). Taken together, we found that while the humoral immune response to COVID-19 vaccines is impaired in pwMS receiving Bc-mts, additional COVID-19 vaccines can significantly boost the humoral immune response.
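To make the statistical comparisons described above concrete, the following short Python/SciPy sketch mirrors the kinds of tests named in the Data analysis paragraph (Fisher's exact test for binary outcomes, the Mann-Whitney U test for titers, Spearman correlation); the numbers are invented toy data, not the study data, and this is an illustration rather than the original GraphPad Prism analysis.

from scipy import stats

# Hypothetical 2x2 table: [seropositive, seronegative] at pv2 vs. pv3
table = [[7, 16], [13, 10]]
odds_ratio, p_fisher = stats.fisher_exact(table)

# Hypothetical anti-S titers (U/mL) in two groups -> Mann-Whitney U test
titers_group_a = [0.4, 12.3, 250.0, 0.4, 33.8]
titers_group_b = [0.4, 55.1, 980.0, 3.2, 140.6]
u_stat, p_mwu = stats.mannwhitneyu(titers_group_a, titers_group_b)

# Hypothetical B cell counts vs. matching titers -> Spearman correlation
b_cells = [0, 2, 15, 40, 8]
titers = [0.4, 1.2, 88.0, 430.0, 12.5]
rho, p_rho = stats.spearmanr(b_cells, titers)

print(p_fisher, p_mwu, p_rho)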
Effective vaccine-based clinical protection of OCR-pwMS
Real-world data on vaccine-based clinical protection of pwMS from SARS-CoV-2 infection and COVID-19 are of particular importance for clinical practice, e.g., for the potential adjustment of treatment regimens and optimal vaccination strategies. Of relevance to this study, Bc-mts have been linked to a higher probability of COVID-19 infection and a more severe disease course (15-21). Therefore, we assessed the SARS-CoV-2 infection status and COVID-19 disease course in a cohort of 81 OCR-pwMS. In total, 23.5% (19/81) of patients had been infected with SARS-CoV-2, of which 84.2% (16/19) had symptomatic COVID-19. The majority of OCR-pwMS were infected early in 2022, when Omicron was the prevailing variant in Germany based on data from the Robert Koch Institute (25). None of the OCR-pwMS required hospitalization or ICU treatment. Patients who had not been infected with SARS-CoV-2 had received significantly more vaccines compared to patients who were (Figure 3A). Interestingly, infected patients tended to be younger than non-infected patients (Figure 3B), and more female than male OCR-pwMS were infected with SARS-CoV-2 (Figure 3C). The same results were found for symptomatic COVID-19 (Figures 3D-F). As chronic diseases are associated with a higher prevalence of COVID-19 and a more severe disease course (26, 27), we analyzed differences in SARS-CoV-2 infection and symptomatic COVID-19 between patients with and without comorbidities (comorbidities in general, hypertension, and diabetes) in our patient cohort. No relevant differences could be found (Supplementary Figures 2A-C). Moreover, infection status did not differ significantly between RRMS and PPMS patients (Supplementary Figure 2D). As previous studies found an attenuated humoral immune response in pwMS receiving Bc-mts (1, 2, 4), we assessed the impact of the humoral and cellular immune response on the clinical outcome. The percentage of OCR-pwMS with positive anti-SARS-CoV-2(S)-antibody titers (non-infected patients: pv2; infected patients: at last antibody testing prior to infection (pv2 or pv3, depending on the time of infection)) was comparable between non-infected and infected patients as well as between patients with and without symptomatic COVID-19 (Supplementary Figure 2E). Likewise, the percentage of infected patients was not increased among OCR-pwMS without anti-SARS-CoV-2(S)-antibodies compared to patients with positive antibody titers (Supplementary Figure 2F). In addition, no significant differences in anti-SARS-CoV-2(S)-antibody titers (non-infected patients: pv2; infected patients: last antibody testing prior to infection (pv2 or pv3, depending on the time of infection)) were detected between groups (Supplementary Figure 2G). Furthermore, interferon-γ release by SARS-CoV-2-specific T cells was not significantly different between non-infected and infected patients, or between patients with and without symptomatic COVID-19 (Supplementary Figure 2H). As there is evidence of a compensatory T cell response in seronegative patients mediating clinical protection, we compared the probability of SARS-CoV-2 infection and symptomatic COVID-19 between seronegative patients with a detectable SARS-CoV-2-specific T cell response (abs − IGRA +) and seropositive patients with a SARS-CoV-2-specific T cell response (abs + IGRA +).
The percentage of SARS-CoV-2-positive patients was comparable between the two groups (abs − IGRA +: 22.6% (7/31) versus abs + IGRA +: 20.0% (2/10); p = 0.8571). Likewise, the percentage of patients suffering from symptomatic COVID-19 did not markedly differ between groups (abs − IGRA +: 16.1% (5/31) versus abs + IGRA +: 20.0% (2/10); p > 0.9999) (Supplementary Figure 1I). Around one third of the abs − IGRA + OCR patients developed positive anti-SARS-CoV-2(S)-antibodies pv3. Overall, SARS-CoV-2 vaccines mediate effective clinical protection of OCR-pwMS. An attenuated humoral immune response was not associated with a higher risk of SARS-CoV-2 infection or a more severe disease course, supporting the relevance of T cell-mediated immunity for clinical protection.
Effective clinical protection and satisfactory safety profile of COVID-19 vaccines among pwMS irrespective of treatment regimen
In order to address the question of whether OCR-pwMS are at an increased risk of SARS-CoV-2 infection compared to pwMS receiving other DMTs, pwMS without DMTs, and HC, we compared the percentage of SARS-CoV-2-infected individuals and the COVID-19 disease course between groups. The probability of SARS-CoV-2 infection and symptomatic COVID-19 was comparable between OCR-pwMS, NAT-pwMS, S1P-pwMS, pwMS w/o DMT, and HC (Figure 4A). Of note, none of the 216 pwMS was hospitalized or required ICU treatment. Furthermore, we analyzed the safety profile of SARS-CoV-2 vaccines in pwMS on different DMTs and pwMS w/o DMTs compared to HC. Overall, side effects were mild and less pronounced in pwMS compared to HC (Figure 4B). Correlation analysis revealed a negative correlation between the number of side effects and age (r = -0.2090; p = 0.0005; Figure 4C).
Discussion
Previous studies reported an attenuated humoral and/or T cellular immune response to SARS-CoV-2 vaccination in pwMS receiving different DMTs (1, 3, 4, 28, 29). Given the varying severity of COVID-19 and the rapid spread of SARS-CoV-2, vaccine-mediated protection of vulnerable people, e.g., pwMS treated with different DMTs, has gained particular attention. Determining the optimal vaccination regimen for those patients remains challenging, especially for pwMS receiving Bc-mts. In this context, studies assessing the benefit of a COVID-19 booster vaccination on the SARS-CoV-2-specific immune response yielded controversial results (7-10, 12). Analyzing anti-SARS-CoV-2(S)-antibody titers after the second, third, and fourth vaccine in OCR-pwMS, we found a significant increase in antibody levels, with rising rates of seropositivity (pv2: 30.4%, pv3: 56.5%, pv4: 90.0%). Likewise, the percentage of OCR-pwMS with positive anti-SARS-CoV-2(S)-antibody titers was higher pv3 compared to pv2 and pv4 compared to pv3, respectively. More than one third of OCR-pwMS with negative anti-SARS-CoV-2(S)-antibodies after two vaccinations had detectable antibodies after the third vaccine dose, which is in line with previous data (11) and validates that COVID-19 booster vaccination increases antibody titers. In general, it is surprising that patients receiving B cell depleting therapy are still able to mount an antibody response. This could be due to the fact that (i) usually not 100% of CD20-expressing B cells are depleted and that (ii) the CD20-negative B cell compartment also contributes to antibody production.
In accordance with previous observations (1, 3, 4, 30), anti-SARS-CoV-2(S)-antibodies positively correlated with peripheral B cell counts as well as with the time between the first vaccination and the last OCR cycle. Mechanistically, B cell progenitors differentiate to plasmablasts and plasma cells upon antigen stimulation which, in turn, produce antigen-specific immunoglobulins (31, 32). The negative correlation between anti-SARS-CoV-2(S)-antibodies and the number of previous OCR cycles underscores the long-lasting immunomodulation mediated by Bc-mts (33). The relevance of our findings regarding the clinical protection of pwMS from SARS-CoV-2 infection and COVID-19 is particularly important for clinical practice. In pwMS receiving Bc-mts, it may help to optimize vaccination schemes and patient monitoring. Although pwMS treated with Bc-mts and S1P-modulators have an attenuated humoral and/or T cellular immune response, our data did not point towards a higher probability of SARS-CoV-2 infection or towards a more severe disease course among such patients. In fact, the amount of asymptomatic and symptomatic SARS-CoV-2 infections was comparable between pwMS on different DMTs, pwMS w/o DMTs, and HC. Of note, none of the pwMS experienced a severe COVID-19 disease course requiring hospitalization or ICU treatment. This indicates effective vaccine-mediated clinical protection of pwMS irrespective of treatment regimen. However, given the low patient numbers, it seems a bit premature to draw definitive conclusions from our cohort. Multicenter studies will be required to address this question. Furthermore, it is conceivable that our results were influenced by differences in protective behavior between cohorts, which should also be addressed in future studies. In addition, in-depth analysis could not identify significant differences in anti-SARS-CoV-2(S)-antibody or SARS-CoV-2specific T cell response pv2 between patients who became infected with SARS-CoV-2 and non-infected patients. Thus, even OCR-pwMS who were not able to mount a sufficient humoral immune response following SARS-CoV-2 vaccination were effectively protected from severe COVID-19. Taking into account the preserved SARS-CoV-2-specific T cell response in nearly all OCR-pwMS, this might be the result of a compensatory SARS-CoV-2-specific T cell response as described previously (1). In this regard, comparison of abs -IGRA + -and abs + IGRA + -OCR-pwMS revealed similar risks of SARS-CoV-2 infection and symptomatic COVID-19, further corroborating this assumption. Correspondingly, previous studies emphasize the importance of a robust T cell response for clinical protection, especially from severe COVID-19 disease courses (34). Accordingly, in an animal model, T cells mediated effective clinical protection from COVID-19 even in the absence of an antibody response (35). Furthermore, we found that non-infected pwMS had received significantly more vaccinations compared to patients who were infected with SARS-CoV-2 or suffered from COVID-19. This emphasizes the positive effects of COVID-19 booster vaccinations even in pwMS who are not able to mount a sufficient SARS-CoV-2-specific humoral immune response. This is in concordance with a previous study reporting a significant reduction in SARS-CoV-2 infection after the third vaccine dose (36). Interestingly, pwMS infected with SARS-CoV-2 tended to be younger than non-infected patients. In the general population, the risk for SARS-CoV-2 infection seems to be similar among different age groups. 
Nevertheless, the risk for hospitalization and death due to COVID-19 significantly increases with age (37). In our cohort of pwMS, none of the patients experienced a severe COVID-19 disease course. This might be due to a generally more protective behavior among those patients, especially older pwMS. Apart from the differences in age between infected and non-infected patients, we found more female than male OCR-pwMS to be infected with SARS-CoV-2. Gender aspects in the COVID-19 pandemic have been previously assessed (38,39). In this context, a higher infection risk was observed among women at working age, which was attributed to differences in social behavior with women having a higher number of contacts (38). In contrast, male sex was identified as a risk factor for death due to COVID-19 (39). Extending our knowledge on gender aspects is important for optimal prevention and treatment of diseases as for example COVID-19. Regarding the safety profile of COVID-19 vaccines, the short-term safety profile seems to be favorable among pwMS as revealed by our own study and previous ones (1,40,41). Even one year after the first vaccination, no relevant side effects were observed in our cohort demonstrating the safety of COVID-19 vaccines for pwMS. Side effects were even less pronounced in pwMS compared to HC, which might be due to the differences in age between groups as correlation analysis revealed a negative correlation between the number of side effects and age. Increased tolerability of SARS-CoV-2 vaccines in the elderly population has been previously described (42). Changes in the immune response in the sense of immunosenescence might contribute to this observation (42). With regard to disease activity, in two patients, the first MS relapse leading to an eventual diagnosis of RRMS occurred in close temporal association with COVID-19 vaccination. In addition, one RRMS patient without DMT reported MS symptoms fulfilling the criteria for a relapse. However, given the high prevalence of RRMS within the population, the natural relapsing-remitting disease course, and the absence of relapses in pwMS receiving DMTs, an association between COVID-19 vaccination and MS relapses seems unlikely. Although relapses in association with vaccination have been reported (43, 44), a prospective, multicentric observational study could not find an increased short-term risk of clinical relapses after mRNA COVID-19 vaccination (41). However, further large prospective long-term studies are necessary to clarify this issue. We acknowledge that our study is limited by its retrospective design and the high variability in time between vaccination and anti-SARS-CoV-2(S)-antibody testing. This was primarily due to data acquisition during routinely scheduled clinical workups. It is therefore conceivable that weak antibody responses following vaccination have not been detected, especially in patients with long latencies between analysis of anti-SARS-CoV-2(S)antibodies and last vaccination. Furthermore, no data on anti-SARS-CoV-2(S)-antibody titers were available for the control groups and no information on the SARS-CoV-2-variants were available, which might impact disease severity. Another limitation of the study was that the type of vaccine was not captured in all cases. Patients received vaccines from various companies. Thus, the overall patient number in every subgroup would have been too small to perform a meaningful statistical analysis regarding vaccine-type associated effects. 
On the other hand, the large cohort of pwMS receiving different DMTs, the inclusion of HC, and the analysis of the humoral and T cellular vaccine-induced immune response in combination with SARS-CoV-2 infection status and COVID-19 disease course are the main strengths of our study. Regarding potential bias, we did not subselect OCR-pwMS despite reasonable exclusion criteria (e.g., based on EDSS, disease course, disease duration, number of previous OCR cycles). In addition, the same methods were used for all pwMS as well as for HC and negative results were included in the manuscript. In conclusion, additional COVID-19 vaccinations can boost the humoral immune response in OCR-pwMS and are associated with improved clinical protection against SARS-CoV-2. COVID-19 vaccines mediate effective clinical protection of OCR-pwMS irrespective of the anti-SARS-CoV-2 (S)-antibody status indicating compensatory, e.g., T cell mediated, immunological mechanisms. Data availability statement The raw data supporting the conclusions of this article will be made available by the authors, without undue reservation. Ethics statement The studies involving human participants were reviewed and approved by Ethics Comittee, Faculty of Medicine, Heinrich Heine University Düsseldorf, Düsseldorf, Germany. The patients/participants provided their written informed consent to participate in this study.
2022-12-23T14:12:57.097Z
2022-12-23T00:00:00.000
{ "year": 2022, "sha1": "673bddcc88679413c2a9805713cab6b40e1cc4d6", "oa_license": null, "oa_url": null, "oa_status": null, "pdf_src": "Frontier", "pdf_hash": "673bddcc88679413c2a9805713cab6b40e1cc4d6", "s2fieldsofstudy": [ "Medicine", "Biology" ], "extfieldsofstudy": [] }
220831077
pes2o/s2orc
v3-fos-license
Fields of definition of elliptic fibrations on covers of certain extremal rational elliptic surfaces We study K3 surfaces over a number field $k$ which are double covers of extremal rational elliptic surfaces. We provide a list of all elliptic fibrations on certain K3 surfaces together with the degree of a field extension over which each genus one fibration is defined and admits a section. We show that the latter depends, in general, on the action of the cover involution on the fibers of the genus 1 fibration. Introduction One main distinction of K3 surfaces, among others, is that they form the only class of surfaces that might admit more than one elliptic fibration with section, which is not of product type [17,Lemma 12.18]. It is therefore a natural problem to classify such fibrations. This has been done in the past three decades, via different methods by several authors, see for instance [15], [14], [7], [2], [3], [6] and [1]. Recently, the second and third authors have proposed a new method to classify elliptic fibrations on K3 surfaces which arise as double cover of rational elliptic surfaces. We refer the reader to [5] and [6] for more details. Let X be a K3 surface obtained as a double cover of an extremal rational elliptic surface defined over a number field k. The purpose of this paper is to determine fields of definition of the distinct elliptic fibrations on X, i.e., fields over which the classes of the fiber and of at least one section are defined (see Def. 2.2). We also determine, in some examples, an upper bound for the degree of the field over which the Mordell-Weil group admits a set of generators. Extremal rational elliptic surfaces have been classified by Miranda and Persson in [10]. There are sixteen configurations of singular fibers on such surfaces. We restrict further our attention to smooth double covers of extremal rational elliptic surfaces with distinct reducible fibers, i.e. such that there are no two reducible fibers of the same Kodaira type. Given a genus 1 fibration on such a K3 surface, we show that it admits a section over a field that depends on the action of the cover involution on its fibers (see Theorem 5.3). We illustrate this last result for K3 surfaces that arise as a double cover branched over two smooth fibers of the extremal rational elliptic surfaces with one unique reducible fiber and also on smooth double covers of the surface with fiber configurations either (III * , I 2 ) or (III * , III). Remark that among those sixteen configurations of singular fibers on extremal rational elliptic surfaces only four of them have a unique reducible fiber, namely (I 9 , 3I 1 ), (II * , II), (II * , 2I 1 ) and (I * 4 , 2I 1 ). As only the configuration of reducible fibers plays a role in our arguments, we narrow these down to three classes and study those extremal rational elliptic surfaces, denoted by R 9 , R 2 , and R 4 and the corresponding K3 surfaces X 9 , X 2 , and X 4 , respectively. We denote by R 3 an extremal rational elliptic surface with fibers either (III * , I 2 ) or (III * , III) and its generic K3 cover X 3 . Notice that the surface X 4 also occurs as a double cover of R 3 and hence, X 3 and X 4 belong to the same family of K3 surfaces. A reason to explore elliptic fibrations on X i , i = 2, 3, 4, 9 is that they have different behavior with respect to the cover involution of X i → R i . 
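For orientation, the double covers X i → R i appearing above can be written down explicitly in Weierstrass form; the following is a standard illustrative sketch (not an equation taken from this paper), assuming the two branch points of the base double cover are placed at t = 0 and t = ∞ and that the fibers over them are smooth.

% If E_R : R -> P^1 has generic fiber given over k(t) by
%   y^2 = x^3 + A(t) x + B(t),
% and d : C \cong P^1 -> P^1 is the degree-2 map t = s^2 (branched at t = 0 and t = \infty),
% then the induced fibration E_X on the K3 cover X has generic fiber
%   y^2 = x^3 + A(s^2) x + B(s^2) over k(s),
% and the cover involution acts by tau : (x, y, s) -> (x, y, -s).
\begin{aligned}
  E_R:\quad & y^2 = x^3 + A(t)\,x + B(t), && t \in \mathbb{P}^1,\\
  E_X:\quad & y^2 = x^3 + A(s^2)\,x + B(s^2), && t = s^2, \qquad \tau(x,y,s) = (x,y,-s).
\end{aligned}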
Fibrations that are preserved by this involution are easier to describe via linear systems of curves on a rational surface, and one can also exhibit a Weierstrass equation for those as pointed out in [1] and [6]. In particular, on X 3 and X 4 , which can be identified, we have two different involutions (induced by the covers X 4 → R 4 and X 3 → R 3 ) and the behavior of each fibration on X 3 X 4 with respect to these two involutions can be different. This paper is organized as follows. In Section 2 we introduce the notations which remain in force during the paper and lay down our setting. Section 3 is devoted to the study of rational curves on the K3 surface X obtained as a double cover of a rational elliptic surface R. More precisely, motivated by the work done in [5] and [6], we study the behavior of the image by the quotient map π : X → R of rational curves on X and we determine the rational curves on X coming from a section defined over k of the elliptic fibration E R . While Section 3 is of geometric nature, Section 4 is dedicated to study the arithmetic of extremal rational elliptic surfaces defined over k. In particular, we obtain the quite intriguing fact that with a possible unique exception all extremal rational elliptic surfaces can be obtained, over the ground field, as a blow-up of base points of a pencil of genus one curves in P 2 or P 1 ×P 1 , Lemma 4.6. Section 5 is dedicated to the study of K3 surfaces coming from double covers of extremal rational elliptic surfaces. We prove in Theorem 5.3 that a genus 1 fibration on X admits a section over a field which depends on the action of the cover involution on the fibers of the genus 1 fibration. Finally, in Sections 6 and 7 we illustrate the previous result. More precisely, in Section 6 we give a classification of elliptic fibrations on the surface X 9 given by a generic double cover of an extremal rational elliptic surface R 9 with an I 9 . We present a fiber class corresponding to each fibration on X 9 using sections and components of the reducible fibers of the fibration induced by the elliptic fibration on R 9 . We also study the Mordell-Weil groups of each fibration and the fields of definition of the fibrations and their Mordell-Weil groups. Section 7 has similar results for the K3 covers of the rational elliptic surfaces R 2 , R 3 and R 4 , with reducible fibers (II * ), (III * , I 2 ) and (I * 4 ), respectively. 1.1. Relation to the literature. Fields of definition of the Mordell-Weil group of non-isotrivial elliptic surfaces were studied independently by Swinnerton-Dyer in [19] and Kuwata in [8] via different methods than the ones presented here. While the first focused on elliptic surfaces fibered over P 1 , the latter dealt with basis of arbitrary genus. Nevertheless, both works are concerned with more general elliptic surfaces than the scope of this paper. In Kuwata's work he supposes that each component of the reducible fibers is defined over the ground field k. Let E be the generic fiber of an elliptic surface defined over k with base curve C. He proves that there is an explicitly computable number m and an explicitly computable extension L/k such that mE(k(C)) = mE(L(C)). Our work differs from Kuwata's in several ways. Firstly, while he focus on one unique elliptic fibration on a surface, we consider one elliptic fibration which we assume is defined over some number field k and use it as a point of start to study the other elliptic fibrations present on the surface. 
Thus in our work, one elliptic fibration is defined over the ground field, while the others not necessarily. For that reason we are concerned with different fields of definition, namely the one of the elliptic fibration and that of the Mordell-Weil group. Secondly, we focus on an specific class of surfaces, namely K3 surfaces. The further assumption that the K3 is a double cover of an extremal rational elliptic surface guarantees that the fields of definition will be much smaller than those for arbitrary elliptic surfaces. Indeed, fields of definition of the Mordell-Weil group of an elliptic surface can be quite large, for instance in [19] Swinnerton-Dyer constructed an elliptic surface for which the field of definition of the Mordell-Weil group has degree 2 7 .3 4 .5, and the degrees of the fields of definition in Kuwata's work are also much larger than the bounds obtained here. Finally, it is worth to mention that Kuwata's work deal with fields of arbitrary characteristic while we focus on number fields. We expect that our work allows generalizations to that setting and the restriction has been made for the matter of simplicity but also because some of our work builds up on Miranda and Persson's work in [10], and on two of the author's paper [5]. Both settings are restricted to characteristic zero. Preliminaries and setting Let R be a rational elliptic surface, i.e. a smooth projective rational surface endowed with a relatively minimal genus one fibration. We assume throughout this article that such a fibration admits a section. We denote by E R : R → P 1 the elliptic fibration on R. Let d : C → P 1 be a double cover of P 1 branched on 2n points p i , i = 1, . . . , 2n. Then the fiber product R × P 1 C is endowed with an elliptic fibration R × P 1 C → C, induced by E R . We call the fibers E −1 R (p i ), i = 1, . . . , 2n, the branch fibers. If all the branch fibers are smooth, then the fiber product R × P 1 C is smooth, and we denote it by X. Otherwise, R × P 1 C is singular and we denote by X its smooth model such that the elliptic fibration E X : X → C, induced by E R , is relatively minimal. Assume that R, the fibration E R and the zero section O are all defined over a given number field k, which we fix once and for all. If the morphism d is defined over k then so is the fiber product, its possible desingularization X and the inherited elliptic fibration E X . The surface R × P 1 C is naturally endowed with an involution, namely the cover involution of the map R× P 1 C → R induced by the 2 : 1 map d : C → P 1 . It extends to an involution τ ∈ Aut(X) which is the cover involution of the generically 2 : 1 cover X → R. we denote by π the quotient map π : X → X/τ bir R. From now on we make the following assumptions. • d : C → P 1 is defined over k, • n = 1, i.e. d : C → P 1 is branched in two points. Hence C P 1 , • the (two) branch fibers are reduced. As a consequence of the previous assumptions we have that X is a K3 surface over k (see [17,Example 12.5]), the involution τ is non-symplectic, i.e. it does not preserve the symplectic form defined on X, since the quotient of a K3 by a symplectic involution is again a K3 surface (see [13]), and both E X and its zero section are defined over k. Moreover, if the branch fibers are smooth, the reducible fibers of E X occur in pairs that are exchanged by τ . Notation 2.1. We denote by τ * the involution induced by τ on NS(X). We recall that, due to their geometry, i.e. 
trivial canonical class and regularity, K3 surfaces might admit more than one elliptic fibration, all with basis P 1 , see for instance [17,Lemma 12.18]. Let X be as above, then it admits an elliptic fibration E X and at least another elliptic fibration different from E X [3, §8.1] and [6, Proposition 2.9]. One can divide the elliptic fibrations on X in three different classes, depending on the action of τ on its fibers. In particular, let η be an elliptic fibration on X then, by [5,Section 4.1], it is • of type 1 with respect to τ , if τ preserves all the fibers of η; • of type 2 with respect to τ , if τ does not preserve all the fibers of η, but maps a fiber of η to another one. In this case τ is induced by an involution of the basis of η : X → P 1 . It fixes exactly two fibers and τ * preserves the class of a fiber of η; • of type 3, if τ maps fibers of η to fibers of another elliptic fibration. In this case τ * does not preserve the class of the generic fiber of η. The distinct elliptic fibrations on X are not necessarily defined over k. Moreover, different fibrations might be defined over different fields. The aim of this paper is to take a first step into understanding how the action of the involution τ on the fibers of a given fibration might influence its field of definition. Throughout this paper we adopt the following definition. Definition 2.2. Given X as above and an elliptic fibration η on X, then the smallest field extension of k over which the class of a fiber of η is defined and η admits a section is called the field of definition of the fibration η. We denote it by k η . We denote by k η,MW the smallest field extension of k η over which the Mordell-Weil group of η admits a set of generators. Remark 2.3. The reader should be aware that in Def. 2.2 our starting data is a K3 surface X constructed as a base change of a rational elliptic surface R. Thanks to this construction X inherits an elliptic fibration from R which is defined over a number field k. All other fields of definition that appear in this paper are (possibly trivial) field extensions of k. In this sense, the field of definition is unique, but when considering X without this preliminary data then the field is no longer necessarily unique. Indeed, one could for instance obtain the same X as a double cover of another rational elliptic surface R defined over a different field k . Rational curves on K3 surfaces Let X be a K3 surface as in Section 2. In this section we study the behavior of the image by the quotient map π of the rational curves on X. As in the case of elliptic curves, this behavior depends on the action of the cover involution τ on the rational curve. Lemma 3.1. Let C be a smooth rational curve on X and D = π(C) its image on R. Denote by m the intersection number C · τ (C). Then D is of one of the following types. Proof. Let C be a smooth rational curve on X and D = π(C). By the adjunction formula we have that C 2 = −2. We consider the following cases τ (C) = C and τ (C) = C. (1) τ (C) = C. In this case, the involution can either act as the identity on C or as an involution of C. If the former holds then D is a (−2)-curve on R and therefore it is a component of a fiber of E R . If τ acts as an involution on C then since π * (C) = 2D, we have that Hence D 2 = m − 2. By the adjunction formula we have that D(−K R ) = m. To conclude it is enough to recall that the class of a fiber of the elliptic fibration on R is given by −K R . 
Thus, D is an m-section of E R if m > 0, or a fiber component of E R if m = 0. Moreover, if π is branched over two different smooth fibers, τ (C) = C implies that τ is an involution of C, and thus D is a section of the elliptic fibration E R . Hence if D is a component of a fiber one must have τ (C) = C, i.e., case (2) with m = 0. The next lemma deals with rational curves on X that come from sections defined over k of the elliptic fibration E R . As sections do not split on the double cover we show that their inverse image is a irreducible curve defined over k. Lemma 3.2. Let P R be a section of E R : R → P 1 that is defined over k, then P X := π −1 (P R ) is an irreducible smooth rational curve of X and τ (P X ) = P X . In particular P X is defined over k. Proof. If P R is a section of an elliptic fibration on a rational surface then it meets the branch locus of R × P 1 P 1 → R, which is given by two fibers, in two points. Thus its inverse image is a 2 : 1 cover of a rational curve branched in two points, i.e. either an irreducible smooth rational curve or the union of two smooth rational curves meeting in two points. If the inverse image of P R is the union of two curves, say P 1 and P 2 , we have π * (P R ) = P 1 + P 2 . Since the inverse image of a fiber F R , which is not a branch fiber, consists of two disjoint fibers, we have π * (F R ) = (F 1 + F 2 ). But then we would have π * (F R )π * (P R ) = 2 = (F 1 + F 2 )(P 1 + P 2 ) = 2(F 1 P 1 ) + 2(F 1 P 2 ), where we used that F 1 and F 2 are linearly equivalent, since they are fibers of the same fibration on X. This would implies that either P 1 or P 2 is a component of a fiber, which is not possible, because they intersect in two points which lie in two different fibers, namely the ramification fibers. We conclude that π −1 (P R ) is a smooth rational curve. Even if one has to blow up some points to obtain X from R × P 1 P 1 , the strict transform of the inverse image of P R , which we denote by P X , remains irreducible and thus τ (P X ) = P X . Since the double cover map d is assumed to be defined over k and so are the points that one has to possibly blow up, we have that P X is also defined over k. Extremal rational elliptic surfaces In what follows we analyze the arithmetic of extremal rational elliptic surfaces defined over k. Let us recall that an extremal rational elliptic surface has Mordell-Weil rank equal to 0, and thus only finitely many sections, i.e. (−1)-curves. Proof. There are two main ingredients in the proof of the statement. The first one is the Shioda-Tate formula which tells us that , · · · , n v − 1} and, since the surface is extremal, MW(E R ) is a finite group. The second is the fact that the absolute Galois group Gk acts on NS(R) preserving the intersection pairing. Recall that both the zero section O and the class of a smooth fiber F are defined over k. A reducible fiber with exactly two components has each component defined over k since the component that intersects the zero section is preserved. Thus in what follows we can focus on reducible fibers with at least three components. By the hypothesis on the reducible fibers being distinct, there are at most two such fibers, say F v1 and F v2 , see the table in [10,Thm. 4.1]. Assume w.l.o.g that F v1 is the fiber with more reducible components. Each reducible fiber is globally defined over k because, by assumption, it is unique. Hence its trivial component is also defined over k. 
Since the latter intersects at most two other components, these are Gk-conjugate and as a pair they form a Gk-orbit. The same happens to all other components that are not defined over k. Let k R /k be the quadratic extension over which the fiber components of F v1 are defined. We show that each section is defined over k R . The Mordell-Weil group is globally defined over k since its elements are precisely the (−1)-curves in the Néron-Severi group. Moreover because each section C intersects transversally a unique fiber component of F v1 , the point of intersection is mapped by any element in Gk to another point of intersection of a component of F v1 and a section. Since a component of a fiber is mapped by Gk either to itself or to a unique other fiber component defined over k R , the intersection point is also defined over k R . Thus C is a rational curve with a k R -point and hence it is also defined over k R . It remains to show that the components of F v2 are defined over k R . This follows from the fact that after contracting the sections and certain fiber components of F v1 we reach either P 2 or P 1 × P 1 . The components of F v2 are thus rational curves with k R -points that correspond to the contracted curves, and hence are defined over k R as well. Example 4.2. The extremal rational elliptic surface with Weierstrass equation has reducible fibers of types I * 1 and I 4 . Its Mordell-Weil group is Z/4Z with two sections defined over Q, namely [0, 1, 0] and [t 2 −2t, 0, 1], and two conjugate sections, namely , which are defined over a quadratic extension. The reader can find this example as X 141 in [10, Table 5.2]. The next example shows that the hypothesis on the distinct reducible fibers is indispensable in Lemma 4.1. has four reducible fibers of type I 3 . Its Mordell-Weil group is defined over a biquadratic extension Q(i, √ 3). This corresponds to the surface X 3333 in [10, Table 5.3]. See also Remark 4.5, iii). Notation 4.4. In what follows, we keep the notation introduced in Lemma 4.1 and denote by k R the extension of k over which the Néron-Severi group NS(R) admits a set of generators given by fiber components and sections of the elliptic fibration on R, and by G R the Galois group Gal(k R /k). We keep the subscript R for the Galois group to reinforce the dependence on the surface. By Lemma 4.1, if the Kodaira types of the reducible fibers of E R are different then k R /k has degree at most 2. iii) Extremal rational elliptic surfaces with repeated reducible fibers have their Néron-Severi group defined, in general, over extensions of larger degree. For instance, a rational elliptic surface with reducible fiber configuration (2I 5 ) has, in general, its Néron-Severi group defined over an extension of degree four, with cyclic Galois group (see the proof of Lemma 4.6), while a surface R 0 with (2I * 0 ) has, in general, NS(R 0 ) defined over an extension of the ground field with Galois group given by the dihedral group of order 12. Indeed, the Galois group is generated by an involution which preserves each section and switches the two I * 0 -fibers and by S 3 which preserves the fibers, and permutes the non-trivial elements of MW(R 0 ) = (Z/2Z) 2 . Minimal models for extremal RES over k. We recall that every rational elliptic surface defined over and algebraically closed field of characteristic zero can be obtained as the blow-up of the base points of a pencil of generically smooth cubics, [4, §5.6.1] or [9, Lemma IV. 1.2.]. 
This fact clearly does not hold, in general, over a number field k. For instance, the blowup of the base point of the anti-canonical linear system of a k-minimal del Pezzo surface of degree one is a rational elliptic surface defined over k which does not admit a blow down to P 2 as it is clearly not even k-rational. On the other hand, if one restricts our attention to extremal rational elliptic surfaces then one can show that they are always k-rational, with possible exception given by those with reducible fiber configuration (2I * 0 ) 1 . Still this is not enough to assure that they can be obtained as a blow-up of the projective plane. Indeed, we provide an example in Proposition 6.1 for which this does not hold. Nonetheless, we obtain a quite intriguing fact, namely that with a possible exception of surfaces with configuration (2I * 0 ), all extremal rational elliptic surfaces can be obtained, over the ground field, as a blow-up of base points of a pencil of genus one curves in P 2 or P 1 × P 1 , in Lemma 4.6. Despite its simple proof, this intriguing fact is not in the literature and likely not known to many experts. Since an extremal rational elliptic surface has finite Mordell-Weil group, it has only finitely many curves of negative self-intersection [9, Proposition VIII.1.2]. The Galois group G R acts on NS(R) preserving the intersection pairing. Since, by hypothesis, the zero section of the fibration E R is defined over k it is always preserved by G R . From now on we will use the following notation for the irreducible components of a reducible fiber: the component which intersects the zero section will be denoted by C 0 ; in a fiber of type I n , the components C i , i ∈ Z/nZ are numbered requiring that C i C j = 1 if and only if |i − j| = 1. Lemma 4.6. Let R be an extremal rational elliptic surface defined over k with at most one non-reduced fiber. Then R is k-isomorphic to the blow-up of the base points of a pencil of cubic curves in P 2 or a pencil of curves of bidegree (2, 2) in P 1 × P 1 . In particular, such surfaces are always k-rational. Proof. We recall that the Galois group G R preserves the zero section, maps a fiber of a certain Kodaira type to a fiber of the same Kodaira type and maps sections to sections. The consequences are the following: , then G R maps each section to itself; ii) If every reducible fiber is of different Kodaira type, then G R maps the zero component of each fiber to itself; iii) If both i) and ii) are satisfied, then G R is trivial since in that case the fiber with most components is a non-reduced fiber of type II * , III * or I * 4 (see the table in [10, Thm. 4.1]) and each component is preserved by the Galois group because the zero section and the two torsion are preserved and defined over k; iv) If there is a fiber which is preserved by G R as, for example, in case ii) and it is either of type I n or of type IV * , then G R restricted to that fiber and to MW(E R ) acts trivially or as the hyperelliptic involution because it has to preserve the intersection properties of the components of the reducible fiber of type I n . Using these properties of G R , one is able to find an explicit contraction γ defined over k, which maps a rational elliptic surface R either to P 2 or to P 1 × P 1 for all the extremal rational elliptic surfaces R with reducible fiber configuration different from (2I * 0 ). 
Fibrations (II * , II), (II * , 2I 1 ), (III * , III), (III * , I 2 , I 1 ), (I * 4 , 2I 1 ): G R is trivial because iii) in the previous list is satisfied. One first contracts all the sections, then contracts the image of the components of the fibers II * , III * , I * 4 , respectively, that are the (−1)-curves after the previous contractions. One iterates this process in order to contract 9 curves. The composition of all these contractions is a map R → P 2 , defined over k. Fibrations (IV * , IV ), (IV * , I 3 , I 1 ): by iv), G R acts trivially or coincides with the hyperelliptic involution. After contracting all the sections, one obtains three (−1)-curves in the image of the IV * -fiber. One is preserved by G R , the other two might be exchanged by it. After contracting these three curves, one is in a similar situation, i.e. there are three (−1)-curves, forming two or three orbits for G R . After contracting also these three curves, one obtains a k-rational map from R to P 2 . Fibrations (I 9 , 3I 1 ), (I 8 , I 2 , 2I 1 ), (I 6 , I 3 , I 2 ): by (iv), G R acts trivially or coincides with the hyperelliptic involution. First one contracts all the sections. Then one contracts some curves in the image of the fibers of type I 9 , I 8 and I 6 respectively, but not in the other reducible fibers. For the fiber I 9 one contracts the images of the components C 0 , preserved by G R , and of C 3 and C 6 , which are either fixed or switched by G R ; after that one contracts the images of the curves C 2 and C 7 , which are also either fixed or switched by G R . For the fiber I 8 one contracts the images of components C 0 and C 4 , which are preserved by G R , and of the components C 2 and C 6 , which are either fixed or conjugate under G R . For the fiber of type I 6 one contracts the images of components C 0 and C 3 , which are preserved by G R . In all the cases one obtains a k-rational map from R to P 1 × P 1 . Fibrations of type (4I 3 ) and (2I 4 , 2I 2 ): in both these cases there are many sections, namely 9 sections in case (4I 3 ) and 8 in case (2I 4 , 2I 2 ). Since the torsion sections are disjoint and G R preserves MW(E R ), one can contract simultaneously all the sections. This produces a k-rational map to P 2 in case (4I 3 ) and to P 1 × P 1 in case (2I 4 , 2I 2 ). Fibration of type (2I 5 , 2I 1 ): we have G R ⊆ Z/4Z, and if G R = Z/4Z then the action of the generator of G R is the following. To obtain a k-rational map to P 2 , one first contracts all the sections, and then one contracts the components C 4 , which form an orbit if G R = Z/4Z. Fibration of type (I * 2 , 2I 2 ) and (I * 1 , I 4 , I 1 ): one contracts first the four sections and then the images of the four simple components of the fiber of type I * i . This gives a k-rational map to P 1 × P 1 . i) If m is odd and R has a unique reducible fiber then R admits a contraction over k to P 1 × P 1 . ii) If m is odd and R has at least two reducible fibers then R admits a contraction over k to P 2 . iii) If m is even then R admits a contraction over k to P 1 × P 1 . Proof. The result follows by the proof of the previous lemma. Indeed, if R is a semi-stable extremal elliptic fibration and m is odd, then the fibration on R is one of the following: (I 9 , 3I 1 ), (2I 5 , 2I 1 ), (4I 3 ). The first fibration corresponds to case i) and can be contracted to P 1 × P 1 , for every action of G R . The other two fibrations correspond to the case ii) and it was already proved that they can be contracted to P 2 . 
If m is even (case iii)), then the fibration on R is one of the following: (I 8 , I 2 , 2I 1 ), (I 6 , I 3 , I 2 ), (2I 4 , 2I 2 ) and in the proof of the previous lemma is shown that all of them can be contracted to P 2 . Remark 4.8. The converse of the different cases in Proposition 4.7 is not always true; some of the surfaces treated in Lemma 4.6 can be contracted, over an algebraically closed field, to both P 2 and P 1 × P 1 . Whether or not these surfaces can be contracted to both P 2 and P 1 × P 1 over k as well depends on the action of G R , and in particular on the action of the hyperelliptic involution on the reducible fibers. See Proposition 6.1 and Figure 1, where we show this for a surface with fibers (I 9 , 3I 1 ). Double covers of extremal rational elliptic surfaces In the rest of this article we consider K3 surfaces that are double covers of extremal rational elliptic surfaces defined over k and branched on two smooth Gk-conjugate fibers. Let X be such a surface. Recall that since the extremal rational elliptic surfaces considered here 2 are rigid, their K3 double covers have a 2-dimensional moduli space, as each branch point is allowed to vary in P 1 . In this section we show that the field over which a genus one fibration on X admits a section depends on the action of the cover involution on the fibers of the genus one fibration. Notation 5.1. Let R and X be as above and t 1 , · · · , t m ∈ P 1 k points over which the reducible fibers of R are located. Since the base change map X → R is branched only over smooth fibers, there are two distinct points above each t i . Then τ restricted to the pair of fibers of E X above each t i is a field homomorphism, which we denote by σ i . We denote by k τ the Galois field extension of k whose Galois group is generated by σ 1 , · · · , σ m . By construction k τ /k is an extension of even degree dividing 2 m . We denote by k R,τ the compositum of the fields k R and k τ . Lemma 5.2. Let R be an extremal rational elliptic surface as above and X a generic member of the 2-dimensional family given by double covers of R branched in two smooth fibers. Then NS(X) admits a set of generators over k R,τ . Proof. Since the Néron-Severi group has rank 10 and the Mordell-Weil group has rank zero, it follows from the Shioda-Tate formula that the reducible fibers of an extremal rational elliptic surface R have in total 8 components contributing to the set of generators of NS(R). Since X is a double cover of an extremal rational elliptic surface R branched on smooth fibers, the reducible fibers of the inherited fibration E X contribute with 16 components to a set of generators of NS(X). If X is generic among such surfaces then it lies in a 2-dimensional family and hence NS(X) has rank 18 and is generated by fiber components, the zero section and a smooth fiber of E X . All such curves are defined at most over k R,τ . Theorem 5.3. Let R be an extremal rational elliptic surface defined over k such that its reducible fibers are all of distinct Kodaira types. Let X be a K3 surface obtained as a double cover of R branched on two smooth fibers conjugate under Gk, τ the cover involution and η a genus 1 fibration on X. Then the following hold. i) If η is of type 1 w.r.t. τ then η is defined over k R and admits a section over k R,τ . ii) If η is of type 2 w.r.t. τ then it is defined and admits a section over k. Proof. 
For ii) notice that because the branch locus is smooth there is only one fibration of type 2, namely the one induced by the elliptic fibration on R. Indeed, different fibrations of type 2 correspond to different contractions of (−1)-curves in X/τ that are components of non-relatively minimal elliptic fibrations. Since the branch locus is smooth there are no (−1)-curves to be contracted and, in particular, X/τ R. Since the double cover morphism is defined over k so is the induced elliptic fibration on X and the zero section inherited from R. If η is of type 1 then each fiber is the pull-back of a conic 3 in R [5, Theorem 4.2]. Let C be such a conic. Since NS(R) is generated by curves defined over k R then the class of C has a divisor C 0 whose components are defined over k R . Moreover, as the fibers of η are fixed by τ , the pull-back C 0 is also defined over k R . Its class moves in X giving the elliptic fibration η. The fibrations of type 3 are certainly more difficult to study by using the geometry related with R. Indeed, even if X is a double cover of R, the fibrations of type 3 are not easily related with the geometry of R, by definition, since they are not preserved by the cover involution. But, one is still able to prove that certain fibration of type 3 are defined on certain fields, if one is able to find components of their reducible fiber is a proper way, as observed in the next Remark. Remark 5.4. Since the irreducible components of reducible fibers and of the sections of the elliptic fibration on K3 surface are rational curves, they are rigid in their class. So if their class is defined over a certain field, say k R,τ , and they are irreducible curves, then they are defined over k R,τ . Suppose now that the Néron-Severi group is defined over k R,τ and it is generated by a certain set of classes of irreducible rational curves. If the union of some of these curves is a reducible fiber F of a fibration η, then the reducible fiber F and its class are defined over k R,τ . In particular the fibration η is defined on k R,τ and if also a section of η can be found among the generators of the Néron-Severi, then η is an elliptic fibration on k R,τ . So, in order to prove that a fibration of type 3 defined on a K3 surface satisfying the assumptions of Theorem 5.3, is defined over k R,τ , it suffices to find among the generators of NS(X) a configuration of (−2)-curves which corresponds to a reducible fiber of η. Remark 5.5. We believe that it is always possible to find a fibration of type 3 as in the previous remark, at least for the K3 surfaces X as in Thereom 5.3. We are able to prove this for all the elliptic fibrations of type 3 on the surfaces considered in Sections 6.3, and 7.3 of this paper. Hence for all the surfaces considered in this paper, we have that the field of definition of the elliptic fibrations on the K3 surfaces X as in Theorem 5.3 are at most biquadratic extension of k, by the explicit description of the elliptic fibration and the Remark 5.4. Remark 5.6. Certain sections on elliptic K3 surfaces as above might be defined over a smaller subfield of k R,τ that contains k. See, for instance, the fifth column of lines 2, 3, 4, 9, 11 and 12 in Table 2. Following the geometric classification of extremal rational elliptic surfaces by Miranda and Persson [10, Theorem 4.1], we notice that, among those surfaces, only four of them have only one reducible fiber, namely (I 9 , 3I 1 ), (II * , II), (II * , 2I 1 ) and (I * 4 , 2I 1 ). 
From a lattice theoretic point of view the surfaces with singular fibers (II * , II) and (II * , 2I 1 ) are the same since, from that perspective, only the reducible fibers matter. Moreover, they share the same properties of interest to us, namely reducible fibers and fields of definition of components of fibers and thus we denote both of them by R 2 . In the following sections, we study those extremal rational elliptic surfaces, denoted by R 9 , R 2 , and R 4 and their corresponding K3 surfaces X 9 , X 2 , and X 4 , respectively. We also study the surface R 3 which has two reducible fibers (III * , III) and its generic K3 cover X 3 . The justification for considering R 3 as well is the fact that the surface X 4 occurs also as double cover of R 3 and hence X 3 and X 4 belong to the same family of K3 surfaces. 5.1. Arithmetic models of extremal rational elliptic surfaces. Over algebraically close fields, all rational elliptic surfaces can be obtained by the blow up of the base points of a pencil of genus 1 curves in the projective plane. Over a number field k, this not longer holds true. Nevertheless, if one restricts attention to extremal rational elliptic surfaces, we have shown in Lemma 4.6 that, with one possible exception, they can be obtained as a blow up of a pencil of genus 1 curves in the plane or in the ruled surface P 1 × P 1 . The realization of the blow down of an extremal rational elliptic surface R to either rational minimal model is connected to, but not always determined by, the Galois group G R introduced in Notation 4.4. More precisely, given singular fiber configurations on an extremal rational elliptic surface might entail more than one possible action of the Galois group Gk on its fiber components and hence, with a few exceptions, it does not make sense anymore to speak about the extremal rational elliptic surface with a given configuration as one does over algebraically closed fields. In what follows we keep the notation R i and X i for a surface with fiber configuration described in the previous paragraph. We study what are the possible actions of Gk on each configuration. We show, in Propositions 6.1 and 7.1 respectively, that R 9 might admit two possible actions, while R 2 , R 3 , R 4 always admit a unique action. 6. The surfaces R 9 and X 9 Let R 9 be an extremal rational elliptic surface with one reducible fiber of type I 9 and X 9 a K3 surface obtained by a double cover of R 9 branched in two smooth Gk-conjugate fibers. In this section, we classify all the possible fibrations of the K3 surface X 9 and determine their types with respect to the cover involution τ 9 , a field over which the class of a fiber is defined and a field over which the Mordell-Weil group is defined. Negative curves on R 9 . Recall that the configuration I 9 is given by 9 smooth rational curves meeting in a cycle with dual graphà 8 (see [9, Table I. 4.1] 4 ). The singular fibers of R 9 are I 9 + 3I 1 and the Mordell-Weil group is Z/3Z = {O, t 1 , t 2 }, where O is the zero section and t 1 and t 2 are 3-torsion sections. The Néron-Severi group of R 9 contains also the classes of the irreducible components of the unique reducible fiber, denoted by C 0 , C 1 , . . . , C 8 . 
The intersections which are not trivial are the following The following result tells us that R 9 can always be obtained as the blow-up of the eight base points on a pencil of curves of bi-degree (2,2) in P 1 × P 1 , and that if the Galois group G R9 fixes each 3-torsion section then R 9 can also be obtained as the blow-up of the nine base points of a pencil of cubics in P 2 (see also Lemma 4.6). Both blow-ups occur in multiple points, i.e., points with assigned multiplicities. Proposition 6.1. If for every g ∈ G R9 = Gal(k R9 /k) we have g(t 1 ) = t 1 , then G R9 = {id} and R 9 can be contracted both to P 2 and to P 1 × P 1 . If there exists at least one g ∈ G R9 such that g(t 1 ) = t 1 , then g(t 1 ) = t 2 , G R9 = Z/2Z = g and g is the elliptic involution ι R9 restricted to the fiber I 9 . In this case R 9 can be contracted to P 1 × P 1 but not to P 2 . Proof. Let F be the class of a fiber of E R9 . Since F is preserved by G R9 , for each g ∈ G R9 we have 1 = t 1 F = g(t 1 )g(F ) and thus g(t 1 ) is necessarily a section. It is different from O as the latter is fixed by G R9 . Hence either g(t 1 ) = t 1 or g(t 1 ) = t 2 . We begin with g(t 1 ) = t 1 . In that case g(t 2 ) = t 2 and since t 1 intersects the fiber component C 3 and t 2 intersects C 6 , we have g(C 3 ) = C 3 and g(C 6 ) = C 6 . Since each other fiber component intersects one among C 0 , C 3 and C 6 , it is also fixed by g. Hence G R9 is trivial. We pass to the case g(t 1 ) = t 2 . This implies that g(C 3 ) = C 6 . The fiber components intersecting C 3 and C 6 must be switched by g and, a posteriori, so must C 1 and C 8 . We have g(C i ) = C 9−i . Hence, in that case, G R9 has order 2 and is generated by the elliptic involution. The following example illustrates the two different Galois actions that occur in Proposition 6.1. Figure 1. Two ways to contract the fiber I 9 Example 6.4. In Remark 6.3, we saw that the pencil of cubics given by P 9 = (z 0 z 1 z 2 ) + t(z 2 0 z 1 + z 2 1 z 2 + z 2 2 z 0 ) gives rise to an R 9 surface. A Weierstrass equation for this surface is y 2 = x 3 − (432t 3 + 10368)xt + 3456t 6 + 124416t 3 + 746496, and the Mordell-Weil group consists of three sections defined over Q, which are given by [0, 1, 0] and [12t 2 , ±864, 1]. We conclude from 6.1 that in this case we have G R9 = {id}. Another example of an R 9 surface is given by the Weierstrass equation which has Mordell-Weil group given by the section [0, 1, 0] and the two sections Table 5.3]. So the Mordell-Weil group of this surface is trivial over Q, and defined over the quadratic extension Q( √ 3). We conclude from Proposition 6.1 that in this case we have G R9 = Z/2Z, and the surface can not be contracted to P 2 . Let X 9 be a K3 surface obtained by a generic base change of order 2 on the rational elliptic surface R 9 as described in Section 2. Then the elliptic fibration E R9 : R 9 → P 1 induces an elliptic fibration E X9 : X 9 → P 1 on X 9 . We denote by ι X9 the elliptic involution on E X9 . We denote by τ 9 the cover involution of π : X 9 → R 9 . By definition the fibration E 9 is of type 2 with respect to τ . So, by Theorem 5.3, the field of definition of the elliptic fibration and of a section of it is k. Nevertheless there could be other sections or components of some reducible fibers which are not defined over k. Proof. The Néron-Severi group contains the 18 linearly independent classes O X9 , T 1 , T 2 and Θ j i , for i = 1, . . . , 8, j = 1, 2. Hence it has rank at least 18. 
On the other hand, the family of X 9 is a two-dimensional family (because of the choice of the two branch fibers of the double cover X 9 → R 9 ), so the Néron-Severi group has rank at most 18. We conclude that the 18 classes listed before form a basis of NS(X 9 ). The intersection form and the discriminant form of NS(X 9 ) can be explicitly computed, and one can check that the lattice has discriminant 9. In particular, one can exhibit an explicit generator of the discriminant group, and the discriminant form is Z/9Z(8/9), which is the opposite of the discriminant form of A 8 . The discriminant form of the transcendental lattice is the opposite of the discriminant form of the Néron-Severi group. Hence the transcendental lattice T X9 is an even lattice with signature (2, 2) and discriminant form Z/9Z(−8/9). The transcendental lattice is uniquely determined by these data by [12, Theorem 1.13.2]. We observe that the discriminant form of T X9 is the same as that of A 8 and that rank(T X9 ) + 4 = rank(A 8 ). Corollary 6.7. The field k E9 coincides with k R,τ . Proof. By Proposition 6.6 the classes of the reducible fibers and of the sections of E X9 form a basis of NS(X 9 ). Each of these classes corresponds to a unique curve (since these are classes of negative curves), which is a smooth rational curve. Hence the field over which all these classes are defined coincides with the field over which NS(X 9 ) is defined. The former is k E9 by definition, the latter is k R9,τ9 by Lemma 5.2. 6.3. Classification of all the possible fibrations of the K3 surface X 9 . In order to find all elliptic fibrations on X 9 , we use Nishiyama's method explained in [14]. As explained in [14, Section 6.1], if one is able to find a lattice T 0 which is negative definite, has the same discriminant form as the transcendental lattice of a K3 surface, and whose rank is the rank of the transcendental lattice plus four, then there is an effective method to classify the configurations of the reducible fibers of the elliptic fibrations on the surface. In our particular case, by Proposition 6.6, we put T 0 = A 8 , and in order to classify the elliptic fibrations on X 9 (and in particular the lattice W of each of these elliptic fibrations, in the notation of [14]) we have to find the orthogonal complements of the primitive embeddings of the root lattice A 8 in the 24 possible lattices listed (by their root type) by Niemeier [11, Satz 8.3] (or [14, Theorem 1.7]). By [14, Lemmas 4.1 and 4.3] we know that A 8 embeds primitively, uniquely up to the action of the Weyl group, in A m for m ≥ 8 and in D n for n ≥ 9, and in no other root lattice. The orthogonal complements of these embeddings in the 24 Niemeier lattices are then found in [14, Corollary 4.4], and this determines the reducible fibers and the rank of the Mordell-Weil group for each fibration. These results are summarized in Table 1. Note that line 1 is the fibration E X9 . Apart from the torsion part of the Mordell-Weil group, everything is found by Nishiyama's method as explained above. We compute the torsion parts in what follows. 6.3.1. Torsion of the Mordell-Weil group for the elliptic fibrations associated to X 9 . By [18, Table 1], we can immediately conclude that the torsion of the fibrations in lines 2, 3, 4, 5, 8, 9, and 12 is trivial, and that the torsion part of fibrations 6, 7, 10, and 11 is either Z/2Z or trivial. Fibration 11 comes from the orthogonal complement of the embedding of A 8 in a lattice N of rank 24 with root type A 24 . We observe that N/A 24 = Z/5Z ([11, Satz 8.3] or [14, Theorem 1.7]).
By [14, Lemma 6.6, iii)], the torsion of the elliptic fibration corresponding to this embedding of A 8 in N has to be contained in N/A 24 , so this fibration does not have a 2-torsion section and the torsion part of its Mordell-Weil group is trivial. Note that, in terms of the notation of our configuration of 2I 9 (see Figure 2), we find a fiber of type I 16 on X 9 together with a section of height 0 [17, Chap. 11 §11.8], and therefore a torsion section [17, Theorem 11.5]. Since we know that the fibration in line 11 has trivial torsion, and the fibration in line 7 is the only other one with a reducible fiber of type I 16 , we conclude that we have found a representation of the fibration in line 7, and therefore the torsion part of the Mordell-Weil group of this fibration is Z/2Z. Finally, we find that the torsion parts of the Mordell-Weil groups of the fibrations in lines 6 and 10 are Z/2Z in the same way as we did for line 7. Table 1. Elliptic fibrations of X 9 . 6.4. Determining the type of each fibration of X 9 . In what follows, we assume that the surface R 9 is general, i.e., its Galois group G R9 is not trivial. The goal of this section is to find an example of each fibration η in Table 1 and to determine for each example the following: a) the type with respect to the cover involution τ 9 ; b) an upper bound for the degree over k of a field of definition of the fibration, that is, a field over which the reducible fiber and a 0-section are defined; c) an upper bound for the degree over k of a field k η,MW over which the Mordell-Weil group of the fibration admits a set of generators. Corollary 6.8. For each fibration in Table 1, there exists at least one elliptic fibration on X 9 with the properties given in the list which is defined over k R9,τ9 . Proof. The result follows from Theorem 5.3 for the fibrations of types 1 and 2. For the fibrations of type 3, one applies Remark 5.4. For all the listed fibrations, with the exception of fibration 11, we are able to write the class of the fiber as a linear combination of the Θ j i , O X9 and T k . All these curves are defined over k R9,τ9 by Corollary 6.7. In the case of fibration 11, we introduce another curve, M. Since its class is written as a linear combination of the classes generating NS(X 9 ), it is defined over k R9,τ9 . Since it is a negative effective class, we deduce that it is supported either on an irreducible rational curve or on a union of rational curves. Since it is a component of a fiber of a certain fibration, at least over the algebraic closure of the field of definition of that fibration it is an irreducible curve. Hence M is an irreducible smooth rational curve defined over k R9,τ9 . We give an example for each fibration in Table 1. We choose a section for each of them to be the zero section and we determine their type with respect to τ 9 . Using Proposition 6.5 we describe the properties of the fields k η,MW , which follow from the previous corollary. The results are listed in Table 2. Table 2. Types of the different elliptic fibrations of X 9 and fields of definition. 7. The surfaces R 4 , R 3 , R 2 and the surfaces X 4 , X 3 , X 2 . In this section we carry out an analogous study for the extremal rational surfaces R i , for i = 4, 3, 2. We classify all the possible fibrations on the K3 surfaces X i and determine their types with respect to the cover involutions τ i , for i = 4, 3, 2. Let R 4 be an extremal rational elliptic surface with one reducible fiber of type I * 4 .
Its Mordell-Weil group is Z/2Z = {O, t 1 }, where O is the zero section and t 1 is a 2-torsion section. Recall that a fiber of type I * 4 is given by 9 smooth rational curves meeting with dual graphD 8 , see [9,Table I.4.1]. The Néron-Severi group of R 4 contains also the classes of the irreducible components of the reducible fiber, denoted by C 0 , C 1 , . . . , C 8 . The intersections which are not trivial are the following: Let R 3 be an extremal rational elliptic surface over k with one reducible fiber of type III * . As R 3 is extremal, there is another reducible fiber which is either an I 2 or an III. Its Mordell-Weil group is Z/2Z = {O, t 1 }, where O is the zero section and t 1 is a 2-torsion section. Recall that a fiber of type III * is given by 8 smooth rational curves meeting with dual graphẼ 7 , see [9, Table I Let R 2 be an extremal rational elliptic surface over k with one reducible fiber of type II * . The other singular fibers are either II or 2I 1 . Its Mordell-Weil group is {O}, i.e., it is trivial. Recall that a fiber of type II * is given by 9 smooth rational curves meeting with dual graphẼ 8 , see [9, Table I The following result shows that the surfaces R i have trivial Galois group G Ri , that is its Néron-Severi group admits a set of generators over k given by the zero section, a smooth fiber and the non-trivial fiber components of the reducible fibers. It also presents their contractions of negative curves to minimal k-rational surfaces. Proposition 7.1. Let R be on the following surfaces: R 2 , R 3 , R 4 . Then G R is trivial. Moreover, the surfaces R 2 , R 3 and R 4 can be contracted to P 2 ; the surfaces R 3 and R 4 can be also contracted to P 1 × P 1 and the surfaces R 2 and R 3 can be also contracted to F 2 , the Hirzebruch surface with a unique (−2)-curve. Proof. The proof is similar to the one of Proposition 6.1. Indeed, for R = R 2 or R 3 , each g ∈ G R , g(O) = O and if MW = {O, t 1 }, g(t 1 ) has to be a section different from O and hence g(t 1 ) = t 1 . Thus for each R i , i = 2, 3, the sections are preserved and this implies, arguing via the intersection of the components of the reducible fibers as in Proposition 6.1, that all the components of the unique reducible fibers are fixed. Let us consider the surface of type R 3 . We have three different possibilities, to obtain three different surfaces: • Let us contract the sections O and t 1 . Then we contract the images of C 0 and C 6 (which are now (−1)-curves); the images of C 1 and C 5 ; the images of C 2 and C 4 . There remain the images of C 3 , which is a curve with self-intersection 0, and of C 7 , which is a curve with self-intersection −2. There are no (−1)-curves on this surface, so we obtain a minimal rational surface, with two independent classes in the Néron-Severi group which have self-intersection 0 and −2. Hence we obtained F 2 • Let us contract first the section O and then (in this order), the images of the components C 0 , C 1 , C 2 , C 3 . Now the image of C 7 is a (−1)-curve. We contract it. It remains a unique (−1)-curve, which is the section t 1 . We contract it and then (in this order) the images of the components C 6 and C 5 . We obtain a minimal rational surface whose Néron-Severi group is generated by one class (we contracted 9 curves), which is the image of C 4 . This rational surface is necessarily P 2 . • Let us contract first the section O and then (in this order), the images of the components C 0 , C 1 , C 2 , C 3 . Now the image of C 4 is a (−1)-curve. We contract it. 
Then we contract t 1 and the image of the component C 5 . We obtain a minimal rational surface, whose Néron-Severi group is generated by the two classes which are the images of C 7 and C 5 . Their self-intersection is 0 and they meet in a point, so we obtained P 1 × P 1 . Let us now consider the surface R 2 (see Figure 3). There is a unique (−1)-curve, the section O. So we contract it, and than we contract (in this order) the images of the components C 0 , C 1 , C 2 , C 3 , C 4 , C 5 . Now both the images of C 6 and C 8 are (−1)-curves and they meet in a point. • If one contracts the image of C 8 , one obtains a minimal surface, whose generators of the Néron-Severi group are the images of C 7 and C 6 and this surface is F 2 (because of the presence of a (−2)-curve, image of C 7 ). • If one contracts the image of C 6 , then one has to contract the image of C 7 and one obtains a minimal rational surface, whose Néron-Severi group has one generator (the image of C 8 ) and thus the surface is P 2 . Let us consider the surface of type R 4 . We contract first the section O and then (in this order) the images of the components C 0 , C 2 , C 3 , C 4 , C 5 , C 6 . Now we have three (−1) curves, i.e. the images of C 7 , C 8 and t 1 . The image of C 8 meets both the images of C 7 and of t 1 : if one contracts the image of C 8 , one obtains the minimal surface P 1 × P 1 ; if one contracts the images of t 1 and C 7 one obtains P 2 . Let X i be a K3 surface obtained by a generic base change of order 2 on the rational elliptic surface R i for i = 4, 3, 2 as in Section 2. Let P i and Q i be the points corresponding to the branch fibers of the cover X i → R i . We have the following result, analogous to Proposition 6.5. Proposition 7.2. The Galois group G E X i of the elliptic fibration E Xi : X i → P 1 is contained in (Z/2Z). It is trivial if and only if the points P i and Q i are defined over the ground field. Proof. The group G Ri is trivial by Proposition 7.1, so the unique Galois action is the one of the cover involution τ i , which is trivial if and only if the branch fibers are defined over the ground field. The elliptic fibrations E Xi , i = 4, 3, 2, are induced by E Ri . We fix the following notation: each component C l (resp. D l ) of a reducible fiber of E Ri corresponds to two curves Θ j l (resp. Φ j l ), j = 1, 2 on X i which are components of two different reducible fibers on X i . Moreover the zero section of E Ri induces the zero section, O Xi , of E Xi and, if there is a torsion section t 1 on R i , it induces a torsion section T 1 on X i . So we have the following curves on X i : Θ j l j = 1, 2; O Xi ; T 1 if i = 2; Φ j l j = 1, 2, l = 0, 1 if i = 3. Denote by π i : X i → R i the double cover of R i induced by the base change and by τ i the cover involution. We have Figures 4, 5, and 6 summarize the above. Note that in Figure 5, Φ 1 2 and Φ 2 2 are both connected to T 1 . Proposition 7.3. The Néron-Severi group of X i has rank 18, signature (1,17), for every i = 2, 3, 4. Figure 4. Reducible fibers and sections of the fibration E X4 on X 4 Figure 5. Reducible fibers and sections of the fibration E X3 on X 3 Figure 6. Reducible fibers and sections of the fibration E X2 on X 2 Both lattices NS(X 4 ) and of NS(X 3 ) are isometric to U ⊕ D 8 ⊕ E 8 and their transcendental lattices are both isometric to U ⊕ U (2), which has the same discriminant group and form as D 8 . In particular X 3 and X 4 lie in the same family of K3 surfaces, namely the family of U ⊕ D 8 ⊕ E 8 -polarized K3 surfaces. 
The lattice NS(X 2 ) is isometric to U ⊕ E 8 ⊕ E 8 and its transcendental lattice is isometric to U ⊕ U , which has the same discriminant form of E 8 . Proof. The curves in the Figures 4, 5, and 6 (i.e. the curves Θ j l , O Xi , T 1 if i = 2 and Φ j l if i = 3) generate NS(X i ). They are not all linearly independent, but once one extract a basis, one obtains 18 independent generators of NS(X i ). Since one knows all the intersection properties of these generators, one can explicitly compute their intersection matrix. This identifies the lattice NS(X i ) and in particular its discriminant group and form. We observe that all the lattices that appear are 2-elementary, i.e., the discriminant group is (Z/2Z) a , a ∈ N. So the transcendental lattice is a 2-elementary lattice with signature (2, 2). The indefinite 2-elementary lattices are completely determined by their length, i.e., by a, and by another invariant, often denoted by δ, which is zero in all the cases considered. This allows us to identify the transcendental lattices. 7.3. Classification of all the possible fibrations on the K3 surfaces X 4 , X 3 , and X 2 . In the same way as we did for X 9 in Section 6.3, we classify elliptic fibrations on the surfaces X 4 X 3 and X 2 in what follows. By Proposition 7.3 we take T = D 8 for X 3 X 4 , and T = E 8 for X 2 and apply Nishiyama's method. By [14, Lemmas 4.1 and 4.3] we know that D 8 only embeds primitively in D n for n ≥ 8, and E 8 only embeds primitively in E 8 . The orthogonal complements of these embeddings in the 24 Niemeier lattices are then found in [14,Corollary 4.4]. Those results are summarized in tables 3 and 4. We notice that the fibrations on X 2 , X 3 and X 4 were already classified in [6, Z/2Z Table 4. Elliptic fibrations of X 2 7.4. Determining the type of each fibration of X 4 , X 3 , and X 2 . As in Section 6.4 we determine the type of each fibration obtained in Section 7.3 (Tables 3 and 4) with respect to the cover involutions τ i , for i = 4, 3, 2. We determine moreover the sections and their fields of definition. This study allows us to obtain an upper bound for the degree over k of a field of definition k η of a given fibration η, and an upper bound for the degree over k of a field of definition k η,MW of a set of generators of the Mordell-Weil group of the fibration. By Proposition 7.1 we know that the Galois group G Ri is trivial and all the fiber components of R i are defined over k, for i = 4, 3, 2. In order to determine the field of definition of the sections, the only action that is taken into account is the one of the cover involutions τ i , for i = 4, 3, 2. To determine the type of each fibration in Table 3 (resp. Table 4) with respect to τ 4 (resp. τ 3 and τ 2 ), we find a configuration of (parts of the) reducible fibers in terms of the curves in Figure 4 (resp. Figure 5 and Figure 6). The fibration in line 5 (resp. line 2 and line 1) is represented in Figure 4 (resp. Figure 5 and Figure 6). The configurations associated to the fibers in lines 1, 2, 3, 4 and 6 in Table 3 for the K3 surface X 4 are listed below: II * + I * Note that all the reducible fibers listed above only appear once in Table 3 (resp. Table 4), hence we know that they represent the corresponding fibrations in those tables. Therefore, using these configurations, we can determine the type of the corresponding fibration with respect to τ 4 (resp. τ 3 and τ 2 ), and find sections for the corresponding fibration. 
By choosing a 0-section, we determine whether the different sections are fixed by τ 4 (resp. τ 3 and τ 2 ) or not. The results are listed in Table 5 (resp. Table 6 and Table 7). Table 7. Types of the different elliptic fibrations of X 2 with respect to τ 2 and fields of definition
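The explicit Weierstrass data quoted in this paper can be double-checked directly. The following short computation is a minimal sympy sketch (not part of the classification itself) verifying that the two non-trivial 3-torsion sections [12t 2 , ±864, 1] of Example 6.4 indeed satisfy the Weierstrass equation of R 9 given there.

import sympy as sp

t = sp.symbols('t')
x = 12*t**2
# Right-hand side of the Weierstrass equation of Example 6.4:
#   y^2 = x^3 - (432*t^3 + 10368)*x*t + 3456*t^6 + 124416*t^3 + 746496
rhs = x**3 - (432*t**3 + 10368)*x*t + 3456*t**6 + 124416*t**3 + 746496
for y in (864, -864):
    assert sp.expand(rhs - y**2) == 0  # the identity holds for all t
print("the sections [12t^2, +/-864, 1] lie on R_9")

The t^6 and t^3 terms cancel identically, so the check reduces to the numerical identity 864^2 = 746496, confirming that both sections are defined over Q as stated.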
anti-2,2,3,3,6,6,7,7,10,10,11,11,14,14,15,15-Hexadecamethyl-2,3,6,7,10,11,14,15-octasilapentacyclo[10.4.2.24,9.05,8.013,16]icosa-1(17),4,8,12(18),13(16),19-hexaene

The title compound, C28H52Si8, was synthesized by condensation of two molecules of 1,2,3,4-tetrakis(chlorodimethylsilyl)benzene with lithium. The 3,4-disila-1,2-benzocyclobutene rings in the centrosymmetric molecule are bridged by 1,1,2,2-tetramethyldisilanylene chains with an anti conformation. The benzene rings are deformed by fusion with a 3,4-disilacyclobutene ring, resulting in a slight boat conformation. Two Si—C bonds are bent to reduce the steric repulsion between the methyl groups on the two Si atoms and the methyl groups on another two Si atoms.

The condensation of two molecules of 1,2,3,4-tetrakis(chlorodimethylsilyl)benzene with lithium in THF gave 1 in 2% yield (Fig. 1). The structure of 1 was determined by X-ray crystallography (Fig. 2). The molecule lies on an inversion center, and one half of the molecule corresponds to the asymmetric unit. Two 3,4-disila-1,2-benzocyclobutene rings are bridged by 1,1,2,2-tetramethyldisilanylene chains with an anti structure. The anti structure is favorable because it avoids the steric hindrance among the methyl groups on the 3,4-disilacyclobutene rings.

Experimental
All operations except for the Kugelrohr distillation were carried out in a glovebox. A mixture of 1,2,3,4-tetrakis(chlorodimethylsilyl)benzene (0.200 g, 0.446 mmol) and lithium (13.0 mg, 1.87 mmol) in THF (25 ml) was stirred at room temperature for 14 h. After removal of the solvent, the residue was dissolved in toluene, and insoluble materials were filtered off. The solvent was removed under reduced pressure. Kugelrohr distillation (300 °C/0.9 mm Hg) of the residue gave a colorless solid. The solid was recrystallized from hexane to give 1 (3 mg, 2%) as colorless crystals. Single crystals were obtained from hexane by slow evaporation.

Refinement
All hydrogen atoms were generated at calculated positions and refined as riding atoms with C-H = 0.95 (phenyl) or 0.98 (methyl) Å and Uiso(H) = 1.2Ueq (phenyl C) or 1.5Ueq (methyl C). Refinement of F2 against ALL reflections. The weighted R-factor wR and goodness of fit S are based on F2; conventional R-factors R are based on F, with F set to zero for negative F2. The threshold expression F2 > 2σ(F2) is used only for calculating R-factors(gt) etc. and is not relevant to the choice of reflections for refinement. R-factors based on F2 are statistically about twice as large as those based on F, and R-factors based on ALL data will be even larger.

Fractional atomic coordinates and isotropic or equivalent isotropic displacement parameters (Å2)
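The reported 2% yield is consistent with the quantities quoted in the Experimental section; the short calculation below is a rough sketch of that check (atomic masses rounded, and assuming two molecules of the chlorosilane condense into one molecule of 1, as described above).

M = {'C': 12.011, 'H': 1.008, 'Si': 28.086}
mw_product = 28*M['C'] + 52*M['H'] + 8*M['Si']      # C28H52Si8, roughly 613 g/mol
n_start = 0.446e-3                                   # mol of 1,2,3,4-tetrakis(chlorodimethylsilyl)benzene
m_theoretical = (n_start / 2) * mw_product           # two starting molecules per product molecule
yield_percent = 100 * 0.003 / m_theoretical          # 3 mg of 1 isolated
print(round(m_theoretical * 1000), "mg theoretical,", round(yield_percent, 1), "% yield")

This gives a theoretical mass of about 137 mg and a yield of about 2%, matching the value quoted above.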
2018-04-03T00:52:29.352Z
2013-02-06T00:00:00.000
{ "year": 2013, "sha1": "1ab631ee63dc2d5781c093b47dd4215559128557", "oa_license": "CCBY", "oa_url": "http://journals.iucr.org/e/issues/2013/03/00/ds2224/ds2224.pdf", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "1ab631ee63dc2d5781c093b47dd4215559128557", "s2fieldsofstudy": [ "Chemistry" ], "extfieldsofstudy": [ "Chemistry", "Medicine" ] }
237333686
pes2o/s2orc
v3-fos-license
Validation of a Flow Channel to Investigate Velocity Profiles of Friction-Reducing Ship Coatings Reducing friction with specialised hull coatings or air lubrication technologies has the potential to reduce energy consumption and emissions in shipping. The EU project AIRCOAT combines both by developing a passive air lubrication technology inspired by nature that is implemented on a self-adhesive foil system. Besides validating the friction reduction it is of high interest to understand the underlying mechanism that causes the reduction. Therefore, a flow channel was designed that creates a stationary turbulent flow within a square duct allowing for non-invasive measurements by laser Doppler velocimetry. The high spatial resolution of the laser device makes recording velocity profiles within the boundary layer down to the viscous sublayer possible. Determination of the wall shear stress τ enables direct comparison of different friction reduction experiments. In this paper we validate the methodology by determining the velocity profile of the flat channel wall (without coatings). We further use the results to validate a CFD model created in OpenFOAM. We find that velocities along the longitudinal axis are generally in good agreement between numerical and experimental investigations. INTRODUCTION Friction is a critical factor when it comes to fuel consumption of ships. The lower the friction, the lower the energy demand. Therefore, the EU project AIRCOAT 1 aims at reducing hull friction to a minimum. In water ferns, the Salvinia effect of a micro- and nanostructured surface with hydrophobic and hydrophilic characteristics allows for air retention under water [1], while the air spring effect contributes to the air layer stability [6]. Inspired by the Salvinia effect, the AIRCOAT project intends to develop a biomimetic passive air layer technology. 1 The AIRCOAT project has received funding from the European Union's Horizon 2020 research and innovation programme under grant agreement N°764553. With a passive air layer covering the hull, the contact area between water and ship is decreased significantly, which reduces fuel consumption, carbon dioxide emissions as well as acoustic emissions [11,12]. Water flowing along a solid wall is subject to the no-slip condition with a boundary layer where the fluid velocity increases normal to the surface from zero velocity to freestream velocity [17]. But if the fluid flows along an otherwise liquid or gaseous interface, the conditions are expected to differ, which results in friction reduction [10]. To better understand the underlying mechanism of the friction reduction over the hydrophobic AIRCOAT surface, Fraunhofer CML (CML) built a flow channel that creates a stationary turbulent flow driven by hydrostatic pressure. Water is sent in a circuit and the test section is a square duct of 40 mm edge length. Made from acrylic glass, it allows for non-invasive measurement by a laser Doppler velocimeter (LDV). The device allows resolving the boundary layer and evaluating the wall shear stress in the boundary layer, which is important when it comes to the assessment of wall friction [4]. This study will explain the design and methodology of the flow channel built to perform measurements in a fully developed turbulent flow, the experimental setup and measurements, performed with a non-intrusive LDV device. To further validate the experimental data a comparison with literature is presented. This validation of flow conditions in the test section and measurement technique acts as a prerequisite to perform future measurements with the novel air retaining surface to measure the slip velocities over a hydrophobic wall. In order to better understand the underlying principle of drag reduction, the study further presents a validation of a Computational Fluid Dynamics (CFD) model of the channel. The motivation behind the numerical investigations is cross-checking of the experiments as well as creating a reference for modelling and analysing novel coatings through separately developed wall functions in a Reynolds Averaged Navier Stokes (RANS) simulation. Channel Setup The designed flow channel follows a simple and low-cost but effective approach of achieving a fully turbulent flow as well as a fully developed flow profile over the channel height and width. The flow channel setup resembles the setup of [15] and is a widely adopted setup for this kind of measurement. The three main features to achieve a fully developed and undisturbed (as much as possible) flow for the presented setup are: 1) a flow induced by hydrostatic pressure with an inflow tank based on an overflow principle to minimise any influence of the pump, which fills the inflow tank, 2) a nozzle in front of the test section specifically designed to minimise flow separation and 3) a test section with a length of 3000 mm and a square cross section of 40.00 mm height and width, resulting in a length to height ratio of 75. The flow is driven by a constant water column, which is depicted in the general flow tank setup as a vertical pipe in Fig. 2. A constant water column is guaranteed by the design of the inflow tank, separated into three compartments, with the vertical pipe attached to the centre compartment. Water is pumped into the left compartment, which will spill any excess water into the centre compartment. When the centre compartment is filled, excess water flows into the right compartment. This last compartment is connected to the basin from which the pump is circulating the water back to the inflow tank, i.e. the first compartment. 
As long as more water is pumped into the elevated inflow tank than is flowing out of the pipes connecting duct and inflow tank, the water column is constant and producing a stationary flow within the test section. With the flow established, the water is directed into the test section, which is accomplished by using a specifically designed nozzle. Studies have shown that specific geometries reduce the flow separation in nozzles with the best results originating from applying a polynomial of fifth order [18]. This approach has been adapted for the cross-section changing from a circular inflow into the nozzle to a square shaped outflow from the nozzle into the test section. By using additive manufacturing the complex nozzle geometry, depicted in Fig. 2, has been printed and adapted in several iterations to satisfy criteria such as mechanical stability and water tightness. From the nozzle, the flow enters the 3 m long test section where according to [16] fully developed turbulent flow can be expected after passing 70D. The test section is made of acrylic glass and features a removable cover, see Fig. 1. This removable cover allows access over the whole length of the test section. With this key feature it is possible to apply different materials within the duct. In that manner different materials, e.g. the aforementioned foil with hydrophobic properties that can be utilised as ship coating, will be tested regarding friction reduction. The flat and transparent channel walls enable LDV measurements, which is also the main reason for the square cross section. A circular pipe would provide a more preferable environment from a hydrodynamic point of view but would interfere with the laser and cause unwanted refraction and reflections, which again leads to erroneous measurements. Being transparent over the full length there are no limitations to locating the LDV along the flow channel. LDV Measurements With a LDV device it is possible to measure velocity without intruding the flow. The device is mounted perpendicular to the flow channel and the main flow direction and the measurement volume (MV) is oriented normal to the channel wall. The details of LDV principle are well known and described in detail, e.g. in [4]. Within this study the principle is only summarised to explain the basics and why it is a preferable technique for boundary layer assessment based on [2]. Figure 2. Schematic overview of the flow channel with its main components: inflow tank, vertical pipe to build hydrostatic pressure, L-pipe to direct the flow towards the duct, nozzle with changing cross-section as proposed by [18], test section, small nozzle and valve for flow speed control, outflow tank to regulate water column behind the test section, basin with excess water to feed pump and collect excess water from inflow tank. Applying the two Doppler frequencies fD,1 and fD,2, the velocity of the crossing particle can be determined by Eq. 2. The utilised device has laser beams of wavelength λ1 = 532 nm and λ2 = 561 nm meeting under an angle of 16 degrees. Other than measurements with a conventional LDV, which obtains velocity values for the entire MV, a Profile Sensor allows a higher spatial resolution and the velocity as well as the position of a particle crossing the MV can be obtained. The position, z, of the tracer particle can be determined by the frequency quotient and the calibration function,  , which is provided by the manufacturer [7]. With a known zposition the actual fringe spacing and velocity can be derived [3]. 
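To make the profile-sensor relations sketched above more concrete, the Python fragment below shows, under stated assumptions, how a particle velocity and position could be derived from the two Doppler frequencies. The calibration function φ is supplied by the device manufacturer; the linear placeholder phi_placeholder, the example frequencies, and the use of only the first beam pair for the velocity are illustrative assumptions, not values from the paper.

```python
import numpy as np

# Hypothetical illustration of the LDV profile-sensor relations described above.
# The real calibration function phi(q) is supplied by the manufacturer; a simple
# linear placeholder is used here purely to show the order of operations.

LAMBDA_1 = 532e-9          # wavelength of beam pair 1 [m]
LAMBDA_2 = 561e-9          # wavelength of beam pair 2 [m]
THETA = np.deg2rad(16.0)   # full crossing angle of the beams

def fringe_spacing(wavelength, full_angle):
    """Fringe spacing d = lambda / (2 sin(theta/2)) of a crossing beam pair."""
    return wavelength / (2.0 * np.sin(full_angle / 2.0))

def phi_placeholder(q):
    """Placeholder calibration function mapping the frequency quotient
    q = f_D1 / f_D2 to a position z inside the measurement volume
    (assumed linear here, purely illustrative)."""
    return 1e-3 * (q - 1.0)   # [m]

def particle_velocity_and_position(f_d1, f_d2):
    """Estimate particle position from the frequency quotient, then velocity
    from the Doppler frequency and the fringe spacing (simplified: the global
    fringe spacing of beam pair 1 is used instead of the local one)."""
    q = f_d1 / f_d2
    z = phi_placeholder(q)
    d1 = fringe_spacing(LAMBDA_1, THETA)
    u = f_d1 * d1             # velocity component normal to the fringes [m/s]
    return u, z

if __name__ == "__main__":
    u, z = particle_velocity_and_position(f_d1=150e3, f_d2=148e3)
    print(f"velocity ~ {u:.3f} m/s at relative position {z * 1e6:.1f} um")
```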
For measurements close to wall -as it is intended in our work to analyse the boundary layer flow -the Profile Sensor is of advantage. The LDV Profile Sensor by ILA R& D [8] offers a spatial resolution up to 1% of the measurement volume length. With a determination of particle positions within the MV the boundary layer, with its linear velocity gradient, can be observed. An LDV device allows only for a quasi-point measurement of flow velocity. Therefore, to record a velocity pattern across the channel height or width multiple measurements need to be performed. After a quasi-point measurement is completed the MV is moved to different location by mechanical traversing and new measurement is started. Afterwards the recorded data from one MV is stitched together to present a flow pattern or near-wall velocity profile. . Fringe System of two crossing laser beams. By the combination of two different wave lengths (green areas: λ1 and λ2), the known fringe distances d along with a calibration function the velocity and the position of a crossing particle can be calculated. At a particle crossing the MV causes a change in fringe distance and two Doppler frequencies. 3 CFD SIMULATIONS In order to quickly transfer experimental results to technical implementation and estimate the performance of surface or hull coatings, e.g. the aforementioned passive air layer, in real world applications, the use of CFD simulations is envisaged. However, to be able to use these tools with confidence, they have to be validated by experimental results. Furthermore, CFD simulations can be used for sanity checks of experimental results. If numerical and experimental results are sufficiently similar it is easier to exclude methodical flaws or systematic measuring errors. Consequently, the internal features of the flow channel described above have been replicated in a Computer Aided Design (CAD) environment to feed into the simulation pipeline. Numerical setup In this study a finite-volume approach is used which utilises the OpenFOAM v1806 package [13]. Only one half of the channel is modelled taking advantage of the symmetry in order to reduce the computational effort. The computational domain is depicted in Fig. 4. The setup at hand utilises the Reynolds-Averaged-NavierStokes-Equations with k-ω-SST turbulence model according to [9] to simulate the effects of turbulent flow while limiting the computational effort. Since the flow is gravity driven and assumed to be steady, the buoyantBoussinesqSimpleFoam [14] solver and second-order accuracy schemes have been selected. Physical constants were defined according to Table 1. Due to unknown turbulence properties of the flow at the inlet, initial turbulence parameters were estimated based on the preliminary study and the assumption of fully developed flow with isotropic turbulence at the inlet as given in Table 2. A turbulence parameter study was not conducted as not to distort the validation through false optimisation of input quantities. Preliminary study A preliminary study has been performed to estimate the flow velocities in different parts of the flow channel, namely the vertical pipe, the L-pipe, the nozzle, the test section, the small nozzle and the outflow tank. For this purpose a simple base grid was developed as follows. The maximum cell size was set to 0.011 m. 
The cell size was halved at the nozzle, which leads into the test section and then halved again for the section stretching from the beginning of the second nozzle through the outflow pipe into the outflow basin. To capture the boundary layer prism cells were applied on the walls. Since the actual local flow velocities were only known for the test section at this stage from the experiments, the thickness of the wall layer was kept constant throughout the domain, which resulted in a variation of y+ values throughout the domain. However, for estimation of the local velocities this seemed to be sufficient. Fig. 4 shows the magnitude of the velocity in the channel's symmetry plane for the base grid configuration. The water enters the inlet at a velocity of about 0.01 m/s and on close inspection flow separation can be spotted on the inside of the sharp 90 degree angle turn inside the L-pipe. The velocity stays approximately constant until it reaches the nozzle before the test section. Moving through the nozzle into the test section, the flow accelerates to close to 0.3 m/s in the centre of the duct. After passing the test section the flow accelerates further in the small nozzle due to the reduction in cross section and exits the outlet pipe into the outlet tank as a distinctive jet. Grid study Based on the results of a preliminary study, a grid study has been performed to study the sensitivity of the results regarding the spatial resolution of the geometry. For this study the maximum cell size was systematically varied by factor of 1.5 to derive one more coarse and two finer grids. The target y + was set to ≤ 1 to accurately model the boundary layer which is of prime importance for the planned application of the channel. This means that the number of boundary layer cells varies between the grids. There were no significant differences found between the different grids with volume fluxes, mean and maximum velocities as well as velocity profiles at the 70D position in the experiment found to be in good agreement. The data presented in the following belongs to the grid with a base size of 0.0103 m, which corresponds to the preliminary mesh (base size = 0.011 m) with a minor adjustment to increase the mesh quality to allow for the changes in the boundary layer mesh. Velocity Patterns of Experiments and Numerical Simulation Velocity patterns are presented for one longitudinal location along the flow channel at 70D, in which D is the hydraulic diameter of the duct. With the current setup and under given circumstances of chosen equipment the Reynolds number is Re = 11.400 in reference to D. The measurements were conducted at one Reynolds number and under the assumption of stationary flow. The normal axis for each pattern is in the centre of the duct for the smallest influence possible by the walls. Presented are velocity patterns across the duct's height, z-pattern, as well as width, ypattern, in Fig. 5. The experimental results are 1D measurements and presented are mean values from four measurements. The different markers indicate the z-pattern and ypattern, respectively. Moreover, an error bar represents the standard deviation for experimental results. The LDV device allows for two parallel measurements at the same time since the two laser beams can record velocities independent from each other, although a measurement event is only valid if the two laser beams detect the same particle. A repetition of that process is performed afterwards. 
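As a quick plausibility check of the quoted Reynolds number (Re ≈ 11,400 with respect to the hydraulic diameter D), the snippet below evaluates Re = u·D/ν; the bulk velocity and viscosity are assumed values chosen only to illustrate the relation, not data from the experiment.

```python
# Rough consistency check of the reported Reynolds number Re ~ 11,400.
# The bulk velocity and water viscosity below are assumed values, not taken
# from the paper; they merely illustrate the relation Re = u * D / nu.

D = 0.040          # hydraulic diameter of the square duct [m]
nu = 1.0e-6        # kinematic viscosity of water at ~20 degC [m^2/s] (assumed)
u_bulk = 0.285     # assumed bulk velocity [m/s], below the ~0.3 m/s centreline value

Re = u_bulk * D / nu
print(f"Re = {Re:.0f}")   # ~11400, same order as the value reported above
```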
CFD results are presented by dashed and dotted lines, respectively. From Fig. 5 can be seen that the velocity is symmetric at the respective centre line over the height as well as width, which implies a full turbulent flow. Furthermore, a good agreement with CFD results is clearly visible. Especially the centre line flow shows good agreement for z-pattern as well as y-pattern. Mean velocities are similar to CFD results and the standard deviation for the mean values is small. The measurements closer to the respective channel wall show that turbulence increases and uncertainties of the measurement increase as well. This becomes also visible from the comparison of selected values in experimental data close to the wall. With the different velocities for the points closest to the wall for zpattern and y-pattern it becomes visible that the experimental data is highly depending on a thorough set-up and fussy alignment of laser device and flow channel. Figure 6. Velocity profile close to the wall with a linear fit (blue line) to determine wall shear stress. The fit curve is forced through zero to imply zero velocity at the wall. Contrary to CFD results, experimental results do not show zero velocity, since a measurement close to the wall is time consuming and requires high effort. Still, to get an idea of flow conditions a rather coarse measuring grid is sufficient. The comparison shows that the flow at 70D is fully turbulent and the respective pattern is nearly symmetric. Furthermore, the y-pattern differs from the z-pattern due to the orientation of the MV that hasn't been changed during the experiments. A measurement to assess the velocities close to the wall, that allow the determination of wall shear stress, is presented in 4.2. Near Wall Measurements A near wall measurement is performed to estimate the possibilities of boundary layer investigations and determination of wall shear stress, τ, within the current channel construction and setup. The MV is located so close to the wall that half the MV is inside the channel and the other half disappearing in the channel wall. After the successful measurement of 10000 particles crossing the MV, the MV is moved step wise away from the wall. Subsequently all recorded MVs are stitched together to build a nearwall flow profile over half the duct (δ = 20 mm). Fig. 6 depicts the near linear increase of velocity between 200 and 800 micrometre wall distance. A linear fit curve is added to distinguish the intended area. For velocity values further away from the wall, a nonlinear gradient becomes visible. Also, clearly visible are the limits of the LDV device, which allows no detection of particles slower than 0.025 m/s (see Sec. 5 for further discussion). Therefore, the linear fit curve is forced to go through zero, since a point directly at the solid wall has zero velocity. The slope of the straight line, or the gradient of the velocity profile near the wall, is assumed to represent the near wall shear stresses from the channel wall [17]. From the relation in Eq. (4), τ can be calculated and can serve as a variable for further boundary layer evaluations with η being the dynamic viscosity of water and ∂u/∂y the local shear velocity [16]. The determination of τ is highly depending on water carefully recorded velocity and a constant temperature, since small deviations have a strong impact on the result. Fig. 6 indicates room for improvement. Ideally all grey dots would match the blue line. 
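The wall-shear-stress evaluation described around Fig. 6 can be sketched as follows; the velocity samples are synthetic and the fluid properties are assumed room-temperature values, so the numbers only illustrate the procedure (zero-intercept linear fit, τ = μ·du/dy, then uτ, y+ and u+ for comparison with the log law).

```python
import numpy as np

# A minimal sketch of the wall-shear-stress evaluation: fit a straight line
# through the origin to near-wall velocities (viscous sublayer), take
# tau = mu * du/dy, and derive u_tau, y+ and u+ for comparison with the log law.
# The velocity samples below are synthetic, not measured data.

mu = 1.0e-3    # dynamic viscosity of water [Pa s] (assumed, ~20 degC)
rho = 998.0    # density of water [kg/m^3] (assumed)

# synthetic near-wall samples: wall distance [m] and streamwise velocity [m/s]
y = np.array([200e-6, 400e-6, 600e-6, 800e-6])
u = np.array([0.030, 0.061, 0.089, 0.121])

# least-squares slope of a line forced through zero: u = (du/dy) * y
dudy = np.sum(u * y) / np.sum(y * y)
tau_w = mu * dudy                 # wall shear stress [Pa]
u_tau = np.sqrt(tau_w / rho)      # friction velocity [m/s]
print(f"du/dy = {dudy:.1f} 1/s, tau_w = {tau_w:.3f} Pa, u_tau = {u_tau:.4f} m/s")

# dimensionless profile and log-law reference u+ = (1/kappa) ln(y+) + C+
nu = mu / rho
y_plus = y * u_tau / nu
u_plus = u / u_tau
kappa, C = 0.41, 5.2
u_plus_log = np.log(y_plus) / kappa + C
print(np.round(y_plus, 1), np.round(u_plus, 1), np.round(u_plus_log, 1))
```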
The velocity profile across half the channel duct recorded with a spatial resolution in the near-wall area is presented in Fig. 7, in a semi-logarithmic plot with dimensionless values u + over y + (with u + = u/uτ and y + = (y • uτ )/ν). From this depiction the non-zero velocity from experimental data becomes visible as well. The blue line represents the area of the linear increase and the green line the log law based on u + = 1/κ * ln(y + ) + C + , with κ ≈ 0.41 and C + ≈ 5.2. The logarithmic part of the profile is represented better than the linear portion. This again points out the need for high profoundness in terms of near-wall experiments. Other than that a general agreement between experimental data, theoretical values and literature data, approves the measurement concept and the procedure to determine τ. Although, leaps and irregular values for experimental data implies partly erroneous measurements. This can be ascribed to the stitching process of the MVs and the recording on different days. Moreover, the determination of wall shear stresses is prone to changes in temperature. This is not considered sufficiently within the selected approach. Nonetheless, the linear part is distinguishable from logarithmic area and therefore determination of slip velocity seems realisable. Results from own numerical calculations are not presented due to the fact that the selected RANS approach uses wall functions. These wall functions are boundary conditions which presume predefined turbulence parameter and velocity profiles normal to the walls, which are derived from the law of the wall. Hence, the compliance to the law of the wall is inherent. Figure 7. Velocity profile across half the channel duct in comparison with theoretical values and CFD data from [19]. The green line crosses exp. data in the logarithmic layer, whereas the blue line crosses exp. data in the viscous sublayer. CONCLUSION AND OUTLOOK The general methodology and the proof-of-concept of how to assess the friction reducing capabilities of an air retaining surface were presented. The fundamental principle of the passive air lubrication AIRCOAT surface is to mimic the air retaining properties of the Salvinia fern on a foil system, which has both hydrophilic as well as hydrophobic characteristics. One goal of the AIRCOAT project is to prove the friction reducing properties of such an artificial foil in order to validate its potential as a sustainable future ship hull coating. This study serves as the prerequisite for the long term goal to identify and investigate the slip velocity active over a passive air layer. Here, the methodology was validated in a controlled steady environment by investigating a flat surface. The construction of a flow channel driven by a constant water column, the measurement with a LDV Profile Sensor and the implementation of corresponding CFD simulations were reported. The flow channel follows a low-coast approach with the goal of a fully turbulent flow. One Re number was chosen to compare physical experimental results to a CFD simulation. A comparison of experimental data and CFD results of vertical and horizontal flow patterns showed close resemblance across the square cross-section. Measurements close to the wall showed the advantage of the LDV Profile Sensor that yielded high resolution measurements within the boundary layer and the near-wall area. The linear increase in velocity, i.e. the local shear velocity, was identified and the mean wall shear stress τ determined. 
The performed experiments concluded that well controlled flow conditions and a thorough experimental setup are utterly important to use the full capability of the high spatial resolution achievable with a Profile Sensor, e.g. sturdy construction or temperature monitored fluids. For future measurements a Profile Sensor with carrier-frequency technique to better identify particles with a near zero velocity is preferable [5]. Such a sensor has the advantage of enabling the determination of flow speeds close to the wall reliably and thus leads to an improved the determination of wall shear stresses. In future experiments the presented methodologyand validated for flat surface with no air layer -will be used to assess turbulent flow above a structured surface with air layer -to identify the velocity profile and the slip velocity. Furthermore, comparing the measured wall shear stress, τ, of flat and structured surface can give a drag reducing capabilities of passive air retaining surfaces. Introducing air into the system will bring new challenges. The phase flow regime with the air layer under water will introduce reflection of the laser beam due to different refraction indices of water and air. A reflecting surface is detrimental in achieving a strong LDV measurement signal. Nonetheless, a comparison of these future measurements with the herein presented methodology and reference measurements allow to achieve a valuable contribution in assessing and validating biologically inspired friction reducing ship coatings or surfaces. The flow channel will further enable the development and validation of custom wall functions for RANS CFD simulations that can subsequently be used to extrapolate the effect of novel coatings such as AIRCOAT to large scales such as ships or the inner walls of pipes and tubes. ACKNOWLEDGEMENT The study was performed as part of the AIRCOAT project. The AIRCOAT project has received funding from the European Union's Horizon 2020 research and innovation programme under grant agreement N° 764553. Special thanks goes to Prof. Dr.-Ing. Michael Schlüter and his team from the Institute of Multiphase Flows of the Hamburg University of Technology for giving advise during flow channel design and for lending the LDA Profile Sensor. Furthermore, we thank Prof.(i.R.) Dr. Wolfgang Mackens and his team from the "DLR School Lab" of the Hamburg University of Technology for laboratory access.
2021-08-27T17:21:55.768Z
2021-01-01T00:00:00.000
{ "year": 2021, "sha1": "803317437927edcaccbfb2d5c3fb973f5bb3377a", "oa_license": "CCBYNC", "oa_url": "https://doi.org/10.12716/1001.15.01.24", "oa_status": "GOLD", "pdf_src": "MergedPDFExtraction", "pdf_hash": "ce0b470695f66ba7f51a56a9c49612d7ee0163ea", "s2fieldsofstudy": [ "Engineering", "Environmental Science", "Materials Science" ], "extfieldsofstudy": [] }
247446769
pes2o/s2orc
v3-fos-license
Pruned Graph Neural Network for Short Story Ordering Text coherence is a fundamental problem in natural language generation and understanding. Organizing sentences into an order that maximizes coherence is known as sentence ordering. This paper proposes a new approach based on graph neural networks to encode a set of sentences and learn orderings of short stories. We propose a new method for constructing sentence-entity graphs of short stories to create the edges between sentences and reduce noise in our graph by replacing the pronouns with their referring entities. We improve the sentence ordering by introducing an aggregation method based on majority voting of state-of-the-art methods and our proposed one. Our approach employs a BERT-based model to learn semantic representations of the sentences. The results demonstrate that the proposed method significantly outperforms existing baselines on a corpus of short stories with a new state-of-the-art performance in terms of Perfect Match Ratio (PMR) and Kendall's Tau (Tau) metrics. More precisely, our method increases PMR and Tau criteria by more than 5% and 4.3%, respectively. These outcomes highlight the benefit of forming the edges between sentences based on their cosine similarity. We also observe that replacing pronouns with their referring entities effectively encodes sentences in sentence-entity graphs. In early studies, researchers modeled sentence structure using hand-crafted linguistic features (Lapata, 2003; Barzilay and Lee, 2004; Elsner et al., 2007; Barzilay and Lapata, 2008); nonetheless, these features are domain-specific. Therefore, recent studies employed deep learning techniques to solve sentence ordering tasks (Golestani et al., 2021; Logeswaran et al., 2018; Gong et al., 2016; Li and Jurafsky, 2017). Cui et al. (2018) used graph neural networks, called ATTOrderNet, to accomplish this task. They used a self-attention mechanism combined with LSTMs to encode input sentences. Their method could get a reliable representation of the set of sentences regardless of their input order. From this representation, an ordered sequence is generated using a pointer network. Since ATTOrderNet is based on fully connected graph representations, it builds associations among some irrelevant sentences, which introduces massive noise into the network. Furthermore, since a self-attention mechanism only uses information at the sentence level, other potentially helpful information such as entity information is missed. To overcome these drawbacks, Yin et al. (2019) developed the Sentence-Entity Graph (SE-Graph), which adds entities to the graph. While in ATTOrderNet every node is a sentence representation, SE-Graph consists of two types of nodes: sentence and entity (where an entity node must be shared by at least two sentences). Moreover, the edges come in two forms called SS and SE: • SS: this edge is between sentence nodes that share a common entity, • SE: this edge connects a sentence and an entity within the sentence labeled with the entity's role. However, the introduced methods perform poorly for short story reordering tasks. The SE-Graph solution seems effective for long texts but not for short stories. In this paper, we suggest modifications to the introduced graph methods to improve the performance for short story reordering tasks. Some issues arise in short stories: First, entities are often not repeated in multiple sentences in a short story or text; instead, pronouns refer to an entity. 
To address this problem, we improve the semantic graph by replacing the pronouns with their corresponding entities. Another issue is a high correlation between the sentences in a short story along with a high commonality of entities across sentences, which leads us to end up with a complete graph in most cases. Our solution is to move towards a Pruned Graph (PG). As with the ATTOrderNet and SE-Graph networks, the PG architecture consists of three components: 1. a sentence encoder based on the SBERT-WK model (Wang and Kuo, 2020), 2. a graph-based neural story encoder, and 3. a pointer network-based decoder. With PG, the network nodes and SE edges are created based on SE-Graph, so the first and third components are the same. However, in the story encoding phase, after generating the nodes and SE edges based on the SE-Graph method, the pruning phase is started on SS edges. This pruning process is defined as follows: each sentence keeps edges only to the neighbors with the first and second highest cosine similarities (Rahutomo et al., 2012). This method alleviates some problems of the previous two methods in the case of organizing short story sentences. It is noteworthy that pronouns are replaced with entities during pre-processing. Finally, we present a method based on majority voting to combine our proposed graph network-based method with state-of-the-art methods to benefit from each. Contributions of this study are as follows: 1. Proposing a new method based on graph networks to order sentences of a short stories corpus by: (a) suggesting a new method for creating the edges between sentences, (b) creating a better sentence-entity graph for short stories by replacing pronouns in sentences with entities, (c) taking advantage of a BERT-based sentence encoder. 2. Using majority voting to combine sentence ordering methods. 2 Related Work Sentence Ordering In early studies on sentence ordering, the structure of the document is modeled using hand-crafted linguistic features (Lapata, 2003; Barzilay and Lee, 2004; Elsner et al., 2007; Barzilay and Lapata, 2008). Lapata (2003) encoded sentences as vectors of linguistic features and used data to train a probabilistic transition model. Barzilay and Lee (2004) developed the content model, in which topics in a specific domain are represented as states in an HMM. Others, like Barzilay and Lapata (2008), utilize the entity-based approach, which captures local coherence by modeling patterns of entity distributions. Other approaches used a combination of the entity grid and the content model (Elsner et al., 2007) or employed syntactic features (Louis and Nenkova, 2012) in order to improve the model. However, linguistic features are incredibly domain-specific, so applying these methods across different domains can decrease the performance. To overcome this limitation, recent works have used deep learning-based approaches. Li and Hovy (2014) propose a neural model of the distribution of sentence representations based on recurrent neural networks. In (Li and Jurafsky, 2017), graph-based neural models are used to generate a domain-independent neural model. Agrawal et al. (2016) introduced a method that involves combining two points elicited from the unary and pairwise model of sentences. Another study used an LSTM encoder and beam search to construct a pairwise model. Based on a pointer network that provides advantages in capturing global coherence, Gong et al. (2016) developed an end-to-end approach that predicts the order of sentences. 
In another work, by applying an encoder-decoder architecture based on LSTMs and attention mechanisms, Logeswaran et al. (2018) suggested a pairwise model and established the gold order by beam search. In (Pour et al., 2020), we presented a method that does not require any training corpus due to not having a training phase. We also developed a framework based on a sentence-level language model to solve the sentence ordering problem in (Golestani et al., 2021). Moreover, in several other studies, including (Cui et al., 2018) and (Yin et al., 2019), graph neural networks are used to accomplish this task, as explained in the following. In particular, text classification is a common application of GNNs in natural language processing. A GNN infers document labels based on the relationships among documents or words (Hamilton et al., 2017). Christensen et al. (2013) used a GNN in multi-document summarization. They create multi-document graphs which determine pairwise ordering constraints of sentences based on the discourse relationship between them. Kipf and Welling (2017) proposed Graph Convolutional Networks (GCN), which is used in Yasunaga et al. (2017) to generate sentence relation graphs. The final sentence embeddings indicate the graph representation and are utilized as inputs to achieve satisfactory results on multi-document summarization. Another method is presented in Marcheggiani and Titov (2017) where a syntactic GCN is developed with a CNN/RNN as sentence encoder. The GCN indicates syntactic relations between words in a sentence. In a more recent work, Yin et al. (2019) proposed a graph-based neural network for sentence ordering, in which paragraphs are modeled as graphs where sentences and entities are the nodes. The method showed improvement in evaluation metrics for sentence ordering task. In this work, we explore the use of GRN for NLP tasks, es-pecially to perform sentence-ordering on a corpus of short stories. Baselines This section introduces ATTOrderNet (Cui et al., 2018) and SE-Graph (Yin et al., 2019), which achieve state-of-the-art performances and serve as baseline for our work. ATTOrderNet ATTOrderNet introduced in (Cui et al., 2018) is a model using graph neural networks for sentence ordering. The model includes three components as follows: a sentence encoder based on Bi-LSTM, a paragraph encoder based on self-attention, and a pointer network-based decoder. In the sentence encoder, sentences are translated into distributional representations with a word embedding matrix. Then a sentence-level representation using the Bi-LSTM is learned. An average pooling layer follows multiple self-attention layers in the paragraph encoder. The paragraph encoder computes the attention scores for all pairs of sentences at different positions in the paragraph. Therefore, each sentence node is connected to all others where the encoder exploits latent dependency relations among sentences independent of their input order. Having an input set of sentences, the decoder aims to predict a coherent order, identical to the original order. In this method, LSTM-based pointer networks are used to predict the correct sentence ordering from the final paragraph representation. Based on the sequence-to-sequence model, the pointer network-based decoders predict the correct sentence sequence (Sutskever et al., 2014). Specifically, input tokens are encoded using the pointer network as summary vectors 2 , and the next token vector is decoded repeatedly. 
Finally, the output token sequence is derived from the output token vector. SE-Graph SE-Graph, similarly to ATTOrderNet, consists of three components: 1. a sentence encoder based on Bi-LSTM, 2. a paragraph encoder, 3. a pointer network based decoder Nevertheless, the difference between SE-Graph and ATTOrderNet is only in the encoder paragraph component, described in the following. In contrast to the fully connected graph representations explored by ATTOrderNet, Yin et al. (2019) represented input paragraphs as sentence-entity graphs. The SE-Graph includes two types of nodes: sentence and entity. The entity should be common to at least two sentences to be considered as a node of the graph. There are also two types of edges: SS edges that connect sentence nodes with at least a common entity, and SE edges that link a sentence with an entity within that and with a label of the entity's role. SE edges are labeled based on the syntactic role of the entity in the sentence, such as a subject, an object, or other. When an entity appears multiple times in a sentence with different roles, the role that has the highest rank is considered. The highest rank of roles is the subject role; after that are the object roles. SE-Graph framework utilizes a GRN-based paragraph encoder that integrates the paragraphlevel state along with the sentence-level state. Methodology In this section, first, the problem is formulated, second the dataset is introduced and explains why this dataset is suitable for the sentence ordering task. Then two methodologies are proposed. The first proposed method, called Pruned Graph, is based on graph networks, and the second is based on the majority voting to combine the outputs of three different models. Problem Formulation Consider S(O) is a set of n unordered sentences taken from a coherent text: O = s 1 , s 2 , . . . , s n , The goal of sentence ordering is to find a permutation of sentences of O like S(o ), that corresponds to the gold data arrangement. In other words, sentence ordering aims to restore the original orders: Where S(o * ) represents the original or gold order. As a result a correct output leads to S(o ) = S(o * ). Based on the above definition and notions we propose our sentence ordering method. Dataset In this paper, we used a corpus of short stories, called ROCStories (Mostafazadeh et al., 2016). It contains 98,162 commonsense stories, each with exactly five sentences and an average word count of 50. Mostafazadeh et al. (2017) created ROCStories corpus for a shared task called LSDSem, in which models are supposed to predict the correct ending to short stories. 3,742 of the stories have two options for the final sentence. It is worth noting that humans generated all of the stories and options. We can learn sequences of daily events from this dataset because it contains some essential characteristics: The stories are rich with causal and temporal relations among events, which makes this dataset a highly useful resource for learning narrative structure across a wide range of events. The dataset consists of a comprehensive collection of daily and non-fictional short stories useful for modeling the coherence of a text (Mostafazadeh et al., 2016). Due to these features, ROCStories can be used to learn sequences of sentences. Thus, the corpus is useful for organizing sentences in a text. 
Pruned Graph Sentence Ordering (PG) We propose a neural network based on the pruned graph for arranging the sentences of short stories, a modified version of ATTOrderNet (Cui et al., 2018) and the Sentence-Entity Graph (Yin et al., 2019). The PG method consists of three components: sentence encoder, story encoder, and decoder. For a fair comparison, we used the same decoder as ATTOrderNet and SE-Graph. Due to space limitations, here we explain our sentence encoder and our story encoder. The sentence encoder uses BERT to encode sentences, while the story encoder uses a graph neural network for encoding stories. Sentence Encoder: SBERT-WK We use the fine-tuned, pre-trained SBERT-WK model to encode sentences. BERT contains several layers, each of which captures a different linguistic characteristic. SBERT-WK finds better sentence representations by fusing information from different layers (Wang and Kuo, 2020). The system geometrically analyzes the embedding space using a deep contextual model that is trained on both the word level and the sentence level, without further training. For each word in a sentence, it determines a unified word representation and then computes the final sentence embedding vector as a weighted average of the word representations, weighted by word importance. Even with a small embedding size of 768, SBERT-WK outperforms other methods by a significant margin on textual similarity tasks (Wang and Kuo, 2020). Story Encoder To use graph neural networks for encoding stories, input stories should be represented as graphs. We propose a pruned graph (PG) representation instead of SE-Graph (Yin et al., 2019) for encoding short stories. Nodes in PG are composed of sentences and entities. We replace pronouns with the entities they refer to (using Stanford's coreference tool (Lee et al., 2011)), since entities are not often repeated from one sentence to another during a short story; we will go into more detail in the experiments. We consider all nouns of an input story as entities at first. After that, we eliminate entities that do not occur more than once in the story. We can formalize our undirected pruned graphs as G = (V_s, V_e, E), where V_s indicates the sentence-level nodes, V_e denotes the entity-level nodes, and E represents edges. Edges in PG graphs are divided into two types: SS and SE. The SS type links two sentences in a story that have the highest or second-highest value of cosine similarity with each other; and the SE type links a sentence with an entity within it, labeled with the entity's role. Equation 4 shows the formula for calculating the cosine similarity, where CosSim is the cosine similarity and Emb_{s_i} represents the vector of sentence i: CosSim(Emb_{s_i}, Emb_{s_j}) = (Emb_{s_i} · Emb_{s_j}) / (||Emb_{s_i}|| ||Emb_{s_j}||) (4) SE edges are labeled according to the syntactic role of the entity in the sentence, such as a subject, an object, or other. The role that has the highest rank in an instance of an entity appearing multiple times is considered. The ranking is as follows: subject role, object roles, and other. The use of referring entities rather than pronouns is crucial. Thus, sentence nodes are linked to both sentence and entity nodes, whereas an entity node is not connected to any other entity nodes. For graph encoding, we use GRN, which has been found effective for various kinds of graph encoding tasks. The GRN used in our PG is the same as the GRN in (Yin et al., 2019), so we do not explain it. 
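A minimal sketch of the pruned-graph construction described above is given below, assuming toy embeddings in place of SBERT-WK vectors and hand-listed nouns in place of the coreference-resolved entities; it only illustrates the SS rule (each sentence keeps its two most cosine-similar neighbors, Eq. 4) and the SE rule (entities shared by at least two sentences).

```python
import numpy as np

# Sketch of the PG edge construction. Real PG uses SBERT-WK embeddings and
# Stanford coreference; here the embeddings are random and the "entities"
# are hand-listed nouns, purely for illustration.

def cos_sim(a, b):
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def build_pruned_graph(embeddings, entities_per_sentence, k=2):
    """embeddings: list of sentence vectors; entities_per_sentence: list of sets.
    Returns SS edges (each sentence linked to its k most similar sentences) and
    SE edges (sentence, entity) for entities occurring in at least two sentences."""
    n = len(embeddings)
    ss_edges = set()
    for i in range(n):
        sims = [(cos_sim(embeddings[i], embeddings[j]), j) for j in range(n) if j != i]
        for _, j in sorted(sims, reverse=True)[:k]:
            ss_edges.add(tuple(sorted((i, j))))          # undirected edge
    counts = {}
    for ents in entities_per_sentence:
        for e in ents:
            counts[e] = counts.get(e, 0) + 1
    shared = {e for e, c in counts.items() if c >= 2}     # entities in >= 2 sentences
    se_edges = {(i, e) for i, ents in enumerate(entities_per_sentence) for e in ents & shared}
    return ss_edges, se_edges

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    embs = [rng.normal(size=16) for _ in range(5)]        # stand-in for SBERT-WK vectors
    ents = [{"tom", "store"}, {"tom"}, {"store", "milk"}, {"milk"}, {"tom"}]
    ss, se = build_pruned_graph(embs, ents)
    print("SS edges:", sorted(ss))
    print("SE edges:", sorted(se))
```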
Majority Voting We combine the output of three methods to achieve better results in majority voting. Since the stories in Rocstories all have five sentences, there are 20 possible pair sentence orderings as follow: 1. s 1 s 2 or 2. s 2 s 1 , 3. s 1 s 3 or 4. s 3 s 1 , 5. s 1 s 4 or 6. s 4 s 1 , 7. s 1 s 5 or 8. s 5 s 1 , 9. s 2 s 3 or 10. s 3 s 2 , 11. s 2 s 4 or 12. s 4 s 2 , 13. s 2 s 5 or 14. s 5 s 2 , 15. s 3 s 4 or 16. s 4 s 3 , 17. s 3 s 5 or 18. s 5 s 3 , 19. s 4 s 5 or 20. s 5 s 4 Each suggested order for a story includes 10 of the above pair orderings, either of the two pair orderings that have an "or" between them. Through majority voting 4 , we can combine the outputs of three separate methods to generate a final order. According to the number of occurrences in each of the three output arrangements, we assign scores to each of the 20 possible pairings. As a result, each of these possible pairings is scored between 0 and 3. 0 indicates that this pairing does not appear in any of the three methods' outputs, while 3 indicates that it appears in all of them. In the end, all pairs with a greater score of 1 occur in the final orderings 5 . Indeed, these are ten pairs 6 , and with the chosen pairs, the sentences of the story are arranged uniquely. In the following subsection, we are proving that majority voting is a valid way to combine the outputs generated from three different methods for arranging sentences. By using contradiction, we demonstrate the validity of the majority voting method for combining three distinct methods of sentence ordering to arrange two sentences. Assuming the majority voting of three methods fails to create an unique order, then two orders are possible, s 1 s 2 , and s 2 s 1 . In the first case, s 1 appears before s 2 in two or more outputs of the methods, and in the second case, s 2 appears before s 1 in two or more outputs. Due to the three methods, this assumption causes a contradiction. To ordering more than two sentences, it can be proved by induction. Evaluation Metrics We use two standard metrics to evaluate the proposed model outputs that are commonly used in previous work: Kendall's tau and perfect match ratio, as described below. • Kendall's Tau (τ ) Kendal's Tau (Lapata, 2006) measures the quality of the output's ordering, computed as follows: Where N represents the sequence length (i.e. the number of sentences of a story, which is always equal to 5 for ROCStories), and the inversions return the number of exchanges of the predicted order with the gold order for reconstructing the correct order. τ is always between -1 and 1, where the upper bound indicates that the predicted order is exactly the same as the gold order. This metric correlates reliably with human judgments, according to Lapata (2006). • Perfect Match Ratio (PMR) According to this ratio, each story is considered as a single unit, and a ratio of the number of correct orders is calculated. Therefore no penalties are given for incorrect permutations (Gong et al., 2016). PMR is formulated mathematically as follows: where o i represents the output order and o * i indicates the gold order. N specifies the sequence length. Since the length of all the stories of ROCStories is equal to 5, N in this study is always equal to 5. PMR values range from 0 to 1, with a higher value indicating better performance. In (Golestani et al., 2021), we developed the Sentence-level Language Model (SLM) for Sentence Ordering, consisting of a Sentence Encoder, a Story Encoder, and a Sentence Organizer. 
The sentence encoder encodes sentences into a vector using a fine-tuned pre-trained BERT. Hence, the embedding pays more attention to the sentence's crucial parts. Afterward, the story encoder uses a decoder-encoder architecture to learn the sentencelevel language model. The learned vector from the hidden state is decoded, and this decoded vector is utilized to indicate the following sentence's candidate. Finally, the sentence organizer uses the cosine similarity as the scoring function in order to sort the sentences. An attention-based ranking framework is presented in (Kumar et al., 2020) to address the task. The model uses a bidirectional sentence encoder and a self-attention-based transformer network to endcode paragraphs. In (Yin et al., 2020), an enhancing pointer network based on two pairwise ordering prediction modules, The FUTURE and HISTORY module, is employed to decode paragraphs. Based on the candidate sentence, the FU-TURE module predicts the relative positions of other unordered sentences. Although, the HIS-TORY module determines the coherency between the candidate and previously ordered sentences. And lastly, Prabhumoye et al. (2020) designed B-TSort, a pairwise ordering method, which is the current state-of-the-art method for sentence ordering. This method benefits from BERT and graph-based networks. Based on the relative pairwise ordering, graphs are constructed. Finally, the global order is derived by a topological sort algorithm on the graph. Setting For a fair comparison, we follow Yin et al. (2019)'s settings. Nevertheless, we use SBERT-WK's 768dimension vectors for sentence embedding. Furthermore, the state sizes for sentence nodes are set to 768 in the GRN; The Batch size is 32. In preprocessing, we use Stanford's tool (Lee et al., 2011) to replace pronouns with the referring entities. Results In this paper, we propose a new method based on graph networks for sentences ordering short stories called Pruned Graph (PG). In order to achieve this, we propose a new method for creating edges between sentences (by calculating the cosine similarity between sentences), and we create a better sentence-entity graph for short stories by replacing pronouns with the relevant entities. Besides, to make a better comparison, we also teach the following cases: 1. All nodes in the graph are of the sentence type, and the graph is fully connected. In other words, we train ATTOrderNet on ROC-Stories 7 . 2. The nodes include sentence and entity nodes, and each sentence's node has the edge over all other sentences' nodes (semi fully connected SE-Graph 8 ). 3. The network comprises sentence and entity nodes, and every two sentences with at least one entity in common are connected (SE-Graph 9 ). 7 (Cui et al., 2018) did not train ATTOrderNet on the ROC-Stories dataset. 8 Entity nodes are not connected to all nodes. 9 We train SE-Graph on ROCStories since (Yin et al., 2019) did not. Replacing pronouns with the relevant entities in SE-Graph (SE-Graph + Co-referencing). 5. Similar to PG, but each sentence is connected to a sentence with the highest cosine similarity (semi P G 1 ). 6. Similar to PG; however, each sentence is connected to three other sentences based on their cosine similarity (semi P G 3 ). Note that in the above methods, where the graph also contains the nodes of the entity, there is an edge between a sentence and an entity within it 11 . Table 1 reports the results of Pruned Graph (PG) and the above seven methods. 
To get the training, validation, and testing datasets, we randomly split ROCStories into 8:1:1. Therefore, the training set includes 78,529 stories, the validation set contains 9,816 stories, and the testing set consists of 9,817 stories. As shown in Table 1, our PG beats all seven other methods. The results show that all three of our innovations to the graph-based method have improved the performance. Based on our analysis, the SBERT-WK sentence encoder is more beneficial than the Bi-LSTM. Our experiments also find that using referring entities instead of pronouns is helpful to create a more effective sentence-entity graph. Additionally, it indicates that connecting each sentence to two others using cosine similarity is efficient for encoding a story. Table 2 reports the results of the proposed method of this paper in comparison with competitors. When compared with ATTOrderNet, PG improved the Tau by over 8.5% as well as PMR by 13.5%. Furthermore, the Tau is increased by 10.8% and the PMR by more than 16.8% compared to SE-Graph. PG outperforms the state-of-the-art on ROCStories with a more than 1.8% increase in PMR and a more than 3.9% improvement in τ. Finally, we merged the outputs of three methods, namely Enhancing PtrNet, B-TSort, and our Pruned Graph, using the majority voting method. Table 3 (Results of combining Enhancing PtrNet, B-TSort, and Pruned Graph using majority voting: Method: combination, τ = 0.8470, PMR = 0.5488) shows the results of the combination, which improves the PMR and τ criteria by more than 5% and 4.3% on ROCStories, respectively. Conclusion This paper introduced a graph-based neural framework to solve the sentence ordering task. This framework takes a set of randomly ordered sentences and outputs a coherent order of the sentences. The results demonstrate that SBERT-WK is a reliable model to encode sentences. Our analysis examined how the method is affected by using a Bi-LSTM model in the sentence encoder component. In addition, we found that replacing pronouns with their referring entities supplies a more informative sentence-entity graph to encode a story. The experimental results indicate that our proposed graph-based neural model significantly outperforms the baselines on the ROCStories dataset. Furthermore, we recommend a method for combining different methods of sentence ordering based on majority voting that achieves state-of-the-art performance in PMR and τ scores. In the future, we plan to apply the model trained on the sentence ordering task to tackle other tasks, including text generation, dialogue generation, text completion, retrieval-based QA, and extractive text summarization.
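To make the combination scheme and the two evaluation metrics concrete, the sketch below implements pairwise majority voting over several predicted orders together with Kendall's τ and PMR. The paper does not spell out how the chosen pairs are turned back into a sequence, so counting pairwise wins (Copeland-style) is an assumption made here for illustration.

```python
from itertools import combinations

# Hedged sketch of the pairwise majority-voting combination and of the two
# evaluation metrics described above. Turning the majority relation back into a
# single order via pairwise "wins" is an assumption, not the paper's exact rule.

def majority_vote(orders):
    """orders: list of predicted orders (each a list of sentence indices)."""
    n = len(orders[0])
    wins = [0] * n
    for i, j in combinations(range(n), 2):
        votes_i_before_j = sum(o.index(i) < o.index(j) for o in orders)
        if votes_i_before_j > len(orders) / 2:
            wins[i] += 1
        else:
            wins[j] += 1
    # more pairwise wins => earlier position in the combined order
    return sorted(range(n), key=lambda s: -wins[s])

def kendall_tau(pred, gold):
    """tau = 1 - 2 * inversions / (N choose 2)."""
    n = len(gold)
    pos = {s: r for r, s in enumerate(pred)}
    inversions = sum(pos[gold[i]] > pos[gold[j]]
                     for i, j in combinations(range(n), 2))
    return 1 - 2 * inversions / (n * (n - 1) / 2)

def pmr(preds, golds):
    """Perfect Match Ratio over a set of stories."""
    return sum(p == g for p, g in zip(preds, golds)) / len(golds)

if __name__ == "__main__":
    gold = [0, 1, 2, 3, 4]
    outputs = [[0, 1, 2, 3, 4], [0, 2, 1, 3, 4], [0, 1, 2, 4, 3]]
    combined = majority_vote(outputs)
    print(combined, kendall_tau(combined, gold), pmr([combined], [gold]))
```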
2022-03-15T01:16:15.281Z
2022-03-13T00:00:00.000
{ "year": 2022, "sha1": "0ce5828ab07669377a2e056b4310ae99c2afa614", "oa_license": null, "oa_url": null, "oa_status": null, "pdf_src": "Arxiv", "pdf_hash": "0ce5828ab07669377a2e056b4310ae99c2afa614", "s2fieldsofstudy": [ "Computer Science" ], "extfieldsofstudy": [ "Computer Science" ] }
234444431
pes2o/s2orc
v3-fos-license
The Charging Characteristics of Electric Vehicle Group Under the Mode of Shared Travel In the future, travel revolutions such as shared travel will bring great changes to the structure of transportation, and the charging characteristics of electric vehicles will also be affected. This paper takes the new travel mode as the research background and models the driving and charging behavior of the electric vehicle group. Then multi-agent technology is used to build a large-scale electric vehicle group simulation model. This model fully considers the capacity constraints of the traffic network and charging stations, and can be used to simulate the actual behavior of the electric vehicle group. Finally, the charging load curve of the electric vehicle group under the new travel mode is obtained based on simulation. The result shows that as the proportion of shared travel increases, the daily average charging load keeps increasing, and the charging load peak comes earlier and lasts longer. Introduction Travel upgrades such as shared travel and electric vehicles (EVs) are increasingly affecting the energy structure of transportation. According to the prediction of the BP Energy Outlook, the number of EVs worldwide will increase to 350 million by 2040, accounting for 15 percent of total vehicles, and approximately 25 percent of the passenger vehicle mileage will be provided by EVs [1]. However, owing to the uncertainty of EV charging load distribution in time and space with the increase of EV penetration and the change of travel mode, the power system will face unknown challenges. As a new generation of transportation, EV charging characteristics have received wide attention. Reference [2] describes the random behavior of an individual EV and analyzes the change of its charging status. Reference [3] designs a smart energy management system to solve the charging problem of a large-scale EV group. Since the fundamental function of an EV is to serve people's travel, it is necessary to consider its traffic characteristics [4]. Reference [5] uses a Markov model to simulate the driving mode of EVs and analyzes the influence of their charging load on the power system. References [6]-[7] use real traffic network data for simulation and predict the charging demand of an urban EV group. At present, many references have conducted research on the driving and charging characteristics of EVs, but when it comes to the modeling, the constraints of traffic capacity and charging station capacity have not been well considered. It is not reasonable to carry out simulation by setting parameters only with uniform values such as average velocity and average charging power. In addition, we have not seen any reference investigating the charging characteristics of the EV group in the shared travel mode. In the traditional mode, private vehicles are idle most of the time. However, the utilization rate of vehicles in the shared travel mode is greatly improved, and the travel service capacity of a shared vehicle is equivalent to about 7-15 ordinary private vehicles [8]. Therefore, the driving behavior and charging behavior of the EV group in the new mode will be quite different. Given the above problems, this paper will conduct research on the charging characteristics of the EV group in the shared travel mode. Firstly, the behaviors of EVs under the new travel mode are modeled. 
The capacity constraints of the traffic network and the charging stations are fully considered. Multi-agent technology is then used to build the agent model that simulates the driving and charging behavior of EVs. Finally, actual traffic network data are selected for simulation to analyze the influence of travel upgrades on the charging characteristics of the EV group.

Description and model of EV behavior

The behavior of an EV can generally be divided into four states: idle, driving, charging, and vehicle-to-grid (V2G). Since the study focuses on the mobile load characteristics of EVs, we pay more attention to charging rather than discharging. In addition, in the shared travel mode adopted by the simulation, the driving utilization rate of EVs is significantly improved, so the proportion of idle EVs that can participate in V2G is relatively low. Therefore, the V2G state is not included in the model.

Traffic Characteristics

The traffic network restricts an EV mainly through its driving velocity, which varies greatly under different traffic conditions. When the road is clear, the main constraint on velocity is the velocity limit of the road grade; when the traffic capacity is reached during the travel peak, the velocity decreases significantly. Therefore, the influence of the traffic network is reflected in two constraints: the design velocity and the road capacity of the different road grades. It is obviously unreasonable to use a uniform average velocity in the simulation, so this paper sets the driving velocity based on the road resistance function of the US Bureau of Public Roads (BPR) [9], as shown in (1):

t_i = t_i^0 \left[ 1 + \alpha \left( \frac{q_i}{C_i} \right)^{\beta} \right]   (1)

where t_i is the driving time of a vehicle on road i with traffic flow q_i; t_i^0 is the free-flow driving time when road i is clear; C_i is the capacity of road i; and \alpha and \beta are the retardation coefficients of the BPR function. The corresponding velocity is given by (2):

v_i = \frac{v_i^0}{1 + \alpha \left( \frac{q_i}{C_i} \right)^{\beta}}   (2)

where v_i is the velocity of the vehicle on road i when the traffic flow is q_i, and v_i^0 is the free velocity of the vehicle when road i is clear, which is related to the design velocity of the road.

In addition, the shared travel mode mainly changes the way people travel, not the time. In the actual travel scene, owing to the similarity of people's living habits, the travel demand in the daytime is obviously greater than that in the evening, and the travel times are concentrated to a certain degree, such as the rush hours of the commuting period. Therefore, even if the travel pattern changes significantly, travel demand and its distribution are still determined by people's own daily schedules, and the temporal distribution of traffic flow will not be affected much. Thus, the traffic flow in the shared travel mode basically conforms to the current temporal distribution of local traffic flow. For example, Fig. 1 shows the temporal distribution of traffic flow from 0:00 to 24:00 in Guangzhou, China [10], where one period represents one hour.

In order to compare the influence of different travel modes, it is assumed that the total amount and temporal distribution of travel demand remain unchanged, and that the service capability of a vehicle in the shared mode is equivalent to that of 10 vehicles in the traditional mode [8]. If the total travel demand remains the same but the service capability of a single vehicle improves, the total number of vehicles required decreases correspondingly. When the proportion of the shared travel mode is k, the number of vehicles Q_k in the region should satisfy (3):

Q_k = (1 - k) Q_0 + \frac{k}{10} Q_0   (3)

where Q_0 is the quantity of vehicles when all vehicles are in the traditional travel mode.
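To make the road model above concrete, the following sketch evaluates the BPR-type road resistance of (1)-(2) and the fleet size of (3). It is a minimal illustration in Python rather than part of the authors' JADE model; the function names, the coefficients alpha = 0.15 and beta = 4, and the example road values are assumptions, since the paper does not state the BPR coefficients.

    # Minimal sketch of the BPR-style road resistance in (1)-(2) and the
    # fleet-size relation (3). alpha and beta are assumed typical BPR values.

    def bpr_travel_time(t_free, flow, capacity, alpha=0.15, beta=4.0):
        """Driving time on a road given its free-flow time, current flow and capacity."""
        return t_free * (1.0 + alpha * (flow / capacity) ** beta)

    def bpr_velocity(v_free, flow, capacity, alpha=0.15, beta=4.0):
        """Velocity on a road; drops below the free velocity as flow approaches capacity."""
        return v_free / (1.0 + alpha * (flow / capacity) ** beta)

    def shared_fleet_size(q0, k, service_ratio=10):
        """Vehicle count under a shared-travel proportion k, as in (3):
        one shared vehicle replaces `service_ratio` traditional vehicles."""
        return (1.0 - k) * q0 + (k / service_ratio) * q0

    # Example: a 60 km/h road at 80% of capacity, and a 300,000-vehicle region at k = 0.5.
    print(bpr_velocity(60.0, flow=800, capacity=1000))   # roughly 56.5 km/h
    print(shared_fleet_size(300_000, k=0.5))             # 165,000 vehicles

With these helpers, the map agent described later only needs the current flow of each road to keep the per-road driving times updated after every simulation step.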
Charging Characteristics

The charging characteristics of an EV are constrained by the capacity of charging stations, mainly through the charging power and the charging time. Charging stations are usually equipped with fast-charging and slow-charging spots. During slow charging, the charging power basically depends on the power of the on-board charger (OBC), which is generally no more than 10 kW. During fast charging, the power is determined by the EV's own Battery Management System (BMS) and the output power of the charging facility: provided the facility can supply it, the power equals the fast-charging power supported by the EV itself, which differs greatly among EV models. As for the charging time, an EV may not be able to start charging immediately after reaching the charging station when the charging capacity is limited, for example at peak moments. Unlike a fuel vehicle, which takes only a few minutes to refuel, even the Tesla Model 3, currently the EV with the highest fast-charging power, takes about half an hour to recharge to 80%. Therefore, the queuing time before charging has a great impact on the charging time and cannot be ignored.

To sum up, both fast charging and slow charging are included in the modeling. When an EV arrives at the charging station, priority is given to a fast-charging spot; without exceeding the upper limit of the output power of the charging facility, the charging power equals the fast-charging power supported by the EV model. When fast charging is not available, the EV uses slow charging and the power equals the OBC power. When there is no spare charging facility, the EV has to queue up.

Multi-agent simulation model

As the scale of the studied EV group expands, it becomes hard to study EV behavior in a conventional linear manner. Multi-agent technology has great advantages in simulating the interaction behavior of an EV group [11]-[12]. Therefore, this paper uses the Java Agent Development Framework (JADE) platform for modeling, to simulate the driving and charging behavior of the EV group. The model consists of four types of agents: map agent, time agent, charging agent, and vehicle agent. The information communication between agents is shown in Fig. 2.

Fig. 2. Information communication among agents in the model.

Basic Assumptions of the Model

The following basic assumptions are made to facilitate the simulation:

1) A topological graph in the sense of graph theory is used as the simulation map. The elements of the traffic network are simplified into two kinds of objects: edges and nodes. An edge represents a road, a node represents an intersection or a road end, and the weighted degree of a node is the sum of the weights of its connected edges, as shown in (4); here the weighted degree is calculated by taking the road grade as the edge weight:

G = \sum_{i=1}^{n} g_i   (4)

where G is the weighted degree of the node, n is the total number of edges connected to the node, and g_i is the weight of edge i.

2) The charging facilities are set uniformly through the concept of charging nodes. Every charging node has a certain number of charging spots, and each spot can only serve one EV at a time. Public charging facilities should usually be located at hubs with convenient transportation [13]. In the traffic network, the nodes with a large weighted degree are connected to relatively more roads and carry larger traffic flow. Therefore, the charging nodes are preferentially set at these nodes, and a charging node with a larger weighted degree also has more charging spots.
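As an illustration of assumptions 1) and 2), the sketch below computes the weighted degree of (4) for a toy topological graph and then places a given number of charging spots at the highest-degree nodes. The proportional allocation rule, the example network, and the grade weights are illustrative assumptions; the paper only states that nodes with a larger weighted degree receive more charging spots.

    # Sketch of assumptions 1)-2): weighted node degree with road grade as
    # the edge weight, and charging spots allocated to high-degree nodes.

    def weighted_degrees(edges):
        """edges: list of (node_a, node_b, grade_weight). Returns {node: G} as in (4)."""
        degree = {}
        for a, b, g in edges:
            degree[a] = degree.get(a, 0) + g
            degree[b] = degree.get(b, 0) + g
        return degree

    def place_charging_spots(edges, n_nodes, total_spots):
        """Pick the n_nodes highest-degree nodes and split spots in proportion to G
        (an assumed allocation rule; the paper only requires 'more spots at larger G')."""
        degree = weighted_degrees(edges)
        chosen = sorted(degree, key=degree.get, reverse=True)[:n_nodes]
        weight_sum = sum(degree[n] for n in chosen)
        return {n: round(total_spots * degree[n] / weight_sum) for n in chosen}

    # Toy network: edge weights stand for road grades (higher = more important road).
    edges = [("A", "B", 3), ("B", "C", 2), ("C", "D", 1), ("B", "D", 3), ("A", "D", 1)]
    print(weighted_degrees(edges))                                  # {'A': 4, 'B': 8, 'C': 3, 'D': 5}
    print(place_charging_spots(edges, n_nodes=2, total_spots=20))   # {'B': 12, 'D': 8}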
3) The simulated vehicles are individual transportation such as EVs and fuel vehicles; public transportation is not involved. Vehicle agents are set up to simulate EVs and fuel vehicles. An EV has three states: idle, driving, and charging. In the idle state, the position and the state of charge (SOC) of the EV remain unchanged; the driving state reflects the decrease of the SOC and the change of position; the charging state reflects the increase of the SOC with the position unchanged. Fuel vehicles only have the idle state and the driving state.

4) As for travel selection, more vehicles are likely to pass through traffic hub nodes. Therefore, a vehicle has a higher probability of choosing a node with a large weighted degree as its driving target. When planning the driving path, the shortest travel time is taken as the path selection criterion.

Map Agent

The main function of the map agent is to provide travel services such as location initialization, driving path planning, and road condition information. In the simulation, every time a vehicle enters or leaves a road, the map agent records it. According to the current traffic flow of each road, the map agent uses (1) to calculate the driving time required for each road and keeps it updated. When a travel request from a vehicle agent is received, the map agent provides the driving path. When a request for charging nodes is received, the map agent, considering the charging capacity limit, sends all charging nodes within a radius of 3 km for the vehicle agent to choose from.

Charging Agent

The charging agent provides the charging service for EVs and records the charging data. In the simulation, every charging node updates its number of EVs after each time step. When receiving a charging inquiry from an EV, the charging agent makes a judgment based on the current situation of the queried charging node: if any charging spot is available, the charging agent informs the EV that it can go there for charging; otherwise, the EV is informed that the charging spots are full. When the charging agent receives the information that an EV has arrived at a charging node and requests charging, the EV is told to queue up if the charging spots are full; if a spot is available, the EV is allowed to charge, priority is given to the fast-charging spot, and the charging time and charging power are recorded.

Time Agent

The time agent provides the update service of the simulation time and at the same time regulates the temporal distribution of traffic flow. The time agent calculates the number of vehicles that should be driving in each period according to the temporal distribution of traffic flow. During the simulation, the time agent continuously updates the number of vehicle agents in each state and makes corresponding adjustments, so that the number of vehicles driving in each period basically follows the temporal distribution of traffic flow. When the simulation time is over, the time agent sends a signal to the other agents to stop the simulation.
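To make the charging agent's decision rule concrete, the sketch below assigns an arriving EV to a fast-charging spot, a slow-charging spot, or the queue, following the fast-first rule described earlier. It is a Python sketch, not the authors' JADE code; the class layout and the numeric power limits are assumptions, except that the OBC power is kept under the 10 kW bound mentioned in the text.

    # Sketch of spot assignment at a charging node: fast spots first, then slow
    # spots, otherwise the EV joins the queue. Names and limits are illustrative.

    from dataclasses import dataclass, field
    from collections import deque

    @dataclass
    class ChargingNode:
        fast_spots_free: int          # free fast-charging spots
        slow_spots_free: int          # free slow-charging spots
        fast_output_kw: float = 120.0 # assumed output limit of a fast facility
        obc_kw: float = 7.0           # assumed on-board charger power (<= 10 kW)
        queue: deque = field(default_factory=deque)

    def assign_charging(node, ev_id, ev_fast_kw):
        """Return (decision, charging power in kW) for an arriving EV."""
        if node.fast_spots_free > 0:
            node.fast_spots_free -= 1
            # power is the EV's supported fast power, capped by the facility output
            return "fast", min(ev_fast_kw, node.fast_output_kw)
        if node.slow_spots_free > 0:
            node.slow_spots_free -= 1
            return "slow", node.obc_kw
        node.queue.append(ev_id)      # all spots busy: queue before charging
        return "queued", 0.0

    node = ChargingNode(fast_spots_free=1, slow_spots_free=1)
    print(assign_charging(node, "ev-1", ev_fast_kw=150.0))  # ('fast', 120.0)
    print(assign_charging(node, "ev-2", ev_fast_kw=50.0))   # ('slow', 7.0)
    print(assign_charging(node, "ev-3", ev_fast_kw=50.0))   # ('queued', 0.0)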
Vehicle Agent

There are multiple vehicle agents in the simulation, and each vehicle agent represents one EV or fuel vehicle. The following uses an EV as an example to introduce the algorithm. The EV is first initialized and then enters the idle state. An EV in the idle state first checks its SOC: when the SOC is less than 20%, the EV asks the map agent for the nearby charging nodes and drives to the chosen node for charging; if the SOC is sufficient, the EV remains idle until it receives a driving task and then changes to the driving state. An EV in the driving state updates its velocity according to (2) at every simulation step and notifies the map agent to update the traffic flow every time it enters a new road. When it arrives at the destination, the EV stops driving, enters the idle state, waits for the next trip, and decides whether it needs to be recharged based on its SOC. After reaching a charging node, the EV starts charging or waits in line according to the current condition of the charging spots. After charging, the EV checks its driving tasks: if there is an unfinished task, it enters the driving state and continues the trip; otherwise, it enters the idle state. Fig. 3 shows the algorithm flow.
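The sketch below condenses the vehicle agent logic of Fig. 3 into a single step function for one EV. The 20% SOC threshold comes from the text; the class layout, the energy accounting, and the 95% stop-charging threshold are assumptions made for the example, and the driving velocity would come from (2) in the full model.

    # Sketch of one simulation step of an EV vehicle agent (cf. Fig. 3).
    # The 20% SOC threshold is from the text; everything else is illustrative.

    IDLE, DRIVING, CHARGING = "idle", "driving", "charging"

    class EVAgent:
        def __init__(self, soc=0.9, battery_kwh=60.0, kwh_per_km=0.15):
            self.state = IDLE
            self.soc = soc
            self.battery_kwh = battery_kwh
            self.kwh_per_km = kwh_per_km
            self.trip_km_left = 0.0

        def step(self, dt_h, velocity_kmh=0.0, charge_kw=0.0, has_task=False):
            if self.state == IDLE:
                if self.soc < 0.20:          # low SOC: head for a nearby charging node
                    self.state = CHARGING
                elif has_task:
                    self.trip_km_left = 10.0  # assumed trip length for the example
                    self.state = DRIVING
            elif self.state == DRIVING:
                km = velocity_kmh * dt_h      # velocity from (2) in the full model
                self.soc -= km * self.kwh_per_km / self.battery_kwh
                self.trip_km_left -= km
                if self.trip_km_left <= 0:
                    self.state = CHARGING if self.soc < 0.20 else IDLE
            elif self.state == CHARGING:
                self.soc = min(1.0, self.soc + charge_kw * dt_h / self.battery_kwh)
                if self.soc >= 0.95:          # assumed stop-charging threshold
                    self.state = DRIVING if has_task else IDLE

    ev = EVAgent(soc=0.22)
    ev.step(dt_h=1/60, velocity_kmh=40.0, has_task=True)  # receives a task and starts driving
    print(ev.state, round(ev.soc, 3))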
Example analysis

The traffic network of Tianhe district, Guangzhou, China, is selected as the simulation map. Its current quantity of small passenger vehicles is about 300,000. The temporal distribution of traffic flow is shown in Fig. 1, and the driving demand of vehicles under the different travel modes obeys this distribution. The simulation parameters are shown in Table 1.

Table 1. Simulation parameters.
Charging: 22 charging nodes with a total of 862 charging spots.
Time: the simulation starts at 0:00 and ends at 24:00; the time step is 60 s.
Vehicle: the EV penetration is 15%, and the total number of vehicles under the different travel mode ratios is calculated according to (3).

Comparing Fig. 1 and Fig. 4, there is a certain correlation between the charging load curve and the traffic flow curve. Judging from the peak and valley periods, the temporal distribution of the charging load lags behind the traffic flow for a period of time. In addition, as the proportion of shared travel increases, the daily average charging load keeps increasing, the charging load peak comes earlier and lasts longer, and the correlation with the traffic flow curve becomes higher.

In the traditional mode, the EV driving utilization rate is low; the behavior of a single EV depends completely on the schedule of its owner, and the behavior pattern is relatively dispersed. In contrast, the shared travel mode brings higher driving efficiency and a smaller vehicle quantity, and one EV serves multiple users. The behavior of the EV group is therefore more similar in probability, so the driving and charging times are more concentrated, and the correlation between the charging load curve and the traffic flow curve is higher. At the same time, most EVs in the shared travel mode are in the driving state during the travel peak and then cause the charging peak. However, owing to the endurance mileage, the charging peak usually lags behind the travel peak for a period of time, which is related to the average rate of power consumption. An EV in the shared travel mode makes more trips on average, so its average rate of power consumption is faster. Therefore, the higher the proportion of the new travel mode, the earlier the charging peak.

As for the daily average charging load, since the travel demand is similar, the total driving mileage under the different travel modes is also similar, so the total power consumption is approximately the same. However, under the traditional mode, the quantity of EVs is large and their utilization rate is low, so the average power consumption of a single EV is much smaller, and many EVs with a higher initial SOC do not even need to be recharged. On the other hand, shared EVs almost all have a charging demand after a day's driving, and some with a small battery capacity even need to be charged more than once. Thus, the total number of charging events is larger and the daily average charging load in the shared travel mode is higher. Coupled with the capacity constraint of the charging stations, the charging limitation of EVs in the new travel mode is more serious during the charging peak. Therefore, although the peak values are similar, the charging peak in the shared travel mode lasts longer and fades more slowly.

Conclusion

This paper takes shared travel as the background and uses multi-agent simulation to study the charging characteristics of the EV group. The behavior of the EV group in the shared travel mode is described and modeled in detail, and a large-scale EV group simulation model is built with multi-agent technology. Through simulation, the influence of shared travel on the EV charging load is preliminarily identified. As the proportion of shared travel increases, the daily average charging load and the variation range of the EV group charging load will increase, and the charging peak will come earlier and last longer, which can easily bring adverse effects to the power system. In future work, we will conduct in-depth research on the impact of the travel revolution and study other travel upgrades such as autonomous driving and the further increase of EV penetration. At the same time, the simulation modeling of the charging station and the traffic network will be improved to better simulate the actual charging behavior of the EV group.